Math Is Fun Forum

#2076 2024-03-01 00:02:19

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2078) Syringe

Gist

A syringe is a medical instrument for injecting or drawing off liquid, consisting of a hollow cylinder with a plunger inside and, usually, a thin hollow needle attached; similar devices are used in gardening, cooking and other fields.

Details

A syringe is a simple reciprocating pump consisting of a plunger (though in modern syringes, it is actually a piston) that fits tightly within a cylindrical tube called a barrel. The plunger can be linearly pulled and pushed along the inside of the tube, allowing the syringe to take in and expel liquid or gas through a discharge orifice at the front (open) end of the tube. The open end of the syringe may be fitted with a hypodermic needle, a nozzle or tubing to direct the flow into and out of the barrel. Syringes are frequently used in clinical medicine to administer injections, infuse intravenous therapy into the bloodstream, apply compounds such as glue or lubricant, and draw/measure liquids. There are also prefilled syringes (disposable syringes marketed with liquid inside).

The word "syringe" is derived from the Greek (syrinx, meaning "Pan flute", "tube").

Medical syringes

Sectors in the syringe and needle market include disposable and safety syringes, injection pens, needleless injectors, insulin pumps, and specialty needles. Hypodermic syringes are used with hypodermic needles to inject liquids or gases into body tissues, or to remove them from the body. Injection of air into a blood vessel is hazardous, as it may cause an air embolism; preventing embolisms by removing air from the syringe is one of the reasons for the familiar image of holding a hypodermic syringe pointing upward, tapping it, and expelling a small amount of liquid before an injection into the bloodstream.

The barrel of a syringe is made of plastic or glass, usually has graduated marks indicating the volume of fluid in the syringe, and is nearly always transparent. Glass syringes may be sterilized in an autoclave. Plastic syringes can be constructed as either two-part or three-part designs. A three-part syringe contains a plastic plunger/piston with a rubber tip to create a seal between the piston and the barrel, whereas a two-part syringe is manufactured to create a perfect fit between the plastic plunger and the barrel, forming the seal without the need for a separate synthetic rubber piston. Two-part syringes have traditionally been used in European countries to prevent the introduction of additional materials such as the silicone oil needed to lubricate three-part plungers.

Most modern medical syringes are plastic because they are cheap enough to dispose of after a single use, reducing the risk of spreading blood-borne diseases. Reuse of needles and syringes has caused the spread of diseases, especially HIV and hepatitis, among intravenous drug users. Syringes are also commonly reused by diabetics, who can go through several in a day with multiple daily insulin injections, which becomes an affordability issue for many. Even though the syringe and needle are used by only a single person, this practice is still unsafe, as it can introduce bacteria from the skin into the bloodstream and cause serious and sometimes lethal infections. In medical settings, single-use needles and syringes effectively reduce the risk of cross-contamination.

Medical syringes are sometimes used without a needle for orally administering liquid medicines to young children or animals, or milk to small young animals, because the dose can be measured accurately and it is easier to squirt the medicine into the subject's mouth instead of coaxing the subject to drink out of a measuring spoon.

Tip designs

Syringes come with a number of designs for the area in which the needle locks to the syringe body. Perhaps the best known of these is the Luer lock, which joins the two with a simple twist.

Bodies featuring a small, plain connection are known as slip tips and are useful for when the syringe is being connected to something not featuring a screw lock mechanism.

Similar to this is the catheter tip, which is essentially a slip tip but longer and tapered, making it good for pushing into things where the plastic taper can form a tight seal. These can also be used for rinsing out wounds or large abscesses in veterinary use.

There is also an eccentric tip, where the nozzle at the end of the syringe is not in the centre of the syringe but at the side. This causes the needle attached to the syringe to lie almost in line with the walls of the syringe itself, and it is used when the needle needs to be nearly parallel with the skin (when injecting into a surface vein or artery, for example).

Syringes for insulin users are designed for standard U-100 insulin. The dilution of insulin is such that 1 mL of insulin fluid has 100 standard "units" of insulin. Since insulin vials are typically 10 mL, each vial has 1000 units.
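
As a quick check of that arithmetic, here is a minimal Python sketch (the function names are illustrative, not from any dosing standard) that converts a prescribed dose in units into the volume drawn into a U-100 syringe and counts the whole doses in a vial, using the 100 units/mL and 10 mL figures quoted above:

    # U-100 insulin: 100 units per millilitre (the figure quoted above).
    U100_UNITS_PER_ML = 100

    def dose_volume_ml(dose_units, units_per_ml=U100_UNITS_PER_ML):
        """Volume in mL to draw up for a dose given in insulin units."""
        return dose_units / units_per_ml

    def doses_per_vial(vial_ml, dose_units, units_per_ml=U100_UNITS_PER_ML):
        """Whole doses obtainable from one vial (ignores dead space and priming)."""
        total_units = vial_ml * units_per_ml      # e.g. 10 mL x 100 units/mL = 1000 units
        return int(total_units // dose_units)

    print(dose_volume_ml(30))       # 0.3 -> a 30-unit dose occupies 0.3 mL
    print(doses_per_vial(10, 30))   # 33  -> a 10 mL (1000-unit) vial yields 33 such doses

This is also why insulin syringes are marked directly in units, as noted in the list below: the user never has to do the millilitre conversion by hand.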

Insulin syringes are made specifically for self-injection and have user-friendly features:

* shorter needles, as insulin injections are subcutaneous (under the skin) rather than intramuscular,
* finer gauge needles, for less pain,
* markings in insulin units to simplify drawing a measured dose of insulin, and
* low dead space to reduce complications caused by improper drawing order of different insulin strengths.

Multishot needle syringes

There are needle syringes designed to reload from a built-in tank (container) after each injection, so they can make several or many injections on a single filling. These are not used much in human medicine because of the risk of cross-infection via the needle. Exceptions are the personal insulin autoinjector used by diabetic patients and dual-chambered syringe designs intended to deliver a prefilled saline flush solution after the medication.

Venom extraction syringes

Venom extraction syringes are different from standard syringes, because they usually do not puncture the wound. The most common types have a plastic nozzle which is placed over the affected area, and then the syringe piston is pulled back, creating a vacuum that allegedly drags out the venom. Attempts to treat snakebites in this way are specifically advised against, as they are ineffective and can cause additional injury.

Syringes of this type are sometimes used for extracting human botfly larvae from the skin.

Oral

An oral syringe is a measuring instrument used to accurately measure doses of liquid medication, expressed in millilitres (mL). They do not have threaded tips, because no needle or other device needs to be screwed onto them. The contents are simply squirted or sucked from the syringe directly into the mouth of the person or animal.

Oral syringes are available in various sizes, from 1–10 mL and larger. An oral syringe is typically purple in colour to distinguish it from a standard injection syringe with a Luer tip. The sizes most commonly used are 1 mL, 2.5 mL, 3 mL, 5 mL and 10 mL.

Dental syringes

A dental syringe is a syringe used by dentists for the injection of an anesthetic. It consists of a breech-loading syringe fitted with a sealed cartridge containing an anesthetic solution.

In 1928, Bayer Dental developed, coined and produced a sealed cartridge system under the registered trademark Carpule®. The current trademark owner is Kulzer Dental GmbH.

Carpules have long been reserved for anesthetic products for dental use. A carpule is essentially a flask without a bottom; the bottom is replaced by an elastomer plug that can slide within the body of the cartridge, and this plug is pushed by the plunger of the syringe. The neck is closed with a rubber cap. The dentist places the cartridge directly into a stainless steel syringe fitted with a double-pointed, single-use needle. The needle tip on the cartridge side punctures the capsule, and the piston pushes the product through. There is therefore no contact between the product and the ambient air during use.

The ancillary tool (generally part of a dental engine) used to supply water, compressed air or mist (formed by combination of water and compressed air) to the oral cavity for the purpose of irrigation (cleaning debris away from the area the dentist is working on), is also referred to as a dental syringe or a dental irrigation nozzle.

A 3-way syringe/nozzle has separate internal channels supplying air, water or a mist created by combining the pressurized air with the waterflow. The syringe tip can be separated from the main body and replaced when necessary.

In the UK and Ireland, manually operated hand syringes are used to inject lidocaine into patients' gums.

Dose-sparing syringes

A dose-sparing syringe is one which minimises the amount of liquid remaining in the barrel after the plunger has been depressed. These syringes feature a combined needle and syringe, and a protrusion on the face of the plunger to expel liquid from the needle hub. Such syringes were particularly popular during the COVID-19 pandemic as vaccines were in short supply.

Regulation

In some jurisdictions, the sale or possession of hypodermic syringes may be controlled or prohibited without a prescription, due to their potential use with illegal intravenous drugs.

Non-medical uses

The syringe has many non-medical applications.

Laboratory applications

Laboratory grease, commonly used to lubricate ground glass joints and stopcocks, is sometimes loaded in syringes for easy application.

Some chemical compounds, such as thermal paste and various glues, e.g. epoxy, are sold in prepackaged syringes.

Medical-grade disposable hypodermic syringes are often used in research laboratories for convenience and low cost. Another application is to use the needle tip to add liquids to very confined spaces, such as when washing out scientific apparatus. They are often used for measuring and transferring solvents and reagents where high precision is not required. Alternatively, microliter syringes can be used to measure and dose chemicals very precisely by using a small-diameter capillary as the syringe barrel.

The polyethylene construction of these disposable syringes usually makes them rather chemically resistant. There is, however, a risk of the contents leaching plasticizers from the syringe material. Non-disposable glass syringes may be preferred where this is a problem. Glass syringes may also be preferred where a very high degree of precision is important (e.g. quantitative chemical analysis), because their engineering tolerances are tighter and the plungers move more smoothly. In these applications, the transfer of pathogens is usually not an issue.

Used with a long needle or cannula, syringes are also useful for transferring fluids through rubber septa when atmospheric oxygen or moisture is being excluded. Examples include the transfer of air-sensitive or pyrophoric reagents such as phenylmagnesium bromide and n-butyllithium, respectively. Glass syringes are also used to inject small samples for gas chromatography (1 microliter) and mass spectrometry (10 microliters). Syringe drivers may be used with the syringe as well.

Cooking

Some culinary uses of syringes are injecting liquids (such as gravy) into other foods, or for the manufacture of some candies.

Syringes may also be used when cooking meat to enhance flavor and texture by injecting juices inside the meat, and in baking to inject filling into a pastry. It is common for these syringes to be made of stainless steel components, including the barrel; this facilitates easy disassembly and cleaning.

Others

Syringes are used to refill ink cartridges in fountain pens.

Common workshop applications include injecting glue into tight spots to repair joints where disassembly is impractical or impossible; and injecting lubricants onto working surfaces without spilling.

Sometimes a large hypodermic syringe is used without a needle for very small baby mammals to suckle from in artificial rearing.

Historically, large pumps that use reciprocating motion to pump water were referred to as syringes. Pumps of this type were used as early firefighting equipment.

There are fountain syringes where the liquid is in a bag or can and goes to the nozzle via a pipe. In earlier times, clyster syringes were used for that purpose.

Loose snus is often applied using modified syringes. The nozzle is removed so the opening is the width of the chamber. The snus can be packed tightly into the chamber and plunged into the upper lip. Syringes, called portioners, are also manufactured for this particular purpose.

Additional Information

Syringes were invented long before hypodermic needles. Their origins are found in Greek and Roman literature, where there are descriptions of hollow reeds for the ritual of anointing the body with oil, and as musical instruments using a plunger to alter the pitch. Simple piston syringes for delivering ointments and creams for medical use were described by Galen (129-200 CE), and an Egyptian, Ammar bin Ali al-Mawsili, reported using glass tubes to apply suction for cataract extraction from about 900 CE. In 1650, Pascal's experimental work in hydraulics stimulated him to invent the first modern syringe, which allowed the infusion of medicines. Christopher Wren (better known as an architect than for his medical training) used a 'cut-down' technique to intravenously inject dogs with poppy sap through goose quill cannulae. By 1660, Drs Major and Esholttz used this method on humans, with similarly fatal results due to ignorance of suitable dosage and of the need for sterilising utensils and the infusion. The disastrous consequences of these experiments delayed the use of injections for 200 years.

The first hypodermic needle was probably made by Francis Rynd in Dublin in 1844, using the technology of annealing the edges of a folded flat strip of steel to make a tube. This was then drawn through increasingly narrow dies whilst maintaining the patency of the needle. The bevelled point is cut and ground, and then the hub is added with its variety of fittings and locks. A syringe has three elements: the barrel (glass, plastic or metal), the plunger, and the piston, which may be of rubber, mineral, metal or synthetic material; in early examples, waxed linen tape or asbestos was wound on a reel to obtain a watertight seal. Charles Pravaz, in France, administered coagulant to sheep in 1853, but it seems that Alexander Wood in Edinburgh combined a functional syringe with a hypodermic needle in the same year, to inject morphine into humans, and he probably should be credited with inventing the technique. The basic design has remained unchanged, though interchangeable parts and the use of plastic have resulted in the almost universal use of disposable syringes and needles since the mid-1950s.

Looking to the future of the parenteral administration of medicines and vaccines, it's likely that there will be increasing use of direct percutaneous absorption, especially for children. Micro-silicon-based needles, so small that they don't trigger pain nerves, are being developed; however, these systems cannot deliver intravenous or bolus injections, so hypodermic needles, with or without syringes, are likely to be with us for a long time. They are also required for catheter-introduced surgical procedures in deep anatomical locations.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2077 2024-03-02 00:02:32

Jai Ganesh

Re: Miscellany

2079) Aromatherapy

Gist

Aromatherapy is the practice of using essential oils for therapeutic benefit. Aromatherapy has been used for centuries. When inhaled, the scent molecules in essential oils travel from the olfactory nerves directly to the brain and especially impact the amygdala, the emotional center of the brain.

Summary

Aromatherapy is a practice based on the use of aromatic materials, including essential oils and other aroma compounds, with claims for improving psychological well-being. It is offered as a complementary therapy or as a form of alternative medicine. Fragrances used in aromatherapy are not approved as prescription drugs in the United States.

People may use blends of essential oils through topical application, massage, inhalation, or water immersion. There is no good medical evidence that aromatherapy can prevent, treat or cure any disease. There is disputed evidence that it may be effective in combating postoperative nausea and vomiting.

Choice and purchase

Aromatherapy products, and essential oils in particular, may be regulated differently depending on their intended use. Products that are marketed with a therapeutic use in the US are regulated by the US Food and Drug Administration (FDA); products with a cosmetic use must meet safety requirements, regardless of their source. The US Federal Trade Commission (FTC) regulates any aromatherapy advertising claims.

There are no standards for determining the quality of essential oils in the United States; while the term "therapeutic grade" is in use, it does not have a regulatory meaning.

Analysis using gas chromatography and mass spectrometry has been used to identify bioactive compounds in essential oils. These techniques are able to measure the levels of components to a few parts per billion. This does not make it possible to determine whether each component is natural or whether a poor oil has been "improved" by the addition of synthetic aromachemicals, but the latter is often signalled by the minor impurities present. For example, linalool made in plants will be accompanied by a small amount of hydro-linalool whilst synthetic linalool has traces of dihydro-linalool.

Effectiveness

There is no good medical evidence that aromatherapy can prevent or cure any disease, although it may be useful for managing symptoms.

Evidence for the efficacy of aromatherapy in treating medical conditions is poor, with a particular lack of studies employing rigorous methodology. In 2015, the Australian Government's Department of Health published the results of a review of alternative therapies that sought to determine if any were suitable for being covered by health insurance; aromatherapy was one of 17 therapies evaluated for which no clear evidence of effectiveness was found.

A number of systematic reviews have studied the clinical effectiveness of aromatherapy in respect to pain management in labor, the treatment of post-operative nausea and vomiting, managing challenging behaviors in people suffering from dementia, and symptom relief in cancer.

According to the National Cancer Institute, no studies of aromatherapy in cancer treatment have been published in a peer-reviewed scientific journal. Results are mixed for other studies. Some showed improved sleep, anxiety, mood, nausea, and pain, while others showed no change in symptoms.

Details

Aromatherapy is the use of essential oils to improve your health or well-being. You may apply essential oils (properly diluted) to your skin through techniques like massage. Or, you may choose to inhale the aroma by creating a facial steam or using an essential oil diffuser. Possible benefits include reduced anxiety and improved sleep quality.

Overview:

What is aromatherapy?

Aromatherapy is a form of complementary and alternative medicine (CAM). It uses essential oils to manage symptoms or boost your well-being. It’s a holistic therapy, meaning it supports your whole self — mind, body and spirit. Aromatherapy involves inhaling essential oils or applying them (diluted) to your skin.

People around the world have used aromatherapy for centuries. In the U.S., aromatherapy often complements other treatments for people with conditions like anxiety. People also use aromatherapy to maintain wellness and feel better in general.

Healthcare providers who specialize in CAM or integrative medicine provide aromatherapy services in their offices or clinics. You can also use aromatherapy on your own, but it’s important to learn proper techniques for doing so. Talk to a healthcare provider before starting aromatherapy to learn how to do it right and make sure it’s safe for you.

How does aromatherapy work?

When essential oils are inhaled, they stimulate your nervous system (brain, spinal cord and nerves). This means aromatherapy starts a chain reaction of signals to your brain and chemical responses throughout your body. This activity begins once you start smelling an essential oil.

Essential oils (like all substances that smell) release tiny molecules into the air. When you inhale an essential oil, those molecules move into your nose. Special cells in your nose called olfactory receptors notice the molecules are there. In response, they send messages to your brain through your olfactory nerve.

These messages stimulate activity in your hypothalamus and your brain’s limbic system. Your limbic system is a group of structures (including the amygdala) that help control your emotions and store your memories. Your brain then releases hormones like:

* Serotonin.
* Endorphins.
* Dopamine.

These hormones help regulate many body functions like mood, sleep and digestion. The release of these hormones can help you in various ways, like lowering anxiety and reducing your perception of pain.

Researchers continue to investigate how aromatherapy affects your body.

What conditions are treated with aromatherapy?

There’s evidence that aromatherapy may help you manage:

* Stress.
* Sleep disorders.
* Menstrual cramps.
* Early labor.

Some research shows aromatherapy may help relieve dementia symptoms (like issues with behavior, thinking and mood). But other studies show no benefit. As a result, a review published in 2020 concluded there’s not enough evidence to show aromatherapy can help people with dementia.

You might hear aromatherapy can help with a specific condition you have. If so, it’s a good idea to talk to your healthcare provider. They have access to the latest research, and they can help you learn if aromatherapy has possible benefits for you.

Aromatherapy for anxiety

Aromatherapy may help manage anxiety, according to many studies. It seems most helpful in treating state anxiety, or an emotional state you feel when you perceive yourself as facing stress or danger. State anxiety is temporary and happens because of a specific situation you’re in. People sometimes feel it during medical situations. For example, you might feel state anxiety when you’re:

* Having a test, like an MRI (magnetic resonance imaging).
* About to go into surgery.
* In labor.

Some research shows aromatherapy may also help with other forms of anxiety, including trait anxiety. This is a tendency to feel anxious that’s more constant in your life, and not something you only feel in certain situations. For example, trait anxiety can be a symptom of generalized anxiety disorder. People living with chronic diseases might also experience anxiety on a more regular basis.

Your healthcare provider can tell you more about the possible benefits of aromatherapy in your unique situation.

What are aromatherapy oils?

Aromatherapy oils, or essential oils, are highly concentrated plant extracts. They come from various parts of plants, including flowers, stems and leaves. Manufacturers use different processes to remove these oils, like distillation and cold press. Many pounds of plant materials go into one small bottle of essential oil.

What are carrier oils?

Carrier oils, also called base oils or fixed oils, are substances made from plants. Their chemical makeup is different from that of essential oils. They don’t have a strong smell, and they don’t evaporate like essential oils do.

Carrier oils are a vehicle for safely getting essential oils into your body. People dilute essential oils in carrier oils. Because essential oils are so potent, you usually use a much higher percentage of carrier oil compared to essential oil. Carrier oils contain many ingredients that are good for your skin. These include antioxidants and essential fatty acids.

Here are just a few examples of carrier oils:

* Coconut oil.
* Rosehip oil.
* Grapeseed oil.
* Sweet almond oil.

Procedure Details:

What are the techniques for aromatherapy?

Common techniques include:

* Inhalation. There are many ways to inhale essential oils. You might create a facial steam by adding an essential oil to a bowl of hot water (up to six drops per ounce of water). Then, lean over the bowl with your eyes closed and breathe in. You may also choose to use a diffuser to spread the scent throughout your room or home. Be sure to follow the instructions on the specific diffuser you buy.
* Aromatherapy massage. A qualified practitioner can give you a massage with lotion or oil containing essential oils. You may also choose to use massage at home. You should only use properly diluted essential oil. For massage, dilute your essential oil so it's concentrated at about 1% (a rough worked example of this arithmetic follows the list).
* Bath. You may choose to add essential oils to your bath. You should always mix the essential oil with a carrier oil (like jojoba oil) or dispersant (like solubol) first. Undiluted essential oil won’t mix in with your bath water, and it may irritate your skin.
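
To make the dilution arithmetic concrete, here is a minimal Python sketch; the names and the roughly-20-drops-per-millilitre figure are illustrative assumptions rather than guidance from any aromatherapy authority, so defer to published dilution charts for actual use:

    # Rough, illustrative conversion only: drop size varies with the oil and
    # the dropper, so treat the result as an estimate, not a dosing rule.
    DROPS_PER_ML = 20           # assumed average drops of essential oil per mL
    ML_PER_FLUID_OUNCE = 29.57

    def drops_for_dilution(carrier_ml, target_percent, drops_per_ml=DROPS_PER_ML):
        """Approximate drops of essential oil for a target % dilution in a carrier volume."""
        essential_oil_ml = carrier_ml * target_percent / 100.0
        return essential_oil_ml * drops_per_ml

    # A roughly 1% massage dilution in one fluid ounce (about 30 mL) of carrier oil:
    print(round(drops_for_dilution(ML_PER_FLUID_OUNCE, 1.0)))   # about 6 drops

At these assumed values, a roughly 1% dilution works out to about six drops of essential oil per fluid ounce of carrier.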

Tips for using essential oils in aromatherapy

Essential oils are powerful substances. They come from nature but can still harm you if you don’t use them properly. Here’s some advice for using them safely:

* Always dilute essential oils before putting them on your skin. This means you mix an essential oil with another substance, like a carrier oil or unscented lotion. The National Association for Holistic Aromatherapy offers guidance on proper dilution (how many drops of essential oil to add to another substance). You should never apply essential oil directly to your body from the bottle. You also shouldn’t add drops directly from the bottle to your bath.
* Don’t use or store essential oils near open flames. They’re flammable, meaning they can easily catch fire when exposed to a flame.
* Don’t drink or otherwise consume essential oils. It’s not safe to drink essential oils or add drops to tea or water.
* Keep essential oils away from children and pets. These oils are toxic in large amounts. Make sure the bottles are tightly sealed. Store them out of reach of children, pets and anyone in your household who might unknowingly consume them.
* Aromatherapy oils and other products are easy to find online and in stores. However, their ease of access may falsely suggest anyone can do aromatherapy and reap the same benefits. Talking to a healthcare provider before you start can help avoid common pitfalls. Your provider will also help you select high-quality essential oils that have the greatest chance of helping you.

Risks / Benefits

What are some potential aromatherapy benefits?

Aromatherapy may help you manage stress, anxiety and other health issues that affect your daily life. Many people choose aromatherapy because it:

* Uses natural, plant-based products.
* Can be tailored to your preferences (for example, the specific scents you enjoy).
* Can be used along with other treatment methods, like psychotherapy.

What are the risks of aromatherapy?

There are many different essential oils available. Each one has unique risks based on the plant it comes from and its chemical makeup. A healthcare provider can tell you more about the risks of specific essential oils and help choose the best ones for you.

In general, possible risks of using essential oils on your skin include:

* Skin irritation.
* Allergic contact dermatitis.
* Skin discoloration when you’re exposed to sunlight (photosensitivity).

The risks are low when you use essential oils properly (for example, when you dilute them in carrier oils). Some essential oils have a higher risk of irritating your skin because they contain higher levels of natural chemicals called phenols. Examples include clove oil and cinnamon bark oil.

Talk to a healthcare provider before using aromatherapy in any form if you:

* Are pregnant or could become pregnant. Aromatherapy is generally safe during pregnancy. But your provider may tell you to avoid certain essential oils or techniques.
* Have any diagnosed medical conditions. Aromatherapy may not be safe for people with certain conditions like epilepsy, asthma and some skin conditions.
* Take prescription medication. Aromatherapy uses natural, plant-based products, but those still can interact with medications. Just as you should talk to your provider before taking herbal supplements, you should also tell them about any plans to use aromatherapy. They’ll make sure it’s safe for you. Also, keep in mind that aromatherapy doesn’t replace medication, and you should never stop taking a medication without talking to your healthcare provider first.

Recovery and Outlook

Does aromatherapy work?

The evidence is mixed and depends on the context. Some studies show aromatherapy is effective in certain situations, like managing anxiety or insomnia. Other studies conclude aromatherapy doesn’t help with certain symptoms. For example, a study published in 2022 finds aromatherapy doesn’t reduce symptoms of depression in people with cancer.

In some cases, researchers conclude there’s not enough evidence to say aromatherapy works or doesn’t work. For example, a review published in 2018 looks at the role of aromatherapy in easing nausea and vomiting after surgery. It concludes aromatherapy may help, but there’s not enough evidence to say for sure.

You might wonder why the findings can be so different. There are several reasons:

* Combined therapies. Some researchers combine aromatherapy with other therapies (like massage or music therapy). So, it’s not always clear if the essential oils have benefits on their own.
* Research methods. Different studies use different methods, so it can be hard to compare their results. For example, one study might examine the effects of aromatherapy on people with cancer, while another might look at aromatherapy in labor and delivery. Studies also use different types of essential oils (like lavender vs. lemon) or use them for different amounts of time.
* Essential oil quality. Even studies that use the same type of essential oil might still have different results because of the quality of the oil. The chemical makeup of a specific oil can vary based on where the plant grows and how people extract the oil from the plant. Storage methods also play a role. Essential oils can break down over time from exposure to air, sun and heat.
* Expectation bias. This is when a person in a study believes a certain result will happen. For example, you might believe lavender essential oil will help you relax. So, in a study, you’d perceive yourself as calmer and less anxious even if the oil didn’t cause changes in your body. In research, it’s hard to erase this tendency.
* Individual factors. Researchers can’t control all the factors that influence how essential oils affect a person. For example, your age and skin health help determine how your body responds to aromatherapy. It’s hard to predict these factors in research or level the playing field to make effective comparisons.
* Small sample sizes. Many aromatherapy studies use small groups of people to draw their conclusions. This sometimes means the results can’t represent aromatherapy’s effects in larger populations.
* Lack of research. In some cases, there isn’t enough evidence to prove aromatherapy’s benefits for certain conditions or groups of people. Researchers continue to design new studies to get more information on how aromatherapy can help you.

Additional Information

Aromatherapy is the use of aromatic plant oils, including essential oils, for psychological and physical wellbeing. Aromatherapists blend therapeutic essential oils especially for each person and suggest methods of use such as topical application, massage, inhalation or water immersion to stimulate the desired responses.

The different smells (aromas), and the chemical constituents of the oils, can produce different emotional and physiological reactions. Essential oils can be massaged into the skin, added to bath water or vaporised in an oil burner. Although aromatherapy has been practised for centuries in various cultures, the modern version was developed mainly in France.

Aromatherapy has not yet undergone as much scientific scrutiny as other complementary therapies, but it may be effective in helping with some complaints.

Risks of aromatherapy

Some aromatic plant oils are toxic and should never be used at all – for example, camphor, pennyroyal and wintergreen.

Aromatic plant oils are very potent and should never be swallowed or applied undiluted to the skin. People with asthma and those prone to nosebleeds should use caution when inhaling vaporising oils. Do not use aromatic plant oils in any body orifice, such as the ears or mouth.

Aromatic plant oils (essential oils) can be poisonous if taken in by mouth. Consumption of essential oils is an increasing cause of poisoning in children. All aromatic plant oils should be secured and kept out of reach of children.

Pregnant women and people with certain conditions, including epilepsy and high blood pressure, should consult their doctor before using any aromatic plant oils. Some oils can be dangerous during pregnancy and for people with certain conditions.





#2078 2024-03-03 00:12:24

Jai Ganesh

Re: Miscellany

2080) Allopathic Medicine

Gist

A system in which medical doctors and other health care professionals (such as nurses, pharmacists, and therapists) treat symptoms and diseases using drugs, radiation, or surgery. Also called biomedicine, conventional medicine, mainstream medicine, orthodox medicine, and Western medicine.

Summary

Even if you aren’t familiar with the term, chances are that an allopathic doctor has helped you or treated you at some point during your lifetime. These medical professionals treat conditions, symptoms, or diseases using a range of drugs, surgery, or therapies.

Simply put, an allopathic doctor is one who practices modern medicine. Other terms for allopathic medicine include Western, orthodox, mainstream, or conventional medicine.

“Allo” comes from the Greek word for “opposite”: allopathic medicine treats a symptom with its opposite, or remedy. Allopathic doctors may specialize in a number of areas of clinical practice and have the title of medical doctor, or MD.

What Does an Allopathic Doctor Do?

An allopathic doctor uses allopathic treatments to help people with a variety of conditions or diseases.


They may choose to focus on research or teaching throughout their career, in addition to choosing a field in which to specialize. You can find them in private practice, hospitals, medical centers, universities, or clinics.

Medical doctors practice allopathic medicine, as opposed to osteopathic medicine. More than 90% of doctors currently practicing in the United States practice allopathic medicine and have the title MD.

The other 10% are doctors of osteopathic medicine, or osteopaths. They’re similar to allopathic doctors in that they use a variety of modern medicine, technology, and drugs to treat people. However, they also incorporate holistic care and philosophy into their practice.

An allopathic doctor is certified to diagnose and treat illnesses, in addition to performing surgery and prescribing medications. An allopathic doctor can get licensed to perform their duties in any of the 50 states of the United States.

Education and Training

All doctors who practice allopathy follow a similar path. First, they complete an undergraduate degree in a related field. Next, the candidate receives a satisfactory score on the Medical College Admission Test (MCAT) and successfully completes four years of medical school.

After medical school, an allopathic doctor completes a residency program to get hands-on training alongside medical professionals. Depending on the specialty, a residency program can last from 3 to 7 years.

Some specialties in allopathy include:

* Cardiovascular medicine
* Neurology
* Oncology
* Pediatrics
* Surgery
* Family medicine
* Dermatology
* Orthopedics
* Internal medicine

The American Board of Medical Specialties recognizes 24 board-certified areas of specialties in allopathy. Within these specialties are many other subspecialties that an allopathic doctor may choose to focus on.

Details

Allopathic medicine is another term for conventional Western medicine. In contrast to complementary medicine (sometimes known as alternative medicine), allopathy uses mainstream medical practices like diagnostic blood work, prescription drugs, and surgery.

An allopathic doctor is typically an MD, while osteopaths (DO), chiropractors (DC), and Oriental medical doctors (OMD) usually fall under the complementary medicine umbrella.

This article discusses the history of allopathic medicine, its uses, and its limitations. It also looks at the philosophical differences between conventional and alternative medicine and how these diverse disciplines complement one another.

What Is Allopathic Medicine?

Allopathic medicine refers to the practice of conventional Western medicine. The term allopathic medicine is most often used to contrast conventional medicine with complementary medicine.

Complementary medicine is a term for the role alternative medicine plays as a "complement" to allopathic medicine, but this meaning has become obscure in recent years.

Integrative medicine is the term that is being increasingly used to refer to the practice of combining the best of complementary medicine with the best of conventional medicine to manage and reduce the risk of disease.

Allopathic medicine examples include:

* Antibiotics
* Blood work and laboratory testing
* Chemotherapy
* Hormone replacement therapy
* Insulin
* Pain relievers, like Tylenol (acetaminophen) and Aleve (naproxen)
* Primary care medicine
* Specialists, like cardiology, endocrinology, oncology, and rheumatology
* Surgery
* Vaccines
* Ultrasounds
* X-rays

History of Allopathy

The term allopathic medicine was coined in the 1800s to differentiate two types of medicine. Homeopathy was on one side, based on the theory that "like cures like." The thought with homeopathy is that very small doses of a substance that causes the symptoms of a disease could be used to alleviate that disease.

In contrast, allopathic medicine was defined as the practice of using opposites: using treatments that have the opposite effects of the symptoms of a condition.

At the time, the term allopathic medicine was often used in a derogatory sense. It referred to radical treatments such as bleeding people to relieve fever. Over the years this meaning has changed, and now the term encompasses most of the modern medicine in developed countries.

Current Allopathic Practices

Today, allopathic medicine is mainstream medicine. The term is no longer derogatory and instead describes current Western medicine. Most physicians are considered allopathic providers.

Medical insurance covers most types of allopathic care, whereas complementary medicine is often an out-of-pocket cost.

Examples of allopathic medicine include everything from primary care physicians to specialists and surgeons.

Other terms used interchangeably with allopathic medicine include:

* Conventional medicine
* Traditional Western medicine
* Orthodox medicine
* Mainstream medicine
* Biomedicine
* Evidence-based medicine (Alternative medicine modalities can also be evidence-based if significant research has shown it works.)

These allopathic monikers are usually contrasted with complementary practices, such as:

* Ayurveda
* Traditional Chinese Medicine
* Folk medicine
* Homeopathy
* Natural medicine or naturopathy
* Bioregulatory medicine
* Phytotherapy

How is Osteopathic Medicine Different than Allopathic?

Doctors of osteopathy are also trained in allopathic techniques. They can prescribe medication covered by insurance. Most osteopaths work alongside conventional doctors in hospitals and medical practices.

The main difference between an M.D. and a D.O. is osteopaths are also trained in osteopathic manipulation. This hands-on technique is used to treat musculoskeletal conditions, which is similar—but not identical—to chiropractic manipulation.

While there is overlap between the two, osteopathic and allopathic physicians are trained in different medical schools with varying course requirements and certifications.

Additional Information

Allopathic medicine, or allopathy, is an archaic and derogatory label originally used by 19th-century homeopaths to describe heroic medicine, the precursor of modern evidence-based medicine. There are regional variations in usage of the term. In the United States, the term is sometimes used to contrast with osteopathic medicine, especially in the field of medical education. In India, the term is used to distinguish conventional modern medicine from Siddha medicine, Ayurveda, homeopathy, Unani and other alternative and traditional medicine traditions, especially when comparing treatments and drugs.

The terms were coined in 1810 by the creator of homeopathy, Samuel Hahnemann. Heroic medicine was the conventional European medicine of the time and did not rely on evidence of effectiveness. It was based on the belief that disease is caused by an imbalance of the four "humours" (blood, phlegm, yellow bile, and black bile) and sought to treat disease symptoms by correcting that imbalance, using "harsh and abusive" methods to induce symptoms seen as opposite to those of diseases rather than treating their underlying causes: disease was caused by an excess of one humour and thus would be treated with its "opposite".

A study released by the World Health Organization (WHO) in 2001 defined allopathic medicine as "the broad category of medical practice that is sometimes called Western medicine, biomedicine, evidence-based medicine, or modern medicine." The WHO used the term in a global study in order to differentiate Western medicine from traditional and alternative medicine, noting that in certain areas of the world "the legal standing of practitioners is equivalent to that of allopathic medicine" where practitioners can be separately certified in complementary/alternative medicine and Western medicine.

The term allopathy was also used to describe anything that was not homeopathy. Kimball Atwood, an American medical researcher and alternative medicine critic, said the meaning implied by the label of allopathy has never been accepted by conventional medicine and is still considered pejorative. American health advocate and sceptic William T. Jarvis stated that "although many modern therapies can be construed to conform to an allopathic rationale (e.g., using a laxative to relieve constipation), standard medicine has never paid allegiance to an allopathic principle" and that the label "allopath" was "considered highly derisive by regular medicine." Most modern science-based medical treatments (antibiotics, vaccines, and chemotherapeutics, for example) do not fit Hahnemann's definition of allopathy, as they seek to prevent illness or to alleviate an illness by eliminating its cause.





#2079 2024-03-04 00:02:13

Jai Ganesh

Re: Miscellany

2081) Aircraft Maintenance Engineers

Aircraft maintenance engineers keep aeroplanes safe. They install, inspect, maintain and repair aircraft, aircraft radio, avionic (electronic), navigation, communication and electrical and mechanical systems.

Summary

An Aircraft Maintenance Engineer’s fundamental purpose is to ensure aircraft are safe and airworthy to fly. No aircraft takes off without being certified to do so by an engineer. It’s a role people don’t often think about, however, it’s one of the most crucial roles when it comes to the safety and viability of the aviation industry. If you enjoy finding practical solutions to complex problems and knowing how things work, then this might be the career path for you.

Learning about the technology behind the aircraft systems and fixing them makes for a fascinating and rewarding career.

Why become an Aircraft Maintenance Engineer?

* Well paid job outcomes for a global qualification, especially if you become licensed
* Stable career with job security
* Jobs are in demand worldwide
* Work anywhere in the world
* Career growth opportunities
* Always something new to learn in this industry
* Opportunities for travel are plentiful
* Work with the latest technology
* Use hands-on skills
* Work in a team environment

Details

Play a crucial role in the aviation sector, ensuring the safety and reliability of aircraft operations. The Aircraft Maintenance Engineers Technology program is designed to equip you with the knowledge and practical skills necessary for a successful career in aircraft maintenance.

Work towards your Aircraft Maintenance Engineer “M” (AME) license. As a licensed AME, you’ll be entrusted with the maintenance, repair, and overall upkeep of various aircraft, such as general aviation planes, corporate jets, charter aircraft, transport-category aircraft, helicopters, and airliners.

Throughout this two-year diploma program, you’ll be immersed in a comprehensive curriculum that covers all facets of aircraft maintenance. The program is not just theoretical. Get hands-on training that prepares you for real-world challenges. Classes are at the Art Smith Aero Centre for Training and Technology at the Calgary International Airport.

In this program, you will: 

* learn the fundamentals of aerodynamics, including the principles of flight and how various forces act on an aircraft
* learn about the electronic systems used in aircraft, such as navigation, communication, and auto-flight systems
* gain knowledge of engine, electrical, environment control, flight control, hydraulic, fuel, landing gear, and other mechanical systems crucial for aircraft operation
* learn about structural materials and hardware used in aircraft manufacturing, including metals, composites, and fasteners
* train in safety protocols and procedures to maintain a safe working environment
* develop troubleshooting and diagnostics techniques for identifying and resolving aircraft mechanical and electrical issues
* learn preventive maintenance inspection techniques for regular maintenance to prevent issues and ensure aircraft longevity and safety and methods for conducting thorough inspections to ensure airworthiness
* acquire the ability to accurately read and interpret technical manuals and blueprints, including engineering drawings, maintenance manuals, and blueprints.

The program also offers web-based courses, which give you flexibility.

If you are passionate about aviation, have a knack for problem-solving, and aspire to be part of a team that ensures millions of people travel safely, the Aircraft Maintenance Engineers Technology program could be the first step toward a fulfilling job or career in this essential industry.

Additional Information

An aircraft maintenance engineer (AME), also known as a licensed aircraft maintenance engineer (LAME or L-AME), is a licensed person who carries out and certifies aircraft maintenance. The license is widespread internationally and is recognised by the International Civil Aviation Organization (ICAO). The American FAA recognises the qualification in foreign countries but refers to it as aviation maintenance engineer rather than "Aircraft...". Unlicensed mechanics or tradespersons are sometimes informally referred to as "Unlicensed AMEs".

Countries which issue or recognize AME licenses internally include: Australia, Bangladesh, Canada, India, Ireland, New Zealand, the United Kingdom and much of Asia.

The American equivalent of an AME is an aircraft maintenance technician (AMT), also known as an A&P.

Until 1998, Type I and Type II aircraft maintenance engineer (AME) licences were distinguished; in 1998, ICAO replaced them with a single AME licence.

In 2005 the relationship between the Canadian AME and the US A&P (AMT) was further revised, through a Bilateral Aviation Safety Agreement (BASA) between the US and Canada.





#2080 2024-03-05 00:03:19

Jai Ganesh

Re: Miscellany

2082) Chief Executive Officer

Gist

A chief executive officer (CEO), also called an executive officer, chief executive (CE) or, in the UK, managing director (MD), is the highest officer charged with the management of an organization, especially a company or nonprofit institution.

Summary

Chief executive officer (CEO) is the senior manager or leader of a business or other organization, such as a nonprofit or nongovernmental organization (NGO). A chief executive officer has final decision-making authority within the organization (subject to the general consent of a board of directors, if there is one) and holds a number of important responsibilities, including:

* articulating the mission or purpose of the business or organization;
* determining its managerial and reporting structure;
* setting its short- and long-term goals;
* devising and implementing strategies for attaining such goals;
* managing at a high level its operations and resources;
* establishing a culture that fosters employee engagement, adaptability, accountability, and learning (or skill building);
* reviewing the performance of lower-level executives and their units or teams;
* representing the business or organization in public forums and industry-related gatherings;
* assessing and reporting (to shareholders, stakeholders, boards of directors, or owners) on its operations at regular intervals; and
* more generally, ensuring its overall success, which, in the case of businesses, is normally assessed in terms of potential increases in net income, market share, shareholder returns, and brand recognition.

In the case of larger businesses and some other larger organizations, the CEO is hired or internally promoted by a board of directors, which may subsequently dismiss the CEO if it is dissatisfied with his or her performance.

The specific duties and responsibilities of CEOs tend to vary with the size, structure, and culture of the organization and the nature of the industry or environment in which it operates. For example, the CEO of a small company is typically more directly involved in day-to-day operations and lower-level decision-making than the CEO of a large company.

Details

The chief executive officer (CEO) is the top position in an organization and responsible for implementing existing plans and policies, improving the company's financial strength, supporting ongoing digital business transformation and setting future strategy.

The CEO is ultimately responsible for the company's success or failure and oversees its various functions, including operations, finance, marketing, sales, human resources, legal, compliance and technology, while balancing the needs of employees, customers, investors and other stakeholders.

The CEO title most often applies to for-profit businesses whose size in terms of employee numbers or revenue justifies this top position. Some nonprofit organizations also choose to have their most senior person hold the CEO title. Business laws influence whether the title CEO is appropriate within an entity. Corporations, by law, must have CEOs, other chief officers and boards of directors. A limited liability company (LLC) can structure itself like a corporation and have a CEO, but it's not required by the laws governing LLCs.

Additionally, some business and nonprofit entities can have their top leader fulfill the duties of a CEO yet opt for other titles such as president or executive director.

CEO roles and responsibilities

Although a CEO's key responsibilities are generally the same from one organization to the next, exact duties can vary based on several factors, including the size of the company and whether it's a public or privately held company. The CEO or owner of a startup or a small family business generally performs more hands-on day-to-day operations and management tasks than the CEO of a large company.

One of the CEO's key tasks is developing, communicating and implementing corporate policies and strategies, determining the company's plan of action in terms of which budgets, investments, markets, partnerships and products, among others, to pursue and implement to best fulfill the organization's primary mission: maximize profits, as is the case in most businesses, or meet specific humanitarian and philanthropic goals, as with nonprofits and some for-profit enterprises.

Other key CEO tasks include organizing leadership and staff to meet strategic goals, ensuring that appropriate governance and controls exist to limit risk and comply with laws and regulations, identifying and delivering value to the various stakeholders, and providing leadership at all times, especially during times of crisis.

Differences between a CEO and owner of a company

CEO is a functional title with daily leadership duties and responsibilities, while ownership is a legal designation.

The board of directors usually selects the CEO, who is the highest-level person, while a business owner is typically the founder, considered the sole proprietor and entrepreneur who owns most or all the company, and in charge of all business functions. In a publicly traded company, the shareholders are the owners and the CEO is an employee held accountable by the shareholders through the board of directors.

The CEO can be the owner, and the owner can be the CEO, so the roles aren't mutually exclusive. Successful CEOs and owners often possess similar traits, including business acumen, critical thinking, interpersonal communication skills, passion for the job and loyalty to the company. They also may be responsible for filling high-level positions in their organizations.

An owner can play a passive role in the business, providing guidance and advice to the CEO, or a direct role by managing some or all business functions. The CEO almost always has a direct role in the business with responsibility for day-to-day oversight and the company's success or failure.

CEO's role in staff hiring and retention

The CEO is also responsible for hiring C-level members of the executive team and firing those who don't perform up to the standards set by the CEO. These chief officers are tasked with advising the CEO on functional areas, including the chief financial officer (CFO) on financial planning and risk management, chief operating officer (COO) on business operations, chief technical officer (CTO) on technology requirements, chief information officer (CIO) on IT processes, chief marketing officer on marketing and sales, chief data officer on data science and analytics, chief compliance officer on policies and procedures, and chief human resources officer or chief people officer on HR and talent management. These executives help the CEO formulate strategy and implement the policies and directions set by the CEO and are in charge of managing their functional areas on the CEO's behalf.

The CEO is also responsible for promoting the company's culture by helping to determine the attitudes, behaviors and values that best represent the organization's mission, modeling those characteristics, ensuring management's support and recognizing the value and contributions of each employee.

An organization's board of directors generally hires the CEO, determines compensation and evaluates performance. The CEO, who can also hold the position of company president or chairman of the board of directors, is expected to regularly keep the board informed of corporate affairs.

Similar C-suite positions

As a C-suite position, the CEO is part of the executive staff that sets the company's strategy. While most of an organization's lower-ranking employees require technical know-how, C-suite executives must possess leadership and team-building skills. Additionally, C-suite executives often require better business acumen because their decisions have a major influence on an enterprise's overall direction and success. Other C-suite positions include the CFO, COO and CIO.

* The chief financial officer compiles budgets, tracks expenses and revenue, analyzes financial data and reports this information to the CEO. The CFO is also the liaison between the company and banks, money lenders and financial institutions with which the company does business.

* The chief operating officer usually oversees operations and day-to-day functions within the company, particularly when the organization is very large. The COO reports directly to and advises the CEO and works closely with the CFO and CIO.

* The chief information officer manages IT strategy and implementation, overseeing the hardware, software and data that help other members of the C-suite do their jobs effectively. The CIO must research new technologies, strategize how technology can provide business value and address the risks and rewards associated with digital information.

There are other C-suite positions with titles such as chief digital officer, chief data officer and chief marketing officer, but the exact titles and roles vary from company to company. For example, a healthcare organization would require a chief medical officer, and cutting-edge technology companies often employ a chief innovation officer.

Examples of successful CEOs

The best CEOs excel at innovation, disrupting industries, improving the financial success of their companies and bettering the lives of their employees and society. In technology, for example, highly successful CEOs demonstrate a unique vision, longevity and tenacity, at times generate controversy, and eventually attain iconic status as their brands become household names.

Steve Jobs co-founded Apple in 1976 with the mission of contributing to society by making tools for the mind that advance humankind. Under Jobs, Apple developed breakthrough products such as the Macintosh computer, iPhone and iPad and revolutionized digital music through iTunes. Apple, however, had its share of financial problems in the 1990s and approached bankruptcy partly due to Mac clones cutting into sales. Jobs was brought back as CEO and set about reinventing the computer maker. His ambition and vision transformed Apple into one of the world's most successful and influential companies by the time of his death in 2011. In 2018, Apple became the first publicly traded U.S. company to reach $1 trillion in valuation.

Jeff Bezos walked away from a hedge fund job in 1994 and founded Amazon, which began as an online bookstore operating in Bezos' garage to take advantage of the rapid growth in internet usage. Bezos' vision transformed Amazon into an "everything store" where customers could buy a wide range of products. To optimize the customer experience, the online retail giant offered more book titles than those found in brick-and-mortar bookstores, provided perks like one-click shopping and published good and bad product reviews. These practices are commonplace now, but they were revolutionary in 1997 when Amazon went public. The company entered the grocery business and substantially increased its footprint as a brick-and-mortar retailer with the purchase of Whole Foods in 2017. Amazon is currently the world's largest online retailer and marketplace, the largest smart speaker provider and, through AWS, the largest cloud computing provider. Bezos stepped down as Amazon CEO in 2021 to become executive chairman.

Elon Musk, CEO of Tesla and SpaceX, founder of several other businesses, including the company that later became PayPal, and the buyer of Twitter in 2022, is said to be revolutionizing transportation on Earth with electric cars and in space with reusable rocket launch systems. Musk joined Tesla as chairman and product architect in 2004 and became CEO in 2008. Tesla introduced the Roadster sports car in 2006, the Model S sedan in 2012 and the less-expensive Model 3 in 2017, which became the best-selling electric car of all time. Believing that humanity must learn to live on other planets to survive as a species, Musk founded SpaceX as CEO and chief designer in 2002 and set out to make rockets reusable and more affordable. SpaceX's Dragon spacecraft has carried astronauts and supplies to the International Space Station, and the Super Heavy Starship system is intended eventually to carry a spacecraft designed for rapid transportation between cities on Earth and travel to the Moon and Mars.

Additional Information

A chief executive officer (CEO), also known as an executive officer, chief executive (CE) or, in the UK, managing director (MD), is the highest-ranking officer charged with the management of an organization, especially a company or nonprofit institution.

CEOs find roles in various organizations, including public and private corporations, nonprofit organizations, and even some government organizations (notably state-owned enterprises). The CEO of a corporation or company typically reports to the board of directors and is charged with maximizing the value of the business, which may include maximizing the share price, market share, revenues, or another element. In the nonprofit and government sector, CEOs typically aim at achieving outcomes related to the organization's mission, usually provided by legislation. CEOs are also frequently assigned the role of the main manager of the organization and the highest-ranking officer in the C-suite.

Origins

The term "chief executive officer" is attested as early as 1782, when an ordinance of the Congress of the Confederation of the United States of America used the term to refer to governors and other leaders of the executive branches of each of the Thirteen Colonies. In draft additions to the Oxford English Dictionary published online in 2011, the Dictionary says that the use of "CEO" as an acronym for a chief executive officer originated in Australia, with the first attestation being in 1914. The first American usage cited is from 1972.

Responsibilities

The responsibilities of an organization's CEO are set by the organization's board of directors or other authority, depending on the organization's structure. They can be far-reaching or quite limited, and are typically enshrined in a formal delegation of authority regarding business administration. Typically, the CEO acts as communicator, decision-maker, leader, manager, and executor. The communicator role can involve speaking to the press and to the public, as well as to the organization's management and employees; the decision-making role involves high-level decisions about business strategy and other key policy issues. The CEO is tasked with implementing the goals, targets and strategic objectives as determined by the board of directors.

As an executive officer of the company, the CEO reports the status of the business to the board of directors, motivates employees, and drives change within the organization. As a manager, the CEO presides over the organization's day-to-day operations. The CEO is the person who is ultimately accountable for a company's business decisions, including those in operations, marketing, business development, finance, human resources, etc.

The use of the CEO title is not necessarily limited to describing the owner or the head of a company. For example, the CEO of a political party is often entrusted with fundraising, particularly for election campaigns.

International use

In some countries, there is a dual board system with two separate boards, one executive board for the day-to-day business and one supervisory board for control purposes (selected by the shareholders). In these countries, the CEO presides over the executive board and the chairperson presides over the supervisory board, and these two roles will always be held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board. This allows for clear lines of authority. The aim is to prevent a conflict of interest and too much power being concentrated in the hands of one person.

In the United States, the board of directors (elected by the shareholders) is often equivalent to the supervisory board, while the executive board may often be known as the executive committee (the division/subsidiary heads and C-level officers that report directly to the CEO).

In the United States, and in business, the executive officers are usually the top officers of a corporation, the chief executive officer (CEO) being the best-known type. The definition varies; for instance, the California Corporate Disclosure Act defines "executive officers" as the five most highly compensated officers not also sitting on the board of directors. In the case of a sole proprietorship, an executive officer is the sole proprietor. In the case of a partnership, an executive officer is a managing partner, senior partner, or administrative partner. In the case of a limited liability company, an executive officer is any member, manager, or officer.

Related positions

Depending on the organization, a CEO may have several subordinate executives, each with specific functional responsibilities, to help run the day-to-day administration of the company; these subordinates are referred to as senior executives, executive officers or corporate officers. Subordinate executives are given different titles in different organizations, but one common category of subordinate executive, if the CEO is also the president, is the vice president (VP). An organization may have more than one vice president, each tasked with a different area of responsibility (e.g., VP of finance, VP of human resources). Examples of subordinate executive officers who typically report to the CEO include the chief operating officer (COO), chief financial officer (CFO), chief strategy officer (CSO), chief marketing officer (CMO) and chief business officer (CBO). The public relations-focused position of chief reputation officer is sometimes included as one such subordinate executive officer, but, as suggested by Anthony Johndrow, CEO of Reputation Economy Advisors, it can also be seen as "simply another way to add emphasis to the role of a modern-day CEO – where they are both the external face of, and the driving force behind, an organization culture".

United States

In the US, the term chief executive officer is used primarily in business, whereas the term executive director is used primarily in the not-for-profit sector. These terms are generally mutually exclusive and refer to distinct legal duties and responsibilities.

United Kingdom

In the UK, chief executive and chief executive officer are used in local government, where their position in law is described as the "head of paid service", and in business and in the charitable sector. As of 2013, the use of the term director for senior charity staff is deprecated to avoid confusion with the legal duties and responsibilities associated with being a charity director or trustee, which are normally non-executive (unpaid) roles. The term managing director is often used in lieu of chief executive officer.

Celebrity CEOs

Business publicists, since the days of Edward Bernays (1891–1995) and his client John D. Rockefeller (1839–1937), and even more successfully the corporate publicists for Henry Ford, have promoted the concept of the "celebrity CEO". Business journalists have often adopted this approach, which assumes that corporate achievements, especially in the arena of manufacturing, are produced by uniquely talented individuals, especially the "heroic CEO". In effect, journalists celebrate a CEO who takes distinctive strategic actions. The model is the celebrity in entertainment, sports, and politics; compare the "great man theory". Guthey et al. argue that "...these individuals are not self-made, but rather are created by a process of widespread media exposure to the point that their actions, personalities, and even private lives function symbolically to represent significant dynamics and tensions prevalent in the contemporary business atmosphere". Journalism thereby exaggerates the importance of the CEO and tends to neglect harder-to-describe broader corporate factors. There is little attention to the intricately organized technical bureaucracy that actually does the work. Hubris sets in when the CEO internalizes the celebrity and becomes excessively self-confident in making complex decisions. There may be an emphasis on the sort of decisions that attract the celebrity journalists.

Research published in 2009 by Ulrike Malmendier and Geoffrey Tate indicates that "firms with award-winning CEOs subsequently underperform, in terms both of stock and of operating performance".

[Image: chief executive officer]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2081 2024-03-06 00:06:44

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2083) Garland

Gist

A garland is a decorative braid, knot or wreath of flowers, leaves, or other material.

Summary

A garland is a band, or chain, of flowers, foliage, and leaves; it may be joined at the ends to form a circle (wreath), worn on the head (chaplet), or draped in loops (festoon or swag). Garlands have been a part of religious ritual and tradition from ancient times: the Egyptians placed garlands of flowers on their mummies as a sign of celebration in entering the afterlife; the Greeks decorated their homes, civic buildings, and temples with garlands and placed them crosswise on banquet tables; in ancient Rome, garlands of rose petals were worn, and carved wooden festoons (a craft revived in the 17th and 18th centuries) decorated homes. These garlands are a recurrent motif in classical and Renaissance paintings and relief sculptures. In the Byzantine culture a spiral garland made of foliage and tiny flowers was popular, as were garlands of narrow bands of alternating fruit or flowers and foliage. During the 15th and 16th centuries garlands of fruits and flowers, especially of roses, were worn in pageants, festivals, and at weddings, a custom echoed in the folk festivals of Europe in which cattle are decked with flowers and dances are performed with chains of flowers linking the participants (garland dance). The religious significance of garlands was evident in the European Middle Ages (c. 5th–15th centuries), when they were hung on religious statues. The Hindus in India also attach a spiritual meaning to flowers, wearing blessed garlands and adorning their statues with them.

Details

A garland is a decorative braid, knot or wreath of flowers, leaves, or other material. Garlands can be worn on the head or around the neck, hung on an inanimate object, or laid in a place of cultural or religious importance.

Etymology

From the French guirlande, itself from the Italian ghirlanda, a braid.

Types

* Bead garland
* Flower garland
* Lei - The traditional garland of Hawaiʻi.
* Pennant garland
* Pine garland
* Popcorn and/or cranberry garland
* Rope garland
* Tinsel garland
* Vine garland
* Balloon garland
* Mundamala - Garland of severed heads or skulls, found in Hindu and Tibetan Buddhist iconography.

Daisy chain

A garland created from the daisy flower (generally as a children's game) is called a daisy chain. One method of creating a daisy chain is to pick daisies and create a hole towards the base of the stem (such as with fingernails or by tying a knot). The stem of the next flower can be threaded through until stopped by the head of the flower. By repeating this with many daisies, it is possible to build up long chains and to form them into simple bracelets and necklaces. Another popular method involves pressing the flower heads against each other to create a look similar to a caterpillar. In Alice's Adventures in Wonderland by Lewis Carroll, before Alice's adventures begin, she is seen sitting outside with her sister considering whether to make a daisy chain, before being interrupted by a White Rabbit.

The terms "daisy chain" or "daisy chaining" can also refer to various technical and social "chains".

In literature

In the Bible (English Standard Version), Proverbs 4:9 describes Wisdom as the giver of a garland: "She will place on your head a graceful garland; she will bestow on you a beautiful crown." In the deuterocanonical Book of Sirach, the organiser or "ruler" of a banquet is permitted to wear a garland once the meal has been arranged and the guests settled. The "drunkards of Ephraim" (northern Israel) are condemned by the prophet Isaiah by reference to the fading flowers of their garlands (Isaiah 28:1).

In the 1913 novel The Golden Road by Lucy M. Montgomery, a "fading garland" is used as a metaphor for the evening of life or aging in general: "[...] Did she realize in a flash of prescience that there was no earthly future for our sweet Cecily? Not for her were to be the lengthening shadows or the fading garland. The end was to come while the rainbow still sparkled on her wine of life, ere a single petal had fallen from her rose of joy. [...]".

In the 1906 children's book The Railway Children, Edith Nesbit uses a garland as a metaphor: "Let the garland of friendship be ever green".

Regional practices

Indian subcontinent

In countries of the Indian subcontinent, such as India and Pakistan, people may place garlands around the necks of guests of honour as a way of showing respect to them. Garlands are worn by the bridegroom in South Asian weddings.

India

Garlands in India were at first purely secular, sought for their fragrance and beauty and used for decorating houses, roads, and streets. They were eventually adopted into the worship of Hindu deities and now play an important and traditional role in every festival, where garlands are made using different fragrant flowers (often jasmine) and leaves. Both fragrant and non-fragrant flowers and religiously significant leaves are used to make garlands to worship Hindu deities. Some popular flowers include:

* jasmine
* champaka
* lotus
* lily
* ashoka
* nerium/oleander
* chrysanthemum
* rose
* hibiscus
* pinwheel flower
* manoranjini, etc.

[Image: a house's main door frame decorated with a door-frame garland (Nila Maalai) during a housewarming in Tamil Nadu]
Apart from these, leaves and grasses like arugampul, maruvakam, davanam, maachi, paneer leaves, lavancha are also used for making garlands. Fruit, vegetables, and sometimes even currency notes are also used for garlands, given as thanksgiving.

Wedding ceremonies in India include the bride and groom wearing a wedding garland. On other occasions, garlands are given as a sign of respect to an individual person or to a divine image.

A gajra is a flower garland which women in India and Bangladesh wear in their hair during traditional festivals. It is commonly made with jasmine. It can be worn around a bun, as well as in braids. Women usually wear these when they wear sarees. Sometimes, they are pinned in the hair with other flowers, such as roses.

South India

In ancient times, Tamil kings employed people to manufacture garlands daily for a particular deity. These garlands were not available to the public.

In contemporary times, each Hindu temple in southern India has a nandavanam (flower garden) where flowers and trees for garlands are grown. Large Shiva temples such as the Thillai Nataraja Temple in Chidambaram, the Thyagaraja Temple in Tiruvarur, and the Arunachaleswara Temple in Thiruvannamalai still preserve such nandavanams to supply flowers for daily rituals.

Stone inscriptions of Rajaraja I at Thanjavur give details of the patronage bestowed by royals on the conservation of nandavanams that belonged to the "Big Temple".

Marigold and nitya kalyani garlands are used only for corpses in burial rituals. At social functions, garlands are used to denote the host.

At Srirangam Ranganathar temple, only garlands made by temple sattarars (brahmacaris employed for garland-making) are used to adorn the deity Ranganatha. Garlands and flowers from outside the temple grounds are forbidden. Sattarars observe several disciplinary rules in their profession, some of which include:

* Flowers should be picked in the early morning.
* Flowers should not be smelled by anyone.
* Flowers should be picked only after one has bathed.
* Flowers that have fallen from the plant and touched the ground should not be used.
* Namajapam, or the repetition of holy names, should be done while picking flowers.

While making garlands, the sattarars keep flowers and other materials on a table in order to keep them away from the feet, which are traditionally viewed as unclean and unfit for use in a religious context. Material is always kept above hip level.

South Indian garlands are of different types. Some of them are as follows:

* Thodutha maalai - Garlands made from the fiber of the banana tree (vaazhainaar). Common in marriage ceremonies and devotional offerings. In all Hindu marriages the bride and bridegroom exchange garlands three times. These garlands range in length from 0.5 to 3.7 m (1½ to 12 ft) and vary from 5 cm (2 in) to 0.9–1.2 m (3–4 ft) in diameter.
* Kortha maalai - Made using needle and thread. Jasmine, mullai, and lotus garlands are made using this method. Malas for the gods have two free lower ends, each with a kunjam (bunch of flowers); that is, only the upper two ends are joined and the lower ends are left unjoined, giving two kunjams. Garlands for human use have both lower ends joined, giving only one kunjam.

Each Hindu deity has a unique garland:

* Lalitha wears hibiscus
* Vishnu wears tulasi leaves
* Shiva wears bilva leaves
* Subrahmanya wears jasmine
* Lakshmi wears red lotus
* Sarasvati wears white lotus
* Durga wears nerium oleander
* Vinayaka wears dūrvā grass

The tradition of garlanding statues as a sign of respect extends to respected non-divine beings, including ancient King Perumbidugu Mutharaiyar II and the innovative colonial administrator Mark Cubbon.

Nepal

A reference to a garland is found in the Nepalese national anthem, Sayaun Thunga Phulka. The first line reads, "Woven from hundreds of flowers, we are one garland that's Nepali."

Christendom

In Christian countries, garlands are often used as Christmas decorations, such as being wrapped around a Christmas tree.

[Image: a two-layer jasmine garland]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2082 2024-03-07 00:02:53

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2084) Miner

Gist

A miner is a person who makes a living digging coal, salt, gold, minerals, or other natural resources out of the earth. The root here is the noun mine—not the word that possessive toddlers like to shout, but the one that describes a man-made underground network of tunnels and quarries.

Summary

Mining is the process of digging things out of the ground. Any material that cannot be grown must be mined. Mining things from the ground is called extraction. Mining can include extraction of metals and minerals, like coal, diamond, gold, silver, platinum, copper, tin and iron. Mining can also include other things like oil and natural gas.

Some mining is done by scraping away the soil (dirt) from the top of the ground. This is called surface mining. Some mining is done by going deep underground into a mine shaft. This is called underground mining. Some mining, such as gold mining, is done in other ways. Gold can be mined by searching in the bed of a river or other stream of water to remove the flakes of gold. This is called panning or placer mining.

A worker in a mine is called a miner. Underground mining is a dangerous job, and many mines have accidents. Hundreds of miners die every year from accidents, mostly in poor countries. Safety rules and special safety equipment are used to try to protect miners from accidents. Underground coal mining is especially dangerous because coal can give off poisonous and explosive gases.

Some towns are mining towns. People live there because they can make money as miners or by doing things for miners. When mining stops the town may become a ghost town.

Mining operations often make the environment worse, both during the mining activity and after the mine has closed. Hence, most countries have passed regulations to decrease the effect of mining. Work safety has long been a concern, and has been improved in mines.

Details

A miner is a person who extracts ore, coal, chalk, clay, or other minerals from the earth through mining. There are two senses in which the term is used. In its narrowest sense, a miner is someone who works at the rock face; cutting, blasting, or otherwise working and removing the rock. In a broader sense, a "miner" is anyone working within a mine, not just a worker at the rock face.

Mining is one of the most dangerous trades in the world. In some countries, miners lack social guarantees and in case of injury may be left to cope without assistance.

In regions with a long mining tradition, many communities have developed cultural traditions and aspects specific to the various regions, in the forms of particular equipment, symbolism, music, and the like.

Roles

Miners fill many different roles, and many of these roles are specific to a type of mining, such as coal mining. Roles considered to be "miners" in the narrower sense have included:

* Hewer (also known as "cake" or "pickman"), whose job was to hew the rock.
* Collier, a hewer who hews coal with a pick.
* Driller, who works a rock drill to bore holes for placing dynamite or other explosives.

Other roles within mines that did not involve breaking rock (and thus fit the broader definition) have included:

* Loader (also called a "bandsman"), who loads the mining carts with coal at the coal face.
* Putter (also known as a "drags-man"), who works the carts around the mine.
* Barrow-man, who transported the broken coal from the face in wheelbarrows.
* Hurrier, who transported coal carts from a mine to the surface.
* Timberman, who fashions and installs timber supports for the walls and ceiling of an underground mine.

In addition to miners working at the seam, a mine employs other workers in supporting duties. Besides the office staff of various sorts, these may include:

* Brakesman, who operates the winding engine.
* Breaker boy, who removes impurities from coal.
* Emergency structure engineer, who deals with cave-ins when called upon.

Modern miners

Mining engineers use the principles of math and science to develop practical solutions to technical problems for miners. In most cases, a bachelor's degree in engineering, mining engineering or geological engineering is required. Because technology is constantly changing, miners and mining engineers need to continue their education.

The basics of mining engineering include finding, extracting, and preparing minerals, metals and coal. These mined products are used for electric power generation and in manufacturing industries. Mining engineers also supervise the construction of underground mine operations and create ways to transport the extracted minerals to processing plants.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2083 2024-03-08 00:12:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2085) Psychology

Gist

Psychology is the scientific study of the mind and behavior. Psychologists are actively involved in studying and understanding mental processes, brain functions, and behavior.

Summary

Psychology is the study of mind and behavior. Its subject matter includes the behavior of humans and nonhumans, both conscious and unconscious phenomena, and mental processes such as thoughts, feelings, and motives. Psychology is an academic discipline of immense scope, crossing the boundaries between the natural and social sciences. Biological psychologists seek an understanding of the emergent properties of brains, linking the discipline to neuroscience. As social scientists, psychologists aim to understand the behavior of individuals and groups.

A professional practitioner or researcher involved in the discipline is called a psychologist. Some psychologists can also be classified as behavioral or cognitive scientists. Some psychologists attempt to understand the role of mental functions in individual and social behavior. Others explore the physiological and neurobiological processes that underlie cognitive functions and behaviors.

Psychologists are involved in research on perception, cognition, attention, emotion, intelligence, subjective experiences, motivation, brain functioning, and personality. Psychologists' interests extend to interpersonal relationships, psychological resilience, family resilience, and other areas within social psychology. They also consider the unconscious mind. Research psychologists employ empirical methods to infer causal and correlational relationships between psychosocial variables. Some, but not all, clinical and counseling psychologists rely on symbolic interpretation.

While psychological knowledge is often applied to the assessment and treatment of mental health problems, it is also directed towards understanding and solving problems in several spheres of human activity. By many accounts, psychology ultimately aims to benefit society. Many psychologists are involved in some kind of therapeutic role, practicing psychotherapy in clinical, counseling, or school settings. Other psychologists conduct scientific research on a wide range of topics related to mental processes and behavior. Typically the latter group of psychologists work in academic settings (e.g., universities, medical schools, or hospitals). Another group of psychologists is employed in industrial and organizational settings. Yet others are involved in work on human development, aging, sports, health, forensic science, education, and the media.

Details

Psychology is a scientific discipline that studies mental states and processes and behaviour in humans and other animals.

The discipline of psychology is broadly divisible into two parts: a large profession of practitioners and a smaller but growing science of mind, brain, and social behaviour. The two have distinctive goals, training, and practices, but some psychologists integrate the two.

Early history

In Western culture, contributors to the development of psychology came from many areas, beginning with philosophers such as Plato and Aristotle. Hippocrates philosophized about basic human temperaments (e.g., choleric, sanguine, melancholic) and their associated traits. Informed by the biology of his time, he speculated that physical qualities, such as yellow bile or too much blood, might underlie differences in temperament (see also humour). Aristotle postulated the brain to be the seat of the rational human mind, and in the 17th century René Descartes argued that the mind gives people the capacities for thought and consciousness: the mind “decides” and the body carries out the decision—a dualistic mind-body split that modern psychological science is still working to overcome. Two figures who helped to found psychology as a formal discipline and science in the 19th century were Wilhelm Wundt in Germany and William James in the United States. James’s The Principles of Psychology (1890) defined psychology as the science of mental life and provided insightful discussions of topics and challenges that anticipated much of the field’s research agenda a century later.

During the first half of the 20th century, however, behaviourism dominated most of American academic psychology. In 1913 John B. Watson, one of the influential founders of behaviourism, urged reliance on only objectively measurable actions and conditions, effectively removing the study of consciousness from psychology. He argued that psychology as a science must deal exclusively with directly observable behaviour in lower animals as well as humans, emphasized the importance of rewarding only desired behaviours in child rearing, and drew on principles of learning through classical conditioning (based on studies with dogs by the Russian physiologist Ivan Pavlov and thus known as Pavlovian conditioning). In the United States most university psychology departments became devoted to turning psychology away from philosophy and into a rigorous empirical science.

Behaviourism

Beginning in the 1930s, behaviourism flourished in the United States, with B.F. Skinner leading the way in demonstrating the power of operant conditioning through reinforcement. Behaviourists in university settings conducted experiments on the conditions controlling learning and “shaping” behaviour through reinforcement, usually working with laboratory animals such as rats and pigeons. Skinner and his followers explicitly excluded mental life, viewing the human mind as an impenetrable “black box,” open only to conjecture and speculative fictions. Their work showed that social behaviour is readily influenced by manipulating specific contingencies and by changing the consequences or reinforcement (rewards) to which behaviour leads in different situations. Changes in those consequences can modify behaviour in predictable stimulus-response (S-R) patterns. Likewise, a wide range of emotions, both positive and negative, may be acquired through processes of conditioning and can be modified by applying the same principles.

Concurrently, in a curious juxtaposition, the psychoanalytic theories and therapeutic practices developed by the Vienna-trained physician Sigmund Freud and his many disciples—beginning early in the 20th century and enduring for many decades—were undermining the traditional view of human nature as essentially rational. Freudian theory made reason secondary: for Freud, the unconscious and its often socially unacceptable irrational motives and desires, particularly the sexual and aggressive, were the driving force underlying much of human behaviour and mental illness. Making the unconscious conscious became the therapeutic goal of clinicians working within this framework.

Freud proposed that much of what humans feel, think, and do is outside awareness, self-defensive in its motivations, and unconsciously determined. Much of it also reflects conflicts grounded in early childhood that play out in complex patterns of seemingly paradoxical behaviours and symptoms. His followers, the ego psychologists, emphasized the importance of the higher-order functions and cognitive processes (e.g., competence motivation, self-regulatory abilities) as well as the individual’s psychological defense mechanisms. They also shifted their focus to the roles of interpersonal relations and of secure attachment in mental health and adaptive functioning, and they pioneered the analysis of these processes in the clinical setting.

After World War II and Sputnik

After World War II, American psychology, particularly clinical psychology, grew into a substantial field in its own right, partly in response to the needs of returning veterans. The growth of psychology as a science was stimulated further by the launching of Sputnik in 1957 and the opening of the Russian-American space race to the Moon. As part of this race, the U.S. government fueled the growth of science. For the first time, massive federal funding became available, both to support behavioral research and to enable graduate training. Psychology became both a thriving profession of practitioners and a scientific discipline that investigated all aspects of human social behaviour, child development, and individual differences, as well as the areas of animal psychology, sensation, perception, memory, and learning.

Training in clinical psychology was heavily influenced by Freudian psychology and its offshoots. But some clinical researchers, working with both normal and disturbed populations, began to develop and apply methods focusing on the learning conditions that influence and control social behaviour. This behaviour therapy movement analyzed problematic behaviours (e.g., aggressiveness, bizarre speech patterns, smoking, fear responses) in terms of the observable events and conditions that seemed to influence the person’s problematic behaviour. Behavioral approaches led to innovations for therapy by working to modify problematic behaviour not through insight, awareness, or the uncovering of unconscious motivations but by addressing the behaviour itself. Behaviourists attempted to modify the maladaptive behaviour directly, examining the conditions controlling the individual’s current problems, not their possible historical roots. They also intended to show that such efforts could be successful without the symptom substitution that Freudian theory predicted. Freudians believed that removing the troubling behaviour directly would be followed by new and worse problems. Behaviour therapists showed that this was not necessarily the case.

To begin exploring the role of genetics in personality and social development, psychologists compared the similarity in personality shown by people who share the same genes or the same environment. Twin studies compared monozygotic (identical) as opposed to dizygotic (fraternal) twins, raised either in the same or in different environments. Overall, these studies demonstrated the important role of heredity in a wide range of human characteristics and traits, such as those of the introvert and extravert, and indicated that the biological-genetic influence was far greater than early behaviourism had assumed. At the same time, it also became clear that how such dispositions are expressed in behaviour depends importantly on interactions with the environment in the course of development, beginning in utero.

Impact and aftermath of the cognitive revolution

By the early 1960s the relevance of the Skinnerian approach for understanding complex mental processes was seriously questioned. The linguist Noam Chomsky’s critical review of Skinner’s theory of “verbal behaviour” in 1959 showed that it could not properly account for human language acquisition. It was one of several triggers for a paradigm shift that by the mid-1960s became the “cognitive revolution,” which compellingly argued against behaviourism and led to the development of cognitive science. In conjunction with concurrent analyses and advances in areas from computer science and artificial intelligence to neuroscience, genetics, and applications of evolutionary theory, the scientific study of the mind and mental activity quickly became the foundation for much of the evolving new psychological science in the 21st century.

Psychological scientists demonstrated that organisms have innate dispositions and that human brains are distinctively prepared for diverse higher-level mental activities, from language acquisition to mathematics, as well as space perception, thinking, and memory. They also developed and tested diverse theoretical models for conceptualizing mental representations in complex information processing conducted at multiple levels of awareness. They asked such questions as: How does the individual’s stored knowledge give rise to the patterns or networks of mental representations activated at a particular time? How is memory organized? In a related direction, the analysis of visual perception took increasing account of how the features of the environment (e.g., the objects, places, and other animals in one’s world) provide information, the perception of which is vital for the organism’s survival. Consequently, information about the possibilities and dangers of the environment, on the one side, and the animal’s dispositions and adaptation efforts, on the other, become inseparable: their interactions become the focus of research and theory building.

Concurrently, to investigate personality, individual differences, and social behaviour, a number of theorists made learning theories both more social (interpersonal) and more cognitive. They moved far beyond the earlier conditioning and reward-and-punishment principles, focusing on how a person’s characteristics interact with situational opportunities and demands. Research demonstrated the importance of learning through observation from real and symbolic models, showing that it occurs spontaneously and cognitively without requiring any direct reinforcement. Likewise, studies of the development of self-control and the ability to delay gratification in young children showed that it is crucially important how the situation and the temptations are cognitively appraised: when the appraisal changes, so does the behaviour. Thus, the focus shifted from reinforcement and “stimulus control” to the mental mechanisms that enable self-control.

Traditional personality-trait taxonomies continued to describe individuals and types using such terms as introversion-extraversion and sociable-hostile, based on broad trait ratings. In new directions, consistent with developments in cognitive science and social psychology, individual differences were reconceptualized in terms of cognitive social variables, such as people’s constructs (encoding of information), personal goals and beliefs, and competencies and skills. Research examined the nature of the consistencies and variability that characterize individuals distinctively across situations and over time and began to identify how different types of individuals respond to different types of psychological situations. The often surprising findings led to new models of cognitive and affective information-processing systems.

In clinical applications, cognitive-behaviour therapy (CBT) was developed. CBT focuses on identifying and changing negative, inaccurate, or otherwise maladaptive beliefs and thought patterns through a combination of cognitive and behaviour therapy. It helps people to change how they think and feel about themselves and others. In time, these cognitive-behavioral treatment innovations, often supplemented with medications, were shown to be useful for treating diverse problems, including disabling fears, self-control difficulties, addictions, and depression.

In social psychology, beginning in the early 1970s, social cognition—how people process social information about other people and the self—became a major area of study. Research focused on such topics as the nature and functions of self-concepts and self-esteem; cultural differences in information processing; interpersonal relations and social communication; attitudes and social-influence processes; altruism, aggression, and obedience; motivation, emotion, planning, and self-regulation; and the influence of people’s dispositions and characteristics on their dealings with different types of situations and experiences. Recognizing that much information processing occurs at levels below awareness and proceeds automatically, research turned to the effects of subliminal (below awareness) stimuli on the activation of diverse kinds of mental representations, emotions, and social behaviours. Research at the intersection of social cognition and health psychology began to examine how people’s beliefs, positive illusions, expectations, and self-regulatory abilities may help them deal with diverse traumas and threats to their health and the stress that arises when trying to cope with diseases such as HIV/AIDS and cancer. Working with a variety of animal species, from mice and birds to higher mammals such as apes, researchers investigated social communication and diverse social behaviours, psychological characteristics, cognitive abilities, and emotions, searching for similarities and differences in comparison with humans.

In developmental psychology, investigators identified and analyzed with increasing precision the diverse perceptual, cognitive, and numerical abilities of infants and traced their developmental course, while others focused on life-span development and mental and behavioral changes in the aging process. Developmental research provided clear evidence that humans, rather than entering the world with a mental blank slate, are extensively prepared for all sorts of cognitive and skill development. At the same time, research also has yielded equally impressive evidence for the plasticity of the human brain and the possibilities for change in the course of development.

Linking mind, brain, and behaviour

Late in the 20th century, methods for observing the activity of the living brain were developed that made it possible to explore links between what the brain is doing and psychological phenomena, thus opening a window into the relationship between the mind, brain, and behaviour. The functioning of the brain enables everything one does, feels, and knows. To examine brain activity, functional magnetic resonance imaging (fMRI) is used to detect the changes in blood flow that accompany the activity of nerve cells in the brain. With the aid of computers, this information can be translated into images, which virtually "light up" the amount of activity in different areas of the brain as the person performs mental tasks and experiences different kinds of perceptions, images, thoughts, and emotions. These methods allow a much more precise and detailed analysis of the links between activity in the brain and the mental state a person experiences while responding to different types of stimuli and generating different thoughts and emotions. These can range, for example, from thoughts and images about what one fears and dreads to those directed at what one craves the most. The result of this technology is a virtual revolution for work that uses the biological level of neural activity to address questions that are of core interest for psychologists working in almost all areas of the discipline.

Social cognitive neuroscience

The advances described above led to the development in the early years of the 21st century of a new, highly popular field: social cognitive neuroscience (SCN). This interdisciplinary field asks questions about topics traditionally of interest to social psychologists, such as person perception, attitude change, and emotion regulation. It does so by using methods traditionally employed by cognitive neuroscientists, such as functional brain imaging and neuropsychological patient analysis. By integrating the theories and methods of its parent disciplines, SCN tries to understand the interactions between social behaviour, cognition, and brain mechanisms.

Epigenetics

The term epigenetic is used to describe the dynamic interplay between genes and the environment during the course of development. The study of epigenetics highlights the complex nature of the relationship between the organism’s genetic code, or genome, and the organism’s directly observable physical and psychological manifestations and behaviours. In contemporary use, the term refers to efforts to explain individual differences in physical as well as behavioral traits (e.g., hostility-aggression) in terms of the molecular mechanisms that affect the activity of genes, essentially turning on some genes and turning off others.

Epigenetic regulation of gene activity plays a critical role in the process of development, influencing the organism’s psychological and behavioral expressions. Thus, while the genome provides the possibilities, the environment determines which genes become activated. In the early 21st century there emerged evidence for the important role of the environment (e.g., in maternal behaviour with the newborn) in shaping the activity of genes. Epigenetic factors may serve as a critical biological link between the experiences of an individual and subsequent individual differences in brain and behaviour, both within and across generations. Epigenetic research points to the pathways through which environmental influence and psychological experiences may be transformed and transmitted at the biological level. It thus provides another route for the increasingly deep analysis of mind-brain-behaviour links at multiple levels of analysis, from the psychological to the biological.

The discoveries and advances of psychological science keep expanding its scope and tools and changing its structure and organization. For most of the 20th century, psychological science consisted of a variety of specialized subfields with little interconnection. They ranged from clinical psychology to the study of individual differences and personality, to social psychology, to industrial-organizational psychology, to community psychology, to the experimental study of such basic processes as memory, thinking, perception and sensation, to animal behaviour, and to physiological psychology. In larger academic psychology departments, the list got longer. The various subfields, each with its own distinct history and specialized mission, usually were bundled together within academic departments, essentially a loose federation of unrelated disciplines, each with its own training program and research agenda. Late in the 20th century this situation began to change, fueled in part by the rapid growth of developments in cognitive science and social cognitive neuroscience, including the discovery of new methods for studying cognition, emotion, the brain, and genetic influences on mind and behaviour.

In the early years of the 21st century, psychology became an increasingly integrative science at the intersection or hub of diverse other disciplines, from biology, neurology, and economics to sociology and anthropology. For example, stimulated by Amos Tversky’s and Daniel Kahneman’s theory of decision making under risk, new areas developed, including behavioral economics and decision making, often being taught by psychologists in business schools. Likewise, advances in cognitive neuroscience led to the subfield of neuroeconomics.

In another direction, links deepened between psychology and law. This connection reflected new findings in psychology about the nature of human social behaviour, as well as the fallibility of eyewitness testimony in legal trials and the distortions in retrospective memory.

Likewise, with recognition of the role of mental processes and self-care behaviour in the maintenance of health, the fields of behavioral medicine and health psychology emerged. These subfields study links between psychological processes, social behaviour, and health.

At the same time, within psychology, old sub-disciplinary boundaries were crossed more freely. Interdisciplinary teams often work on a common problem using different methods and tools that draw on multiple levels of analysis, from the social to the cognitive and to the biological.

Research methods

A wide range of research methods is used by psychological scientists to pursue their particular goals. To study verbal and nonverbal behaviour and mental processes in humans, these include questionnaires, ratings, self-reports, and case studies; tests of personality, attitudes, and intelligence; structured interviews; daily diary records; and direct observation and behaviour sampling outside the laboratory. Diverse laboratory measures are used to study perception, attention, memory, decision making, self-control, delay of gratification, and many other visual, cognitive, and emotional processes, at levels of both conscious and automatic or unconscious information processing.

Complex data-analysis methods

The astonishing growth in computational power that began in the final decades of the 20th century transformed research on methods of data analysis in psychology. More-flexible and more-powerful general linear models and mixed models became available. Similarly, for nonexperimental data, multiple regression analysis began to be augmented by structural equation models that allow for chains and webs of interrelationships and for analysis of extremely complex data. The availability of free, fast, and flexible software also began to change teaching in the measurement area.
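
The multiple regression mentioned above can be illustrated with a minimal sketch. This is not from the source; the variable names and data below are invented purely for illustration, and the fit uses ordinary least squares via NumPy.

import numpy as np

# Hypothetical example: predict a well-being score from two psychosocial
# predictors (hours of sleep and daily social contact). The "true" model
# generating the data is wellbeing = 2 + 0.8*sleep + 0.5*contact + noise.
rng = np.random.default_rng(0)
n = 200
sleep = rng.normal(7.0, 1.0, n)
contact = rng.normal(3.0, 1.5, n)
wellbeing = 2.0 + 0.8 * sleep + 0.5 * contact + rng.normal(0.0, 1.0, n)

# Multiple regression by ordinary least squares: stack an intercept column
# with the predictors and solve for the coefficients.
X = np.column_stack([np.ones(n), sleep, contact])
beta, _, _, _ = np.linalg.lstsq(X, wellbeing, rcond=None)
print("estimated intercept and slopes:", beta)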

Additional Information

Psychology is the study of mind and behavior. It encompasses the biological influences, social pressures, and environmental factors that affect how people think, act, and feel.

Gaining a richer and deeper understanding of psychology can help people achieve insights into their own actions as well as a better understanding of other people.

It's difficult to capture everything that psychology encompasses in just a brief definition, but topics such as development, personality, thoughts, feelings, emotions, motivations, and social behaviors represent just a portion of what psychology seeks to understand, predict, and explain.

Types of Psychology

Psychology is a broad and diverse field that encompasses the study of human thought, behavior, development, personality, emotion, motivation, and more. As a result, some different subfields and specialty areas have emerged. The following are some of the major areas of research and application within psychology:

* Abnormal psychology is the study of abnormal behavior and psychopathology. This specialty area is focused on research and treatment of a variety of mental disorders and is linked to psychotherapy and clinical psychology.
* Biological psychology (biopsychology) studies how biological processes influence the mind and behavior. This area is closely linked to neuroscience and utilizes tools such as MRI and PET scans to look at brain injury or brain abnormalities.
* Clinical psychology is focused on the assessment, diagnosis, and treatment of mental disorders.
* Cognitive psychology is the study of human thought processes including attention, memory, perception, decision-making, problem-solving, and language acquisition.
* Comparative psychology is the branch of psychology concerned with the study of animal behavior.
* Developmental psychology is an area that looks at human growth and development over the lifespan including cognitive abilities, morality, social functioning, identity, and other life areas.
* Forensic psychology is an applied field focused on using psychological research and principles in the legal and criminal justice system.
* Industrial-organizational psychology is a field that uses psychological research to enhance work performance and select employees.
* Personality psychology focuses on understanding how personality develops as well as the patterns of thoughts, behaviors, and characteristics that make each individual unique.
* Social psychology focuses on group behavior, social influences on individual behavior, attitudes, prejudice, conformity, aggression, and related topics.

[Image: psychology courses]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2084 2024-03-09 00:13:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2086) Numerical Analysis

Gist

Numerical analysis is a branch of mathematics that solves continuous problems using numeric approximation. It involves designing methods that give approximate but accurate numeric solutions, which is useful in cases where the exact solution is impossible or prohibitively expensive to calculate.

Summary

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.

Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers have calculated the required functions instead, but many of the same formulas continue to be used in software algorithms.

The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
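As a rough illustration (not part of the original text), the Babylonian value can be reproduced today with a few lines of Python using Heron's iteration, which repeatedly averages a guess x with 2/x; the starting guess and iteration count below are arbitrary illustrative choices.

# Minimal sketch: Heron's (Babylonian) iteration for the square root of 2.
def heron_sqrt2(iterations=5):
    x = 1.5                      # any positive starting guess will do
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0  # average of x and 2/x
    return x

print(heron_sqrt2())             # about 1.414213562..., accurate to many digits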

Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after translation into digits, it provides approximate solutions within specified error bounds.

Applications

The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:

* Advanced numerical methods are essential in making numerical weather prediction feasible.
* Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
* Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
* Hedge funds (private investment funds) use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
* Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
* Insurance companies use numerical programs for actuarial analysis.

Details

Numerical analysis is an area of mathematics and computer science that creates, analyzes, and implements algorithms for obtaining numerical solutions to problems involving continuous variables. Such problems arise throughout the natural sciences, social sciences, engineering, medicine, and business. Since the mid 20th century, the growth in power and availability of digital computers has led to an increasing use of realistic mathematical models in science and engineering, and numerical analysis of increasing sophistication is needed to solve these more detailed models of the world. The formal academic area of numerical analysis ranges from quite theoretical mathematical studies to computer science issues.

With the increasing availability of computers, the new discipline of scientific computing, or computational science, emerged during the 1980s and 1990s. The discipline combines numerical analysis, symbolic mathematical computations, computer graphics, and other areas of computer science to make it easier to set up, solve, and interpret complicated mathematical models of the real world.

Common perspectives in numerical analysis

Numerical analysis is concerned with all aspects of the numerical solution of a problem, from the theoretical development and understanding of numerical methods to their practical implementation as reliable and efficient computer programs. Most numerical analysts specialize in small subfields, but they share some common concerns, perspectives, and mathematical methods of analysis. These include the following:

* When presented with a problem that cannot be solved directly, they try to replace it with a “nearby problem” that can be solved more easily. Examples are the use of interpolation in developing numerical integration methods and root-finding methods.
* There is widespread use of the language and results of linear algebra, real analysis, and functional analysis (with its simplifying notation of norms, vector spaces, and operators).
* There is a fundamental concern with error, its size, and its analytic form. When approximating a problem, it is prudent to understand the nature of the error in the computed solution. Moreover, understanding the form of the error allows creation of extrapolation processes to improve the convergence behaviour of the numerical method.
* Numerical analysts are concerned with stability, a concept referring to the sensitivity of the solution of a problem to small changes in the data or the parameters of the problem. Numerical methods for solving problems should be no more sensitive to changes in the data than the original problem to be solved. Moreover, the formulation of the original problem should be stable or well-conditioned.
* Numerical analysts are very interested in the effects of using finite precision computer arithmetic. This is especially important in numerical linear algebra, as large problems contain many rounding errors.
* Numerical analysts are generally interested in measuring the efficiency (or “cost”) of an algorithm. For example, the use of Gaussian elimination to solve a linear system Ax = b containing n equations will require approximately 2n³/3 arithmetic operations (a small operation-count sketch follows this list). Numerical analysts would want to know how this method compares with other methods for solving the problem.
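As a rough check of the 2n³/3 estimate quoted above, the following Python sketch (an illustrative count, not a production solver) tallies the arithmetic operations performed during the forward-elimination phase of naive Gaussian elimination, including updates to the right-hand side.

# Count arithmetic operations in the forward-elimination phase of naive
# Gaussian elimination (no pivoting) and compare with the 2n^3/3 estimate.
def elimination_op_count(n):
    ops = 0
    for k in range(n - 1):            # eliminate entries below pivot column k
        for i in range(k + 1, n):     # each row below the pivot
            ops += 1                  # one division to form the multiplier
            ops += 2 * (n - k)        # one multiply and one subtract per updated entry
    return ops

for n in (10, 100, 1000):
    print(n, elimination_op_count(n), round(2 * n**3 / 3))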

Modern applications and computer software

Numerical analysis and mathematical modeling are essential in many areas of modern life. Sophisticated numerical analysis software is commonly embedded in popular software packages (e.g., spreadsheet programs) and allows fairly detailed models to be evaluated, even when the user is unaware of the underlying mathematics. Attaining this level of user transparency requires reliable, efficient, and accurate numerical analysis software, and it requires problem-solving environments (PSE) in which it is relatively easy to model a given situation. PSEs are usually based on excellent theoretical mathematical models, made available to the user through a convenient graphical user interface.

Applications

Computer-aided engineering (CAE) is an important subject within engineering, and some quite sophisticated PSEs have been developed for this field. A wide variety of numerical analysis techniques is involved in solving such mathematical models. The models follow the basic Newtonian laws of mechanics, but there is a variety of possible specific models, and research continues on their design. One important CAE topic is that of modeling the dynamics of moving mechanical systems, a technique that involves both ordinary differential equations and algebraic equations (generally nonlinear). The numerical analysis of these mixed systems, called differential-algebraic systems, is quite difficult but necessary in order to model moving mechanical systems. Building simulators for cars, planes, and other vehicles requires solving differential-algebraic systems in real time.

Another important application is atmospheric modeling. In addition to improving weather forecasts, such models are crucial for understanding the possible effects of human activities on the Earth’s climate. In order to create a useful model, many variables must be introduced. Fundamental among these are the velocity V(x, y, z, t), pressure P(x, y, z, t), and temperature T(x, y, z, t), all given at position (x, y, z) and time t. In addition, various chemicals exist in the atmosphere, including ozone, certain chemical pollutants, carbon dioxide, and other gases and particulates, and their interactions have to be considered. The underlying equations for studying V(x, y, z, t), P(x, y, z, t), and T(x, y, z, t) are partial differential equations; and the interactions of the various chemicals are described using some quite difficult ordinary differential equations. Many types of numerical analysis procedures are used in atmospheric modeling, including computational fluid mechanics and the numerical solution of differential equations. Researchers strive to include ever finer detail in atmospheric models, primarily by incorporating data over smaller and smaller local regions in the atmosphere and implementing their models on highly parallel supercomputers.

Modern businesses rely on optimization methods to decide how to allocate resources most efficiently. For example, optimization methods are used for inventory control, scheduling, determining the best location for manufacturing and storage facilities, and investment strategies.

Computer software

Software to implement common numerical analysis procedures must be reliable, accurate, and efficient. Moreover, it must be written so as to be easily portable between different computer systems. Since about 1970, a number of government-sponsored research efforts have produced specialized, high-quality numerical analysis software.

The most popular programming language for implementing numerical analysis methods is Fortran, a language developed in the 1950s that continues to be updated to meet changing needs. Other languages, such as C, C++, and Java, are also used for numerical analysis. Another approach for basic problems involves creating higher level PSEs, which often contain quite sophisticated numerical analysis, programming, and graphical tools. Best known of these PSEs is MATLAB, a commercial package that is arguably the most popular way to do numerical computing. Two popular computer programs for handling algebraic-analytic mathematics (manipulating and displaying formulas) are Maple and Mathematica.

Historical background

Numerical algorithms are at least as old as the Egyptian Rhind papyrus (c. 1650 BC), which describes a root-finding method for solving a simple equation. Ancient Greek mathematicians made many further advancements in numerical methods. In particular, Eudoxus of Cnidus (c. 400–350 BC) created and Archimedes (c. 285–212/211 BC) perfected the method of exhaustion for calculating lengths, areas, and volumes of geometric figures. When used as a method to find approximations, it is in much the spirit of modern numerical integration; and it was an important precursor to the development of calculus by Isaac Newton (1642–1727) and Gottfried Leibniz (1646–1716).

Calculus, in particular, led to accurate mathematical models for physical reality, first in the physical sciences and eventually in the other sciences, engineering, medicine, and business. These mathematical models are usually too complicated to be solved explicitly, and the effort to obtain approximate, but highly useful, solutions gave a major impetus to numerical analysis. Another important aspect of the development of numerical methods was the creation of logarithms about 1614 by the Scottish mathematician John Napier and others. Logarithms replaced tedious multiplication and division (often involving many digits of accuracy) with simple addition and subtraction after converting the original values to their corresponding logarithms through special tables. (The desire to mechanize such calculations spurred the English inventor Charles Babbage (1791–1871) to design his early mechanical computing engines.)

Newton created a number of numerical methods for solving a variety of problems, and his name is still attached to many generalizations of his original ideas. Of particular note is his work on finding roots (solutions) for general functions and finding a polynomial equation that best fits a set of data (“polynomial interpolation”). Following Newton, many of the mathematical giants of the 18th and 19th centuries made major contributions to numerical analysis. Foremost among these were the Swiss Leonhard Euler (1707–1783), the French Joseph-Louis Lagrange (1736–1813), and the German Carl Friedrich Gauss (1777–1855).
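As a hedged illustration of the polynomial interpolation idea attributed to Newton above, the Python sketch below implements the divided-difference form of the interpolating polynomial; the sample points are arbitrary illustrative choices (they lie on y = x² + 1, so the interpolant reproduces that function exactly).

# Minimal sketch of Newton's divided-difference polynomial interpolation.
def divided_differences(xs, ys):
    # Returns the coefficients of the Newton form of the polynomial
    # passing through the points (xs[i], ys[i]).
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    # Evaluate the Newton-form polynomial at x by Horner-like nesting.
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]   # points on y = x^2 + 1
c = divided_differences(xs, ys)
print(newton_eval(c, xs, 1.5))                          # 3.25, as expected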

One of the most important and influential of the early mathematical models in science was that given by Newton to describe the effect of gravity: the gravitational force on a body of mass m at distance r from the Earth's centre has magnitude F = G m m_E/r², where m_E is the mass of the Earth and G is the universal gravitational constant, and the force is directed toward the centre of the Earth. Newton's model has led to many problems that require solution by approximate means, usually involving ordinary differential equations.

Following the development by Newton of his basic laws of physics, many mathematicians and physicists applied these laws to obtain mathematical models for solid and fluid mechanics. Civil and mechanical engineers still base their models on this work, and numerical analysis is one of their basic tools. In the 19th century, phenomena involving heat, electricity, and magnetism were successfully modeled; and in the 20th century, relativistic mechanics, quantum mechanics, and other theoretical constructs were created to extend and improve the applicability of earlier ideas. One of the most widespread numerical analysis techniques for working with such models involves approximating a complex, continuous surface, structure, or process by a finite number of simple elements. Known as the finite element method (FEM), this technique was developed by the American engineer Harold Martin and others to help the Boeing Company analyze stress forces on new jet wing designs in the 1950s. FEM is widely used in stress analysis, heat transfer, fluid flow, and torsion analysis.

Solving differential and integral equations

Most mathematical models used in the natural sciences and engineering are based on ordinary differential equations, partial differential equations, and integral equations. Numerical methods for solving these equations are primarily of two types. The first type approximates the unknown function in the equation by a simpler function, often a polynomial or piecewise polynomial (spline) function, chosen to closely follow the original equation. The finite element method discussed above is the best known approach of this type. The second type of numerical method approximates the equation of interest, usually by approximating the derivatives or integrals in the equation. The approximating equation has a solution at a discrete set of points, and this solution approximates that of the original equation. Such numerical procedures are often called finite difference methods. Most initial value problems for ordinary differential equations and partial differential equations are solved in this way. Numerical methods for solving differential and integral equations often involve both approximation theory and the solution of quite large linear and nonlinear systems of equations.
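As a small, hedged example of a finite difference method of the second type described above, the forward Euler scheme replaces the derivative in an initial value problem y' = f(t, y) by a difference quotient, giving y_{k+1} = y_k + h f(t_k, y_k). The Python sketch below (the test equation and step size are illustrative choices) approximates y(1) for y' = y, y(0) = 1.

# Forward Euler: a finite difference scheme for y' = f(t, y), y(t0) = y0.
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)   # replace the derivative by a difference quotient
        t = t + h
    return y

# Example: y' = y, y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)   # approximate y(1)
print(approx)   # about 2.7169..., close to e = 2.71828...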

Effects of computer hardware

Almost all numerical computation is carried out on digital computers. The structure and properties of digital computers affect the structure of numerical algorithms, especially when solving large linear systems. First and foremost, the computer arithmetic must be understood. Historically, computer arithmetic varied greatly between different computer manufacturers, and this was a source of many problems when attempting to write software that could be easily ported between different computers. Variations were reduced significantly in 1985 with the development of the Institute of Electrical and Electronics Engineers (IEEE) standard for computer floating-point arithmetic (IEEE 754). The IEEE standard has been adopted by all personal computers and workstations as well as most mainframe computers.
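The effect of finite precision under the IEEE floating-point standard can be seen in a one-line experiment; the Python sketch below is purely illustrative of double-precision rounding, not of any particular algorithm.

# Finite precision in IEEE 754 double arithmetic: 0.1 has no exact binary
# representation, so rounding error is visible even in a single addition.
a = 0.1 + 0.2
print(a == 0.3)               # False
print(a - 0.3)                # a tiny nonzero residual, on the order of 1e-17
print(abs(a - 0.3) < 1e-12)   # comparisons should therefore allow a tolerance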

For large-scale problems, especially in numerical linear algebra, it is important to know how the elements of an array A or a vector x are stored in memory. Knowing this can lead to much faster transfer of numbers from the memory into the arithmetic registers of the computer, thus leading to faster programs. A somewhat related topic is that of “pipelining.” This is a widely used technique whereby the executions of computer operations are overlapped, leading to faster execution. Machines with the same basic clock speed can have very different program execution times due to differences in pipelining and differences in the way memory is accessed.

Most personal computers are sequential in their operation, but parallel computers are being used ever more widely in public and private research institutions. (See supercomputer.) Shared-memory parallel computers have several independent central processing units (CPUs) that all access the same computer memory, whereas distributed-memory parallel computers have separate memory for each CPU. Another form of parallelism is the use of pipelining of vector arithmetic operations. Numerical algorithms must be modified to run most efficiently on whatever combination of methods a particular computer employs.

Additional Information

Numerical analysis refers to the in-depth study of algorithms and numerical approximation. It is considered a part of both computer science and the mathematical sciences, and it is used in several disciplines such as engineering, medicine, and social science, because this branch is concerned with understanding, creating, and implementing algorithms for common problems.

Some commonly used numerical computation methods include methods for differential equations (which, for example, help predict planetary motion), numerical linear algebra, and more.

Applications of Numerical Analysis:

* A major benefit of numerical analysis is that scientific models which were once difficult to build and solve have become comparatively tractable.
* It helps in deriving formulas and solutions that give near-exact answers to a variety of problems.
* Numerical weather predictions have become easier with advanced numerical analysis.
* We can also better understand how human activities influence climatic conditions, because the many variables involved can be represented symbolically and adjusted easily within the numerical equations.
* Through numerical analysis, scientists also try to calculate spacecraft trajectories.
* Insurance companies and hedge funds (which must deal with future pricing) also use numerical tools for actuarial analysis and for valuing assets more precisely than other market participants.
* Use by airlines: Airline companies use numerical analysis for understanding fuel needs, ticket pricing, and other such mathematical decisions.
* Car companies also rely on numerical analysis to make their vehicles safer by simulating crashes.
* In computer-aided engineering, several kinds of numerical analysis are used; in particular, mixed differential-algebraic systems, which couple two different kinds of equations to model moving mechanical systems, require numerical analysis.
* Even in the production and manufacturing sector, numerical analysis is useful for deciding facility locations, inventory levels, and other such decisions.

Types of numerical methods:

Numerical analysis draws on geometry, algebra, calculus, and other mathematical tools in which the quantities involved are not constant. The following are common types of numerical methods:

* Linear and nonlinear equations: Linear equations, usually of the form px = q, are found in almost all areas; we use them to relate demand and supply, time and speed, and much more. Similarly, nonlinear equations, in which the goal is to find the roots x, are helpful in several fields such as aeronautics.

* Integration and differentiation: Almost all of us have solved mathematical problems involving integration and differentiation, and these two operations also form an important part of numerical analysis. The differential equations treated numerically are further divided into ordinary differential equations and partial differential equations.

* Approximation theory (Use of nearby problems):

Approximations are widely used for computing numerical functions. The approach is based on simple arithmetic operations (addition, subtraction, multiplication, and division), yet it can handle complicated problems, for example through polynomial approximation.

One of the widely used approaches in numerical computation is the use of nearby problems. It is applied when direct solution of a problem is difficult: a nearby problem that is easier to handle is substituted for the original one and solved in its place.

An example of this approach is interpolation, which replaces a difficult integrand with a simpler function in numerical integration.

* Vectors and functional analysis: The language of vectors and functional analysis is also used in formulating and solving numerical problems.

Besides these simple methods, more elaborate theorems and rules are also used in numerical analysis. Examples include the Euler method, the Taylor series method, the shooting method, the RK-2 and RK-4 (Runge–Kutta) methods, and the trapezoidal rule. For computer-automated calculation and analysis, the finite element approach is widely used: the differential equations are first converted into large systems of simple algebraic equations, which the computer can then solve easily.
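Of the rules named above, the trapezoidal rule is among the simplest to code. The Python sketch below (the integrand and interval are arbitrary illustrative choices) approximates an integral by summing trapezoid areas over equal subintervals.

# Composite trapezoidal rule: approximate the integral of f over [a, b]
# using n trapezoids of equal width h = (b - a)/n.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints carry half weight
    for k in range(1, n):
        total += f(a + k * h)     # interior points carry full weight
    return h * total

# Example: the integral of x^2 from 0 to 1 is exactly 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0, 100))   # about 0.33335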

Conclusion:

In this article, we looked at one of the most important branches of mathematics used for approximation and calculation, called numerical analysis. From the introduction of numerical analysis to its applications, we saw how it plays a major role in scientific, engineering, and other disciplines. We also looked at different types of numerical computation methods, such as linear equations, integration, and approximations, which can be chosen according to the situation to estimate the behaviour of a real-life problem.

ma214.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2085 2024-03-10 00:01:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2087) Business Process Outsourcing

Gist

BPO is the abbreviation for business process outsourcing, which refers to when companies outsource business processes to a third-party (external) company.

Summary

Business process outsourcing (BPO) is a business practice in which an organization contracts with an external service provider to perform an essential business function or task.

An organization typically contracts with another business for such services after it has identified a process that, although necessary for its operations, is not part of its core value proposition. This step requires a good understanding of the processes within the organization and strong business process management.

Many organizations consider processes that are performed the same or similarly from company to company, such as payroll and accounting, as good candidates for BPO.

BPO typically offers flexibility and cost efficiency to organizations that implement it. Companies calculate that outsourcing these processes to a provider that specializes in them could deliver better results.

Details

Business Process Outsourcing (BPO) is a subset of outsourcing that involves the contracting of the operations and responsibilities of a specific business process to a third-party service provider. Originally, this was associated with manufacturing firms, such as Coca-Cola that outsourced large segments of its supply chain.

BPO is typically categorized into back office outsourcing, which includes internal business functions such as human resources or finance and accounting, and front office outsourcing, which includes customer-related services such as contact centre (customer care) services.

BPO that is contracted outside a company's country is called offshore outsourcing. BPO that is contracted to a company's neighbouring (or nearby) country is called nearshore outsourcing.

Often the business processes are information technology-based, and are referred to as ITES-BPO, where ITES stands for Information Technology Enabled Service. Knowledge process outsourcing (KPO) and legal process outsourcing (LPO) are some of the sub-segments of business process outsourcing industry.

Benefits

The main advantage of any BPO is the way in which it helps increase a company's flexibility. In the early 2000s, BPO was mostly about cost efficiency, which allowed a certain level of flexibility at the time. Due to technological advances and changes in the industry (specifically the move to more service-based rather than product-based contracts), companies that choose to outsource their back office increasingly look for time flexibility and direct quality control. Business process outsourcing enhances the flexibility of an organization in different ways:

* Most services provided by BPO vendors are offered on a fee-for-service basis, using business models such as Remote In-Sourcing or similar software development and outsourcing models. This can help a company become more flexible by transforming fixed costs into variable costs. A variable cost structure helps a company respond to changes in required capacity and does not require a company to invest in assets, thereby making the company more flexible.
* Another way in which BPO contributes to a company’s flexibility is that a company is able to focus on its core competencies, without being burdened by the demands of bureaucratic restraints. Key employees are herewith released from performing non-core or administrative processes and can invest more time and energy in building the firm’s core businesses. The key lies in knowing which of the main value drivers to focus on – customer intimacy, product leadership, or operational excellence. Focusing more on one of these drivers may help a company create a competitive edge.
* A third way in which BPO increases organizational flexibility is by increasing the speed of business processes. Supply chain management with the effective use of supply chain partners and business process outsourcing increases the speed of several business processes, such as throughput in the case of a manufacturing company.
* Finally, flexibility is seen as a stage in the organizational life cycle: A company can maintain growth goals while avoiding standard business bottlenecks. BPO therefore allows firms to retain their entrepreneurial speed and agility, which they would otherwise sacrifice in order to become efficient as they expanded. It avoids a premature internal transition from its informal entrepreneurial phase to a more bureaucratic mode of operation.
* A company may be able to grow at a faster pace as it will be less constrained by large capital expenditures for people or equipment that may take years to amortize, may become outdated or turn out to be a poor match for the company over time.

Although the above-mentioned arguments favour the view that BPO increases the flexibility of organizations, management needs to be careful with its implementation, as there are issues that work against these advantages. Among the problems that arise in practice are failure to meet service levels, unclear contractual issues, changing requirements and unforeseen charges, and a dependence on the BPO provider that reduces flexibility. Consequently, these challenges need to be considered before a company decides to engage in business process outsourcing.

A further issue is that in many cases there is little that differentiates the BPO providers other than size. They often provide similar services, have similar geographic footprints, leverage similar technology stacks, and have similar Quality Improvement approaches.

Threats

Risk is the major drawback of business process outsourcing. Outsourcing of an information system, for example, can cause security risks both from a communication and from a privacy perspective. For example, security of North American or European company data is more difficult to maintain when it is accessed or controlled in other countries. Outsourcing also brings knowledge risks, changing attitudes among employees, underestimation of running costs, and the major risk of losing independence, and it leads to a different relationship between an organization and its contractor.

Risks and threats of outsourcing must therefore be managed to achieve any benefits. In order to manage outsourcing in a structured way, maximising positive outcomes while minimising risks and avoiding threats, a business continuity management model can be set up. This model consists of a set of steps to successfully identify, manage, and control the business processes that are, or can be, outsourced.

The analytic hierarchy process is one framework used in BPO to identify information systems that are candidates for outsourcing. L. Willcocks, M. Lacity and G. Fitzgerald identify several contracting problems companies face, ranging from unclear contract formatting to a lack of understanding of technical IT processes.

Technological pressures

Industry analysts have identified robotic process automation software as a potential threat to the industry and speculate as to its likely long-term impact. In the short term, however, there is likely to be little impact as existing contracts run their course: it is only reasonable to expect demand for cost efficiency and innovation to result in transformative changes at the point of contract renewal. With the average length of a BPO contract being five years or more - and many contracts being longer - this hypothesis will take some time to play out.

On the other hand, an academic study by the London School of Economics was at pains to counter the so-called "myth" that robotic process automation will bring back many jobs from offshore. One possible argument behind such an assertion is that new technology provides new opportunities for increased quality, reliability, scalability and cost control, thus enabling BPO providers to increasingly compete on an outcomes-based model rather than competing on cost alone. With the core offering potentially changing from a "lift and shift" approach based on fixed costs to a more qualitative, service-based and outcomes-based model, there is perhaps a new opportunity to grow the BPO industry with a new offering.

Financial risks

One of the primary reasons businesses choose to outsource is to reduce costs. However, this can also come with significant financial risks. If the cost of outsourcing increases due to changes in the global market or if the outsourced company increases its fees, the financial benefits can be eroded. In addition, hidden costs, such as transition and management costs, can also emerge. This can make outsourcing less financially viable than originally anticipated.

Loss of control

Outsourcing also potentially leads to a loss of control over certain business operations. This can create challenges in terms of managing and coordinating these operations. Moreover, the outsourced company might not fully understand or align with the hiring company's business culture, values, and objectives, leading to potential conflicts and inefficiencies.

Quality control

Quality control can become more challenging when business operations are outsourced. While third-party companies might be experts in their fields, they may not maintain the same standards as the hiring company. This can result in a decline in the quality of goods or services, potentially damaging the company's brand and customer relationships.

Industry Size

One estimate of the worldwide BPO market from the BPO Services Global Industry Almanac 2017, puts the size of the industry in 2016 at about US$140 billion.

India, China and the Philippines are major powerhouses in the industry. In 2017, in India the BPO industry generated US$30 billion in revenue according to the national industry association. The BPO industry is a small segment of the total outsourcing industry in India. The BPO industry workforce in India is expected to shrink by 14% in 2021.

The BPO industry and the IT services industry in combination were worth a total of US$154 billion in revenue in 2017. The BPO industry in the Philippines generated $22.9 billion in revenue in 2016, and around 700 thousand medium- and high-skill jobs were expected to be created by 2022.

In 2015, official statistics put the size of the total outsourcing industry in China, including not only the BPO industry but also IT outsourcing services, at $130.9 billion.

Additional Information:

What Is Business Process Outsourcing (BPO)?

Business process outsourcing (BPO) is a method of subcontracting various business-related operations to third-party vendors.

Although BPO originally applied solely to manufacturing entities, such as soft drink manufacturers that outsourced large segments of their supply chains, BPO now applies to the outsourcing of various products and services.

Key Takeaways

* Business process outsourcing (BPO) utilizes third-party vendors or subcontractors to carry out certain parts of their business operations.
* BPO began with large manufacturing companies to aid with supply chain management.
* Today BPO has grown to include all sorts of sectors, including services companies.
* BPO is considered "offshore outsourcing" if the vendor or subcontractor is located in a different country; for instance, in the case of customer support.
* BPO today is an industry unto itself, with companies specializing in facilitating BPO to companies around the world.

Understanding Business Process Outsourcing (BPO)

Many businesses, from small startups to large companies, opt to outsource processes, as new and innovative services are increasingly available in today's ever-changing, highly competitive business climate.

Broadly speaking, companies adopt BPO practices in the two main areas of back-office and front-office operations. Back office BPO refers to a company contracting its core business support operations such as accounting, payment processing, IT services, human resources, regulatory compliance, and quality assurance to outside professionals who ensure the business runs smoothly.

By contrast, front office BPO tasks commonly include customer-related services such as tech support, sales, and marketing.

Special Considerations

The breadth of a business' BPO options depends on whether it contracts its operations within or outside the borders of its home country. BPO is deemed "offshore outsourcing" if the contract is sent to another country where there is political stability, lower labor costs, and/or tax savings. A U.S. company using an offshore BPO vendor in Singapore is one such example of offshore outsourcing.

BPO is referred to as "nearshore outsourcing" if the job is contracted to a neighboring country. Such would be the case if a U.S. company partnered with a BPO vendor located in Canada.

A third option, known as "onshore outsourcing" or "domestic sourcing," occurs when BPO is contracted within the company’s own country, even if its vendor partners are located in different cities or states.

BPO is often referred to as information technology-enabled services (ITES) because it relies on technology/infrastructure that enables external companies to efficiently perform their roles.

The Attraction of BPO

Companies are often drawn to BPO because it affords them greater operational flexibility. By outsourcing non-core and administrative functions, companies can reallocate time and resources to core competencies like customer relations and product leadership, which ultimately results in advantages over competing businesses in their industry.

BPO offers businesses access to innovative technological resources that they might not otherwise have exposure to. BPO partners and companies constantly strive to improve their processes by adopting the most recent technologies and practices.

Because the U.S. corporate income tax has historically been among the highest in the developed world, American companies benefit from outsourcing operations to countries with lower income taxes and cheaper labor forces as viable cost reduction measures.

BPO also offers companies the benefits of quick and accurate reporting, improved productivity, and the ability to swiftly reassign its resources, when necessary.

Some Disadvantages of BPO

While there are many advantages of BPO, there are also disadvantages. A business that outsources its business processes may be prone to data breaches or have communication issues that delay project completion, and such businesses may underestimate the running costs of BPO providers.

Another disadvantage could be customer backlash against outsourcing if they perceive this to be of inferior quality or at the expense of domestic employment.

A Career in BPO

Business process outsourcing is a fast-growing sector of the economy, which makes it attractive as a career path or startup opportunity. According to industry research, the BPO market was valued at nearly $250 billion in 2021, and is projected to grow at 9% per year over the next decade.

According to data from Zip Recruiter, BPO jobs in America pay an average of $91,100 as of 2022.

What Is the Goal of BPO and What Are Its Types?

BPO is the abbreviation for business process outsourcing, which refers to when companies outsource business processes to a third-party (external) company. The primary goal is to cut costs, free up time, and focus on core aspects of the business. Two types of BPO are front-office and back-office. Back-office BPO entails the internal aspects of a business, such as payroll, inventory purchasing, and billing. Front-office BPO focuses on activities external to the company, such as marketing and customer service.

What Are the Advantages of BPO?

There are numerous advantages to BPO. One of the primary advantages is that it lowers costs. Performing a certain job function internally costs a specific amount. BPO can reduce these costs by outsourcing this job to an external party, often in a less cost-intensive country, reducing the overall cost of performing that job function.

Other advantages include a company being allowed to focus on core business functions that are critical to its success, rather than administrative tasks or other aspects of running a company that are not critical. BPO also helps with growth, particularly in global expansion. If a company is interested in opening an overseas branch or operating overseas, utilizing a BPO company that has experience in the local industry and that speaks the language is extremely beneficial.

What Are the Types of BPO Companies?

There are three primary types of BPO companies. These are local outsourcing, offshore outsourcing, and nearshore outsourcing. Local outsourcing is a company that is in the same country as your business. Offshore outsourcing is a company that is in another country, and nearshore outsourcing is a company that is in a country that is not too far from your country.

What Is a BPO Call Center?

A BPO call center handles outsourced incoming and outgoing customer calls on behalf of other businesses. Many BPO call centers will have agents that can individually handle customer complaints or inquiries standing in for a number of different companies, often within a particular specialty. For instance, one call center agent may be able to field tech support phone calls for a variety of vendors or manufacturers.

The Bottom Line

Business process outsourcing (BPO) utilizes third-party specialists to carry out some part of a business process or operation (as opposed to outsourcing the entire production). BPO can lower a company's costs, increase efficiency, and provide flexibility. At the same time, the BPO industry is rapidly growing, which means that, in our increasingly global economy, process outsourcing is not going anywhere.

bpo-img.gif


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2086 2024-03-10 23:29:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2088) Clinical Laboratory

Gist

Clinical laboratories are healthcare facilities providing a wide range of laboratory procedures that aid clinicians in diagnosing, treating, and managing patients. These laboratories are manned by scientists trained to perform and analyze tests on samples of biological specimens collected from patients.

In addition, clinical laboratories may employ pathologists, clinical biochemists, laboratory assistants, laboratory managers, biomedical scientists, medical laboratory technicians, medical laboratory assistants, phlebotomists, and histology technicians. Most clinical laboratories are situated within or near hospital facilities to provide access to clinicians and their patients.

Classifications of clinical laboratories indicated below reveal that these facilities provide quality laboratory tests that are significant for addressing medical and public health needs.

Summary

Clinical Laboratory Science, also called Medical Laboratory Science or Medical Technology, is the health profession that provides laboratory information and services needed for the diagnosis and treatment of disease. Clinical Laboratory Scientists perform a variety of laboratory tests, ensure the quality of the test results, explain the significance of laboratory tests, evaluate new methods and study the effectiveness of laboratory tests. Examples of laboratory tests performed by Clinical Laboratory Scientists include:

* the detection of the abnormal cells that cause leukemia
* the analysis of cardiac enzyme activity released during a heart attack
* the identification of the type of bacteria causing an infection
* the detection of DNA markers for genetic diseases

Molecular Diagnostic Science is a specialized area of Clinical Laboratory Science that uses sensitive and specific techniques to detect and identify biomarkers at the most basic level: that of nucleic acids (DNA and RNA). Common applications of molecular methods include medical diagnosis, establishing prognosis, monitoring the course of disease, and selecting optimal therapies. Molecular methods are also used in both forensic and non-forensic identification. A variety of biological materials can be used for molecular testing including fetal cells from amniotic fluid, dried blood spots from newborn screening programs, blood samples, buccal (mouth) swabs, bone, and hair follicles.

Molecular diagnostic tests are increasingly used in many major areas of medicine including genetic disorders, infectious diseases, cancer, pharmacogenetics and identity testing. Examples include:

* Genetic disorders: Molecular methods are used to detect common inherited diseases such as cystic fibrosis, hemochromatosis, and fragile X syndrome.
* Infectious diseases: Many diseases – including hepatitis, tuberculosis, human immunodeficiency virus (HIV), human papilloma virus (HPV), Chlamydia, Neisseria gonorrhoeae, and methicillin-resistant Staphylococcus aureus (MRSA) – can be identified faster and more accurately using molecular techniques as compared to traditional culture or antibody assays.
* Cancer: Some leukemias and solid tumor cancers can be detected and identified by molecular probes which target the abnormal gene rearrangements occurring in these disorders.
* Pharmacogenetics. Molecular testing can be used to individualize a specific dosing schedule for patients on a common blood thinner, warfarin, and thereby reduce the likelihood of overmedication and potential bleeding problems.
* Identity testing: Molecular diagnostic tests are used in determining the identity of combat casualties, in analyzing crime scene evidence, in determining paternity, and identifying foreign DNA in transplantation medicine.

These examples are only a small sample of the many clinical and other applications of molecular testing methods.

Details

A medical laboratory or clinical laboratory is a laboratory where tests are carried out on clinical specimens to obtain information about the health of a patient to aid in the diagnosis, treatment, and prevention of disease. Clinical medical laboratories are an example of applied science, as opposed to research laboratories that focus on basic science, such as those found in some academic institutions.

Medical laboratories vary in size and complexity and so offer a variety of testing services. More comprehensive services can be found in acute-care hospitals and medical centers, where 70% of clinical decisions are based on laboratory testing. Doctors' offices and clinics, as well as skilled nursing and long-term care facilities, may have laboratories that provide more basic testing services. Commercial medical laboratories operate as independent businesses and provide testing that is otherwise not provided in other settings due to low test volume or complexity.

Departments

In hospitals and other patient-care settings, laboratory medicine is provided by the Department of Pathology and Medical Laboratory, and generally divided into two sections, each of which will be subdivided into multiple specialty areas. The two sections are:

* Anatomic pathology: areas included here are histopathology, cytopathology, electron microscopy, and gross pathology.
* Medical laboratory, which typically includes the following areas:
* Clinical microbiology: This encompasses several different sciences, including bacteriology, virology, parasitology, immunology, and mycology.
* Clinical chemistry: This area typically includes automated analysis of blood specimens, including tests related to enzymology, toxicology and endocrinology.
* Hematology: This area includes automated and manual analysis of blood cells. It also often includes coagulation.
* Blood bank involves the testing of blood specimens in order to provide blood transfusion and related services.
* Molecular diagnostics DNA testing may be done here, along with a subspecialty known as cytogenetics.
* Reproductive biology testing is available in some laboratories, including semen analysis, sperm banking, and assisted reproductive technology.

Layouts of clinical laboratories in health institutions vary greatly from one facility to another. For instance, some health facilities have a single laboratory for the microbiology section, while others have a separate lab for each specialty area.

The following is an example of a typical breakdown of the responsibilities of each area:

* Microbiology includes culturing of the bacteria in clinical specimens, such as feces, urine, blood, sputum, cerebrospinal fluid, and synovial fluid, as well as possible infected tissue. The work here is mainly concerned with cultures, to look for suspected pathogens which, if found, are further identified based on biochemical tests. Also, sensitivity testing is carried out to determine whether the pathogen is sensitive or resistant to a suggested medicine. Results are reported with the identified organism(s) and the type and amount of drug(s) that should be prescribed for the patient.
* Parasitology is where specimens are examined for parasites. For example, fecal samples may be examined for evidence of intestinal parasites such as tapeworms or hookworms.
* Virology is concerned with identification of viruses in specimens such as blood, urine, and cerebrospinal fluid.
* Hematology analyzes whole blood specimens to perform full blood counts, and includes the examination of blood films.
* Other specialized tests include cell counts on various bodily fluids.
* Coagulation testing determines various blood clotting times, coagulation factors, and platelet function.
* Clinical biochemistry commonly performs dozens of different tests on serum or plasma. These tests, mostly automated, include quantitative testing for a wide array of substances, such as lipids, blood sugar, enzymes, and hormones.
* Toxicology is mainly focused on testing for pharmaceutical and recreational drugs. Urine and blood samples are the common specimens.
* Immunology/Serology uses the process of antigen-antibody interaction as a diagnostic tool. Compatibility of transplanted organs may also be determined with these methods.
* Immunohematology, or blood bank determines blood groups, and performs compatibility testing on donor blood and recipients. It also prepares blood components, derivatives, and products for transfusion. This area determines a patient's blood type and Rh status, checks for antibodies to common antigens found on red blood cells, and cross matches units that are negative for the antigen.
* Urinalysis tests urine for many analytes, including microscopically. If more precise quantification of urine chemicals is required, the specimen is processed in the clinical biochemistry lab.
* Histopathology processes solid tissue removed from the body (biopsies) for evaluation at the microscopic level.
* Cytopathology examines smears of cells from all over the body (such as from the cervix) for evidence of inflammation, cancer, and other conditions.
* Molecular diagnostics includes specialized tests involving DNA and RNA analysis.
* Cytogenetics involves using blood and other cells to produce a DNA karyotype. This can be helpful in cases of prenatal diagnosis (e.g. Down's syndrome) as well as in some cancers which can be identified by the presence of abnormal chromosomes.
* Surgical pathology examines organs, limbs, tumors, fetuses, and other tissues biopsied in surgery such as breast mastectomies.

Medical laboratory staff

The staff of clinical laboratories may include:

* Pathologist
* Clinical biochemist
* Laboratory assistant (LA)
* Laboratory manager
* Biomedical scientist (BMS) in the UK, Medical laboratory scientist (MT, MLS or CLS) in the US or Medical laboratory technologist in Canada
* Medical laboratory technician/clinical laboratory technician (MLT or CLT in US)
* Medical laboratory assistant (MLA)
* Phlebotomist (PBT)
* Histology technician

Labor shortages

The United States has a documented shortage of working laboratory professionals. For example, as of 2016, vacancy rates for medical laboratory scientists ranged from 5% to 9% across departments. The decline is primarily due to retirements and to at-capacity educational programs that cannot expand, which limits the number of new graduates. Professional organizations and some state educational systems are responding by developing ways to promote the lab professions in an effort to combat this shortage. A follow-up survey in 2018 found a broader range of vacancy rates across departments, from 4% to as high as 13%, with the higher numbers seen in phlebotomy and immunology. Microbiology has also struggled with vacancies, averaging around a 10-11% vacancy rate across the United States in the 2018 survey. Recruitment campaigns, funding for college programs, and better salaries for laboratory workers are among the measures being used to reduce the vacancy rate. The National Center for Workforce Analysis has estimated that by 2025 there will be a 24% increase in demand for lab professionals. The COVID-19 pandemic highlighted this shortage, and work is being done to address it, including bringing pathology and laboratory medicine into the conversation surrounding access to healthcare. COVID-19 brought the laboratory to the attention of the government and the media, giving an opportunity for the staffing shortages and resource challenges to be heard and dealt with.

Types of laboratory

In most developed countries, there are two main types of lab processing the majority of medical specimens. Hospital laboratories are attached to a hospital and perform tests on its patients. Private (or community) laboratories receive samples from general practitioners, insurance companies, clinical research sites and other health clinics for analysis. For extremely specialised tests, samples may go to a research laboratory. Some tests involve specimens sent between different labs for uncommon tests. For example, in some cases it may be more cost effective if a particular laboratory specializes in less common tests, receiving specimens (and payment) from other labs, while sending other specimens to other labs for those tests it does not perform.

In many countries there are specialized types of medical laboratories according to the types of investigations carried out. Organisations that provide blood products for transfusion to hospitals, such as the Red Cross, will provide access to their reference laboratory for their customers. Some laboratories specialize in Molecular diagnostic and cytogenetic testing, in order to provide information regarding diagnosis and treatment of genetic or cancer-related disorders.

Specimen processing and work flow

In a hospital setting, sample processing will usually start with a set of samples arriving with a test request, either on a form or electronically via the laboratory information system (LIS). Inpatient specimens will already be labeled with patient and testing information provided by the LIS. Entry of test requests onto the LIS system involves typing (or scanning where barcodes are used) in the laboratory number, and entering the patient identification, as well as any tests requested. This allows laboratory analyzers, computers and staff to recognize what tests are pending, and also gives a location (such as a hospital department, doctor or other customer) for results reporting.

Once the specimens are assigned a laboratory number by the LIS, a sticker is typically printed that can be placed on the tubes or specimen containers. This label has a barcode that can be scanned by automated analyzers and test requests uploaded to the analyzer from the LIS.

Specimens are prepared for analysis in various ways. For example, chemistry samples are usually centrifuged and the serum or plasma is separated and tested. If the specimen needs to go on more than one analyzer, it can be divided into separate tubes.

Many specimens end up in one or more sophisticated automated analysers, that process a fraction of the sample to return one or more test results. Some laboratories use robotic sample handlers (Laboratory automation) to optimize the workflow and reduce the risk of contamination from sample handling by the staff.

The work flow in a hospital laboratory is usually heaviest from 2:00 am to 10:00 am. Nurses and doctors generally have their patients tested at least once a day with common tests such as complete blood counts and chemistry profiles. These orders are typically drawn during a morning run by phlebotomists for results to be available in the patient's charts for the attending physicians to consult during their morning rounds. Another busy time for the lab is after 3:00 pm when private practice physician offices are closing. Couriers will pick up specimens that have been drawn throughout the day and deliver them to the lab. Also, couriers will stop at outpatient drawing centers and pick up specimens. These specimens will be processed in the evening and overnight to ensure results will be available the following day.

Laboratory informatics

The large amount of information processed in laboratories is managed by a system of software programs, computers, and terminology standards that exchange data about patients, test requests, and test results known as a Laboratory information system or LIS. The LIS is often interfaced with the hospital information system, EHR and/or laboratory instruments. Formats for terminologies for test processing and reporting are being standardized with systems such as Logical Observation Identifiers Names and Codes (LOINC) and Nomenclature for Properties and Units terminology (NPU terminology).

These systems enable hospitals and labs to order the correct test requests for each patient, keep track of individual patient and specimen histories, and help guarantee a better quality of results. Results are made available to care providers electronically or by printed hard copies for patient charts.
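As a rough, purely illustrative sketch (not a description of any particular LIS product or message format), a result exchanged between such systems can be pictured as a small structured record pairing patient and specimen identifiers with a standardized test code; the field names and values below are hypothetical.

# Hypothetical sketch of the kind of structured record an LIS might exchange.
# Field names and values are illustrative only, not any vendor's actual format.
from dataclasses import dataclass

@dataclass
class LabResult:
    patient_id: str       # hospital/EHR patient identifier
    specimen_id: str      # barcode printed on the tube label
    test_code: str        # standardized test identifier (e.g., a LOINC code)
    value: float
    units: str
    reference_range: str

result = LabResult(
    patient_id="P123456",
    specimen_id="S20240310-0042",
    test_code="718-7",           # commonly cited LOINC code for hemoglobin
    value=13.8,
    units="g/dL",
    reference_range="12.0-16.0", # illustrative adult reference interval
)
print(result)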

Result analysis, validation and interpretation

According to various regulations, such as the international ISO 15189 norm, all pathological laboratory results must be verified by a competent professional. In some countries, staff composed of clinical scientists do the majority of this work inside the laboratory, with certain abnormal results referred to the relevant pathologist. In many countries, doctoral-level clinical laboratory scientists have responsibility for limited interpretation of testing results in their discipline. Interpretation of results can be assisted by software that validates normal or unremarkable results.

In other testing areas, only professional medical staff (pathologists or clinical laboratory scientists) are involved with interpretation and consulting. Medical staff are sometimes also required to explain pathology results to physicians. For a simple result given by phone, or to explain a technical problem, a medical technologist or medical laboratory scientist can often provide additional information.

In some countries, medical laboratory departments are exclusively directed by a doctor specialized in laboratory science. In others, a consultant, medical or non-medical, may head the department. In Europe and some other countries, clinical scientists with a master's-level education may be qualified to head the department. Others may have a PhD and can have an exit qualification equivalent to that of medical staff (e.g., FRCPath in the UK).

In France, only medical staff (a Pharm.D. or M.D. specialized in anatomical pathology or clinical laboratory science) can discuss laboratory results.

Medical laboratory accreditation

Credibility of medical laboratories is paramount to the health and safety of the patients relying on the testing services provided by these labs. Credentialing agencies vary by country. The international standard in use today for the accreditation of medical laboratories is ISO 15189 - Medical laboratories - Requirements for quality and competence.

In the United States, billions of dollars are spent on unaccredited lab tests, such as laboratory-developed tests (LDTs), which do not require accreditation or FDA approval; about a billion USD a year is spent on US autoimmune LDTs alone. Accreditation is performed by the Joint Commission, the College of American Pathologists, the AAB (American Association of Bioanalysts), and other state and federal agencies. Legislative guidelines are provided under CLIA 88 (the Clinical Laboratory Improvement Amendments of 1988), which regulates medical laboratory testing and personnel.

The accrediting body in Australia is NATA; all laboratories must be NATA-accredited to receive payment from Medicare.

In France the accrediting body is the Comité français d'accréditation (COFRAC). In 2010, modification of legislation established ISO 15189 accreditation as an obligation for all clinical laboratories.

In the United Arab Emirates, the Dubai Accreditation Department (DAC) is the accreditation body that is internationally recognised[21] by the International Laboratory Accreditation Cooperation (ILAC) for many facilities and groups, including Medical Laboratories, Testing and Calibration Laboratories, and Inspection Bodies.

In Hong Kong, the accrediting body is Hong Kong Accreditation Service (HKAS). On 16 February 2004, HKAS launched its medical testing accreditation programme.

In Canada, laboratory accreditation is not mandatory but is becoming increasingly common. Accreditation Canada (AC) is the national reference. Provincial oversight bodies, such as the LSPQ in Quebec and the IQMH in Ontario, mandate laboratory participation in external quality assessment (EQA) programs.

Industry

The laboratory industry is a part of the broader healthcare and health technology industry. Companies exist at various levels, including clinical laboratory services, suppliers of instrumentation equipment and consumable materials, and suppliers and developers of diagnostic tests themselves (often by biotechnology companies).

Clinical laboratory services include large multinational corporations such as LabCorp, Quest Diagnostics, and Sonic Healthcare, but a significant portion of revenue, estimated at 60% in the United States, is generated by hospital labs. In 2018, total global revenue for these companies was projected to reach $146 billion by 2024. Another estimate placed the market size at $205 billion, reaching $333 billion by 2023. The American Association for Clinical Chemistry (AACC) represents professionals in the field.

Clinical laboratories are supplied by other multinational companies which focus on materials and equipment, which can be used for both scientific research and medical testing. The largest of these is Thermo Fisher Scientific. In 2016, global life sciences instrumentation sales were around $47 billion, not including consumables, software, and services. In general, laboratory equipment includes lab centrifuges, transfection solutions, water purification systems, extraction techniques, gas generators, concentrators and evaporators, fume hoods, incubators, biological safety cabinets, bioreactors and fermenters, microwave-assisted chemistry, lab washers, and shakers and stirrers.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2087 2024-03-11 14:54:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2089) Human body temperature

Gist

The human body's normal temperature was long considered to be 98.6 degrees Fahrenheit. Normal adult body temperatures actually range from 97 to 99 F. Plus, there's some evidence to suggest normal temperature has decreased over time and is closer to 97.9 F.

Your temperature can fluctuate and vary based on your age and the method used to measure your temperature. A fever is when your body temperature is higher than normal. Most healthcare providers consider a fever to be at 100.4 F (38 C) or higher.

Summary

Normal body temperature is considered to be 37°C (98.6°F); however, a wide variation is seen. Among normal individuals, mean daily temperature can differ by 0.5°C (0.9°F), and daily variations can be as much as 0.25 to 0.5°C. The nadir in body temperature usually occurs at about 4 a.m. and the peak at about 6 p.m. This circadian rhythm is quite constant for an individual and is not disturbed by periods of fever or hypothermia. Prolonged change to daytime-sleep and nighttime-awake cycles will effect an adaptive correction in the circadian temperature rhythm. Normal rectal temperature is typically 0.27° to 0.38°C (0.5° to 0.7°F) greater than oral temperature. Axillary temperature is about 0.55°C (1.0°F) less than the oral temperature.

For practical clinical purposes, a patient is considered febrile or pyrexial if the oral temperature exceeds 37.5°C (99.5°F) or the rectal temperature exceeds 38°C (100.4°F). Hyperpyrexia is the term applied to the febrile state when the temperature exceeds 41.1°C (106°F). Hypothermia is defined by a rectal temperature of 35°C (95°F) or less.
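
As a quick arithmetic check of the conversions and cut-offs quoted above, here is a minimal sketch (the function names are mine; the thresholds are the ones stated in this summary, and the hypothermia cut-off strictly applies to a rectal reading):

def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def classify(temp_c: float) -> str:
    """Classify a temperature against the clinical thresholds quoted above."""
    if temp_c > 41.1:      # hyperpyrexia: above 41.1 degC (106 degF)
        return "hyperpyrexia"
    if temp_c > 37.5:      # febrile: oral temperature above 37.5 degC (99.5 degF)
        return "febrile"
    if temp_c <= 35.0:     # hypothermia: 35 degC (95 degF) or less (rectal)
        return "hypothermia"
    return "afebrile"

print(round(c_to_f(37.0), 1))  # 98.6, the traditional "normal"
print(round(c_to_f(38.0), 1))  # 100.4, the rectal fever threshold
print(classify(37.3))          # afebrile
print(classify(39.2))          # febrile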

Details

Normal human body temperature (normothermia, euthermia) is the typical temperature range found in humans. The normal human body temperature range is typically stated as 36.5–37.5 °C (97.7–99.5 °F).

Human body temperature varies. It depends on gender, age, time of day, exertion level, health status (such as illness and menstruation), what part of the body the measurement is taken at, state of consciousness (waking, sleeping, sedated), and emotions. Body temperature is kept in the normal range by a homeostatic function known as thermoregulation, in which adjustment of temperature is triggered by the central nervous system.

Methods of measurement

Taking a human's temperature is an initial part of a full clinical examination. There are various types of medical thermometers, as well as sites used for measurement, including:

* In the rectum (rectal temperature)
* In the mouth (oral temperature)
* Under the arm (axillary temperature)
* In the ear (tympanic temperature)
* On the skin of the forehead over the temporal artery
* Using heat flux sensors

Variations

* Diurnal variation in body temperature, ranging from about 37.5 °C (99.5 °F) from 10 a.m. to 6 p.m., and falling to about 36.4 °C (97.5 °F) from 2 a.m. to 6 a.m. (Based on figure in entry for 'Animal Heat' in 11th edition of the Encyclopædia Britannica, 1910)
* Temperature control (thermoregulation) is a homeostatic mechanism that keeps the organism at optimum operating temperature, as the temperature affects the rate of chemical reactions. In humans, the average internal temperature is widely accepted to be 37 °C (98.6 °F), a "normal" temperature established in the 1800s. But newer studies show that average internal temperature for men and women is 36.4 °C (97.5 °F). No person always has exactly the same temperature at every moment of the day. Temperatures cycle regularly up and down through the day, as controlled by the person's circadian rhythm. The lowest temperature occurs about two hours before the person normally wakes up. Additionally, temperatures change according to activities and external factors.

In addition to varying throughout the day, normal body temperature may also differ as much as 0.5 °C (0.9 °F) from one day to the next, so that the highest or lowest temperatures on one day will not always exactly match the highest or lowest temperatures on the next day.

Normal human body temperature varies slightly from person to person and by the time of day. Consequently, each type of measurement has a range of normal temperatures. The range for normal human body temperatures, taken orally, is 36.8 ± 0.5 °C (98.2 ± 0.9 °F). This means that any oral temperature between 36.3 and 37.3 °C (97.3 and 99.1 °F) is likely to be normal.

The normal human body temperature is often stated as 36.5–37.5 °C (97.7–99.5 °F). In adults a review of the literature has found a wider range of 33.2–38.2 °C (91.8–100.8 °F) for normal temperatures, depending on the gender and location measured.

Reported values vary depending on how it is measured: oral (under the tongue): 36.8±0.4 °C (98.2±0.72 °F), internal (rectal, vaginal): 37.0 °C (98.6 °F). A rectal or vaginal measurement taken directly inside the body cavity is typically slightly higher than oral measurement, and oral measurement is somewhat higher than skin measurement. Other places, such as under the arm or in the ear, produce different typical temperatures. While some people think of these averages as representing normal or ideal measurements, a wide range of temperatures has been found in healthy people. The body temperature of a healthy person varies during the day by about 0.5 °C (0.9 °F) with lower temperatures in the morning and higher temperatures in the late afternoon and evening, as the body's needs and activities change. Other circumstances also affect the body's temperature. The core body temperature of an individual tends to have the lowest value in the second half of the sleep cycle; the lowest point, called the nadir, is one of the primary markers for circadian rhythms. The body temperature also changes when a person is hungry, sleepy, sick, or cold.

Natural rhythms

Body temperature normally fluctuates over the day following circadian rhythms, with the lowest levels around 4 a.m. and the highest in the late afternoon, between 4:00 and 6:00 p.m. (assuming the person sleeps at night and stays awake during the day). Therefore, an oral temperature of 37.3 °C (99.1 °F) would, strictly speaking, be a normal, healthy temperature in the afternoon but not in the early morning. An individual's body temperature typically changes by about 0.5 °C (0.9 °F) between its highest and lowest points each day.

Body temperature is sensitive to many hormones, so women have a temperature rhythm that varies with the menstrual cycle, called a circamensal rhythm. A woman's basal body temperature rises sharply after ovulation, as estrogen production decreases and progesterone increases. Fertility awareness programs use this change to identify when a woman has ovulated to achieve or avoid pregnancy. During the luteal phase of the menstrual cycle, both the lowest and the average temperatures are slightly higher than during other parts of the cycle. However, the amount that the temperature rises during each day is slightly lower than typical, so the highest temperature of the day is not very much higher than usual. Hormonal contraceptives both suppress the circamensal rhythm and raise the typical body temperature by about 0.6 °C (1.1 °F).

Temperature also may vary with the change of seasons during each year. This pattern is called a circannual rhythm. Studies of seasonal variations have produced inconsistent results. People living in different climates may have different seasonal patterns.

It has been found that physically active individuals have larger changes in body temperature throughout the day. Physically active people have been reported to have lower body temperatures than their less active peers in the early morning and similar or higher body temperatures later in the day.

With increased age, both average body temperature and the amount of daily variability in the body temperature tend to decrease. Elderly patients may have a decreased ability to generate body heat during a fever, so even a somewhat elevated temperature can indicate a serious underlying cause in geriatrics. One study suggested that the average body temperature has also decreased since the 1850s. The study's authors believe the most likely explanation for the change is a reduction in inflammation at the population level due to decreased chronic infections and improved hygiene.

Measurement methods

Method  :  Women  :  Men
Oral  :  33.2–38.1 °C (91.8–100.6 °F)  :  35.7–37.7 °C (96.3–99.9 °F)
Rectal  :  36.8–37.1 °C (98.2–98.8 °F)  :  36.7–37.5 °C (98.1–99.5 °F)
Tympanic  :  35.7–37.5 °C (96.3–99.5 °F)  :  35.5–37.5 °C (95.9–99.5 °F)
Axillary  :  35.5–37.0 °C (95.9–98.6 °F) (both sexes)

Different methods used for measuring temperature produce different results. The temperature reading depends on which part of the body is being measured. The typical daytime temperatures among healthy adults are as follows:

* Temperature in the rectum (rectal), vagina (vaginal), or in the ear (tympanic) is about 37.5 °C (99.5 °F)
* Temperature in the mouth (oral) is about 36.8 °C (98.2 °F)
* Temperature under the arm (axillary) is about 36.5 °C (97.7 °F)

Generally, oral, rectal, gut, and core body temperatures, although slightly different, are well-correlated.

Oral temperatures are influenced by drinking, chewing, smoking, and breathing with the mouth open. Mouth breathing, cold drinks or food reduce oral temperatures; hot drinks, hot food, chewing, and smoking raise oral temperatures.

Each measurement method also has different normal ranges depending on gender.
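
A compact restatement of the table above as a lookup (a minimal sketch: the dictionary layout and function name are mine, the numbers are the quoted ranges, and the single axillary range is applied to both sexes):

# Normal ranges in degrees Celsius by measurement site and sex, from the table above.
NORMAL_RANGES_C = {
    "oral":     {"women": (33.2, 38.1), "men": (35.7, 37.7)},
    "rectal":   {"women": (36.8, 37.1), "men": (36.7, 37.5)},
    "tympanic": {"women": (35.7, 37.5), "men": (35.5, 37.5)},
    "axillary": {"women": (35.5, 37.0), "men": (35.5, 37.0)},
}

def in_normal_range(site: str, sex: str, temp_c: float) -> bool:
    """Return True if a reading falls inside the quoted range for that site and sex."""
    low, high = NORMAL_RANGES_C[site][sex]
    return low <= temp_c <= high

print(in_normal_range("oral", "men", 37.0))      # True
print(in_normal_range("rectal", "women", 37.4))  # False, above 37.1 degC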

Infrared thermometer

As of 2016, reviews of infrared thermometers have found them to be of variable accuracy. This includes tympanic infrared thermometers in children.

Variations due to outside factors

Sleep disturbances also affect temperatures. Normally, body temperature drops significantly at a person's normal bedtime and throughout the night. Short-term sleep deprivation produces a higher temperature at night than normal, but long-term sleep deprivation appears to reduce temperatures. Insomnia and poor sleep quality are associated with smaller and later drops in body temperature. Similarly, waking up unusually early, sleeping in, jet lag and changes to shift work schedules may affect body temperature.

Concept:

Fever

A temperature setpoint is the level at which the body attempts to maintain its temperature. When the setpoint is raised, the result is a fever. Most fevers are caused by infectious disease and can be lowered, if desired, with antipyretic medications.

An early morning temperature higher than 37.3 °C (99.1 °F) or a late afternoon temperature higher than 37.7 °C (99.9 °F) is normally considered a fever, assuming that the temperature is elevated due to a change in the hypothalamus's setpoint. Lower thresholds are sometimes appropriate for elderly people. The normal daily temperature variation is typically 0.5 °C (0.90 °F), but can be greater among people recovering from a fever.

An organism at optimum temperature is considered afebrile, meaning "without fever". If temperature is raised, but the setpoint is not raised, then the result is hyperthermia.

Hyperthermia

Hyperthermia occurs when the body produces or absorbs more heat than it can dissipate. It is usually caused by prolonged exposure to high temperatures. The heat-regulating mechanisms of the body eventually become overwhelmed and unable to deal effectively with the heat, causing the body temperature to climb uncontrollably. Hyperthermia at or above about 40 °C (104 °F) is a life-threatening medical emergency that requires immediate treatment. Common symptoms include headache, confusion, and fatigue. If sweating has resulted in dehydration, then the affected person may have dry, red skin.

In a medical setting, mild hyperthermia is commonly called heat exhaustion or heat prostration; severe hyperthermia is called heat stroke. Heatstroke may come on suddenly, but it usually follows the untreated milder stages. Treatment involves cooling and rehydrating the body; fever-reducing drugs are useless for this condition. This may be done by moving out of direct sunlight to a cooler and shaded environment, drinking water, removing clothing that might keep heat close to the body, or sitting in front of a fan. Bathing in tepid or cool water, or even just washing the face and other exposed areas of the skin, can be helpful.

With fever, the body's core temperature rises to a higher temperature through the action of the part of the brain that controls the body temperature; with hyperthermia, the body temperature is raised without the influence of the heat control centers.

Hypothermia

In hypothermia, body temperature drops below that required for normal metabolism and bodily functions. In humans, this is usually due to excessive exposure to cold air or water, but it can be deliberately induced as a medical treatment. Symptoms usually appear when the body's core temperature drops by 1–2 °C (1.8–3.6 °F) below normal temperature.

Basal body temperature

Basal body temperature is the lowest temperature attained by the body during rest (usually during sleep). It is generally measured immediately after awakening and before any physical activity has been undertaken, although the temperature measured at that time is somewhat higher than the true basal body temperature. In women, temperature differs at various points in the menstrual cycle, and this can be used in the long term to track ovulation both to aid conception or avoid pregnancy. This process is called fertility awareness.

Core temperature

Core temperature, also called core body temperature, is the operating temperature of an organism, specifically in deep structures of the body such as the liver, in comparison to temperatures of peripheral tissues. Core temperature is normally maintained within a narrow range so that essential enzymatic reactions can occur. Significant core temperature elevation (hyperthermia) or depression (hypothermia) over more than a brief period of time is incompatible with human life.

Temperature examination in the heart, using a catheter, is the traditional gold standard measurement used to estimate core temperature (oral temperature is affected by hot or cold drinks, ambient temperature fluctuations, as well as mouth-breathing). Since catheters are highly invasive, the generally accepted alternative for measuring core body temperature is through rectal measurements. Rectal temperature is expected to be approximately 1 °F (0.55 °C) higher than an oral temperature taken on the same person at the same time. Ear thermometers measure temperature from the tympanic membrane using infrared sensors and also aim to measure core body temperature, since the blood supply of this membrane is directly shared with the brain. However, this method of measuring body temperature is not as accurate as rectal measurement and has a low sensitivity for fever, failing to detect three or four out of every ten fevers in children. Ear temperature measurement may be acceptable for observing trends in body temperature but is less useful in consistently identifying and diagnosing fever.

Until recently, direct measurement of core body temperature required either an ingestible device or surgical insertion of a probe. Therefore, a variety of indirect methods have commonly been used as the preferred alternative to these more accurate albeit more invasive methods. The rectal or vaginal temperature is generally considered to give the most accurate assessment of core body temperature, particularly in hypothermia. In the early 2000s, ingestible thermistors in capsule form were produced, allowing the temperature inside the digestive tract to be transmitted to an external receiver; one study found that these were comparable in accuracy to rectal temperature measurement. More recently, a new method using heat flux sensors has been developed; several research papers show that its accuracy is similar to that of the invasive methods.

Internal variation

Measurement within the body finds internal variation, with temperatures as different as 21.5 °C for the radial artery and 31.1 °C elsewhere. It has been observed that "chaos" has been "introduced into physiology by the fictitious assumption of a constant blood temperature".



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2088 2024-03-12 00:01:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2090) Luminosity

Gist

Luminosity is the quality of something that gives off light or shines with reflected light. The most noticeable quality of a large, sparkly diamond is its luminosity.

You might describe a bright and lively oil painting in terms of its luminosity, or marvel at the luminosity of a brilliant sunset. Astronomers use the word luminosity to talk about a specific physical property: the energy given off by an astronomical object in a certain amount of time. There's a direct correlation between this amount of energy and the object's brightness. The Latin root is lumen, meaning "light."

Summary

Luminosity is the amount of power given off by an astronomical object.

Stars and other objects emit energy in the form of radiation. It is measured in joules per second, which are equal to watts. A watt is a unit of power. Just as a light bulb is rated in watts, the Sun's output can also be measured in watts. The Sun gives off 3.846×{10}^{26} W. This amount of power is known as 1 sol (one solar luminosity), the symbol for which is L☉.

There are other ways to describe luminosity. The most common is apparent magnitude, which is how bright an object looks to an observer on Earth; it applies only to light at visible wavelengths. Apparent magnitude is contrasted with absolute magnitude, an object's intrinsic brightness at visible wavelengths irrespective of distance, defined as the apparent magnitude the object would have at a distance of 32.6 light-years (10 parsecs). For objects more than 32.6 light-years away, the apparent magnitude is therefore numerically greater (fainter) than the absolute magnitude.

When talking about the total power output across all wavelengths, the quantity used is called the bolometric magnitude.

Luminosity, in astronomy, is the amount of light emitted by an object in a unit of time. The luminosity of the Sun is 3.846 × {10}^{26} watts (or 3.846 × {10}^{33} ergs per second). Luminosity is an absolute measure of radiant power; that is, its value is independent of an observer’s distance from an object. Astronomers usually refer to the luminosity of an object in terms of solar luminosities, with one solar luminosity being equal to the luminosity of the Sun. The most luminous stars emit several million solar luminosities. The most luminous supernovae shine with {10}^{17} solar luminosities. The dim brown dwarfs have luminosities a few millionths that of the Sun.

Details

Luminosity is an absolute measure of radiated electromagnetic energy (light) per unit time, and is synonymous with the radiant power emitted by a light-emitting object. In astronomy, luminosity is the total amount of electromagnetic energy emitted per unit of time by a star, galaxy, or other astronomical objects.

In SI units, luminosity is measured in joules per second, or watts. In astronomy, values for luminosity are often given in terms of the luminosity of the Sun, L⊙. Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude (Mbol) of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band.

In contrast, the term brightness in astronomy is generally used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on both the luminosity of the object and the distance between the object and observer, and also on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness. The distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance.
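
To make the logarithmic relationship concrete, the sketch below converts a luminosity in watts into an absolute bolometric magnitude using Mbol = -2.5 log10(L / L0). The zero-point L0 ≈ 3.0128×{10}^{28} W is the IAU convention as best I recall it, so treat that constant as an assumption rather than a figure quoted in this post.

import math

L_ZERO = 3.0128e28   # assumed IAU zero-point luminosity (watts) corresponding to Mbol = 0
L_SUN  = 3.828e26    # nominal solar luminosity in watts (quoted later in this post)

def absolute_bolometric_magnitude(luminosity_w: float) -> float:
    """Mbol = -2.5 * log10(L / L0); brighter objects get smaller (more negative) values."""
    return -2.5 * math.log10(luminosity_w / L_ZERO)

print(round(absolute_bolometric_magnitude(L_SUN), 2))        # about 4.74 for the Sun
print(round(absolute_bolometric_magnitude(100 * L_SUN), 2))  # 100x the luminosity is 5 magnitudes brighter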

Measurement

When not qualified, the term "luminosity" means bolometric luminosity, which is measured either in the SI units, watts, or in terms of solar luminosities (L☉). A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos, which carry off some energy (about 2% in the case of the Sun), contributing to the star's total luminosity. The IAU has defined a nominal solar luminosity of 3.828×{10}^{26} W to promote publication of consistent and comparable values in units of the solar luminosity.

While bolometers do exist, they cannot be used to measure even the apparent brightness of a star because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements. In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf-Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband.

The term luminosity is also used in relation to particular passbands, such as a visual luminosity or a K-band luminosity. These are not generally luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist. Some, such as the UBV or Johnson system, are defined against photometric standard stars, while others, such as the AB system, are defined in terms of a spectral flux density.

Stellar luminosity

A star's luminosity can be determined from two stellar characteristics: size and effective temperature. The former is typically represented in terms of solar radii, R⊙, while the latter is represented in kelvins, but in most cases neither can be measured directly. To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI. However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is merely a number that represents the temperature of a black body that would reproduce the luminosity, it obviously cannot be measured directly, but it can be estimated from the spectrum.

An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition that usually arises because of gas and dust present in the interstellar medium (ISM), the Earth's atmosphere, and circumstellar matter. Consequently, one of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium.
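
Ignoring extinction, the relationship described in the paragraph above is the inverse-square law, L = 4πd²F, where F is the measured flux. A minimal sketch, using the Sun as a sanity check (the solar constant of about 1361 W/m² and the Earth-Sun distance are my assumed inputs, not values quoted in this post):

import math

def luminosity_from_flux(flux_w_m2: float, distance_m: float) -> float:
    """Inverse-square law: L = 4 * pi * d^2 * F, neglecting interstellar extinction."""
    return 4 * math.pi * distance_m ** 2 * flux_w_m2

AU = 1.496e11               # assumed Earth-Sun distance in metres
SOLAR_CONSTANT = 1361.0     # assumed solar flux at Earth in W/m^2

print(f"{luminosity_from_flux(SOLAR_CONSTANT, AU):.3e} W")  # roughly 3.8e26 W, the solar luminosity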

In the current system of stellar classification, stars are grouped according to temperature, with the massive, very young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, typically older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even vaster variation in stellar luminosity. Because the luminosity depends on a high power of the stellar mass, high mass luminous stars have much shorter lifetimes. The most luminous stars are always young stars, no more than a few million years for the most extreme. In the Hertzsprung–Russell diagram, the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, more luminous or cooler than their equivalents on the main sequence. Increased luminosity at the same temperature, or alternatively cooler temperature at the same luminosity, indicates that these stars are larger than those on the main sequence and they are called giants or supergiants.

Blue and white supergiants are high luminosity stars somewhat cooler than the most luminous main sequence stars. A star like Deneb, for example, has a luminosity around 200,000 L⊙, a spectral type of A2, and an effective temperature around 8,500 K, meaning it has a radius around 203 R☉ (1.41×{10}^{11} m). For comparison, the red supergiant Betelgeuse has a luminosity around 100,000 L⊙, a spectral type of M2, and a temperature around 3,500 K, meaning its radius is about 1,000 R☉ (7.0×{10}^{11} m). Red supergiants are the largest type of star, but the most luminous are much smaller and hotter, with temperatures up to 50,000 K and more and luminosities of several million L⊙, meaning their radii are just a few tens of R⊙. For example, R136a1 has a temperature over 46,000 K and a luminosity of more than 6,100,000 L⊙ (mostly in the UV), yet it is only 39 R☉ (2.7×{10}^{10} m).
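
The radii quoted above follow from the Stefan-Boltzmann law, L = 4πR²σT⁴. The sketch below re-derives Deneb's radius from the luminosity and temperature given in the paragraph; the values of σ, the solar luminosity, and the solar radius are standard constants used here from memory, so treat them as assumptions.

import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # nominal solar luminosity, W
R_SUN = 6.957e8    # nominal solar radius, m

def radius_from_luminosity_and_temperature(luminosity_w: float, temp_k: float) -> float:
    """Invert L = 4 * pi * R^2 * sigma * T^4 to get the stellar radius in metres."""
    return math.sqrt(luminosity_w / (4 * math.pi * SIGMA * temp_k ** 4))

r_deneb = radius_from_luminosity_and_temperature(200_000 * L_SUN, 8500)
print(f"{r_deneb:.2e} m, about {r_deneb / R_SUN:.0f} solar radii")
# roughly 1.4e11 m (~206 solar radii), consistent with the ~203 quoted above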



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2089 2024-03-13 00:01:49

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2091) Geography

Gist

Geography is the study of places and the relationships between people and their environments. Geographers explore both the physical properties of Earth's surface and the human societies spread across it.

Summary

Geography is the study of the diverse environments, places, and spaces of Earth’s surface and their interactions. It seeks to answer the questions of why things are as they are, where they are. The modern academic discipline of geography is rooted in ancient practice, concerned with the characteristics of places, in particular their natural environments and peoples, as well as the relations between the two. Its separate identity was first formulated and named some 2,000 years ago by the Greeks, whose geo and graphein were combined to mean “earth writing” or “earth description.” However, what is now understood as geography was elaborated before then, in the Arab world and elsewhere. Ptolemy, author of one of the discipline’s first books, Guide to Geography (2nd century CE), defined geography as “a representation in pictures of the whole known world together with the phenomena which are contained therein.” This expresses what many still consider geography’s essence—a description of the world using maps (and now also pictures, as in the kind of “popular geographies” exemplified by National Geographic Magazine)—but, as more was learned about the world, less could be mapped, and words were added to the pictures.

To most people, geography means knowing where places are and what they are like. Discussion of an area’s geography usually refers to its topography—its relief and drainage patterns and predominant vegetation, along with climate and weather patterns—together with human responses to that environment, as in agricultural, industrial, and other land uses and in settlement and urbanization patterns.

Although there was a much earlier teaching of what is now called geography, the academic discipline is largely a 20th-century creation, forming a bridge between the natural and social sciences. The history of geography is the history of thinking about the concepts of environments, places, and spaces. Its content covers an understanding of the physical reality we occupy and our transformations of environments into places that we find more comfortable to inhabit (although many such modifications often have negative long-term impacts). Geography provides insights into major contemporary issues, such as globalization and environmental change, as well as a detailed appreciation of local differences; changes in disciplinary interests and practices reflect those issues.

Details

Geography is the study of the lands, features, inhabitants, and phenomena of Earth. Geography is an all-encompassing discipline that seeks an understanding of Earth and its human and natural complexities—not merely where objects are, but also how they have changed and come to be. While geography is specific to Earth, many concepts can be applied more broadly to other celestial bodies in the field of planetary science. Geography has been called "a bridge between natural science and social science disciplines."

Origins of many of the concepts in geography can be traced to the Greek scholar Eratosthenes of Cyrene (c. 276 BC – c. 195/194 BC), who may have coined the term "geographia". The first recorded use of the word γεωγραφία was as the title of a book by the Greek scholar Claudius Ptolemy (100 – 170 AD). This work created the so-called "Ptolemaic tradition" of geography, which included "Ptolemaic cartographic theory." However, the concepts of geography (such as cartography) date back to the earliest attempts to understand the world spatially, with the earliest example of an attempted world map dating to the 9th century BCE in ancient Babylon. The history of geography as a discipline spans cultures and millennia, being independently developed by multiple groups and cross-pollinated by trade between these groups. The core concepts of geography consistent between all approaches are a focus on space, place, time, and scale.

Today, geography is an extremely broad discipline with multiple approaches and modalities. There have been multiple attempts to organize the discipline, including the four traditions of geography and a division into branches. Techniques employed can generally be broken down into quantitative and qualitative approaches, with many studies taking mixed-methods approaches. Common techniques include cartography, remote sensing, interviews, and surveys.

Fundamentals

Geography is a systematic study of the Earth (other celestial bodies are specified, such as "geography of Mars", or given another name, such as areography in the case of Mars), its features, and phenomena that take place on it. For something to fall into the domain of geography, it generally needs some sort of spatial component that can be placed on a map, such as coordinates, place names, or addresses. This has led to geography being associated with cartography and place names. Although many geographers are trained in toponymy and cartology, this is not their main preoccupation. Geographers study the Earth's spatial and temporal distribution of phenomena, processes, and features as well as the interaction of humans and their environment. Because space and place affect a variety of topics, such as economics, health, climate, plants, and animals, geography is highly interdisciplinary. The interdisciplinary nature of the geographical approach depends on an attentiveness to the relationship between physical and human phenomena and their spatial patterns.

Names of places...are not geography...To know by heart a whole gazetteer full of them would not, in itself, constitute anyone a geographer. Geography has higher aims than this: it seeks to classify phenomena (alike of the natural and of the political world, in so far as it treats of the latter), to compare, to generalize, to ascend from effects to causes, and, in doing so, to trace out the laws of nature and to mark their influences upon man. This is 'a description of the world'—that is Geography. In a word, Geography is a Science—a thing not of mere names but of argument and reason, of cause and effect.

— William Hughes, 1863

Geography as a discipline can be split broadly into three main branches: human geography, physical geography, and technical geography. Human geography largely focuses on the built environment and how humans create, view, manage, and influence space. Physical geography examines the natural environment and how organisms, climate, soil, water, and landforms produce and interact. The difference between these approaches led to the development of integrated geography, which combines physical and human geography and concerns the interactions between the environment and humans. Technical geography involves studying and developing the tools and techniques used by geographers, such as remote sensing, cartography, and geographic information system.

Key concepts

Narrowing geography down to a few key concepts is extremely challenging and subject to tremendous debate within the discipline. In one attempt, the 1st edition of the book "Key Concepts in Geography" broke the subject down into chapters focusing on "Space," "Place," "Time," "Scale," and "Landscape." The 2nd edition expanded on these key concepts by adding "Environmental Systems," "Social Systems," "Nature," "Globalization," "Development," and "Risk," demonstrating how challenging narrowing the field can be.

Another approach used extensively in teaching geography is the Five Themes of Geography, established by "Guidelines for Geographic Education: Elementary and Secondary Schools," published jointly by the National Council for Geographic Education and the Association of American Geographers in 1984. These themes are location, place, relationships within places (often summarized as human-environment interaction), movement, and regions. The five themes of geography have shaped how American education approaches the topic in the years since.

Space

Just as all phenomena exist in time and thus have a history, they also exist in space and have a geography.

— United States National Research Council, 1997

For something to exist in the realm of geography, it must be able to be described spatially. Thus, space is the most fundamental concept at the foundation of geography. The concept is so basic, that geographers often have difficulty defining exactly what it is. Absolute space is the exact site, or spatial coordinates, of objects, persons, places, or phenomena under investigation. We exist in space. Absolute space leads to the view of the world as a photograph, with everything frozen in place when the coordinates were recorded. Today, geographers are trained to recognize the world as a dynamic space where all processes interact and take place, rather than a static image on a map.

Place

Place is one of the most complex and important terms in geography. In human geography, place is the synthesis of the coordinates on the Earth's surface, the activity and use that occurs, has occurred, and will occur at the coordinates, and the meaning ascribed to the space by human individuals and groups. This can be extraordinarily complex, as different spaces may have different uses at different times and mean different things to different people. In physical geography, a place includes all of the physical phenomena that occur in space, including the lithosphere, atmosphere, hydrosphere, and biosphere. Places do not exist in a vacuum and instead have complex spatial relationships with each other, and place is concerned with how a location is situated in relation to all other locations. As a discipline, then, the term place in geography includes all spatial phenomena occurring at a location, the diverse uses and meanings humans ascribe to that location, and how that location impacts and is impacted by all other locations on Earth. In one of Yi-Fu Tuan's papers, he explains that in his view, geography is the study of Earth as a home for humanity, and thus place and the complex meaning behind the term is central to the discipline of geography.

Time

Examples of the visual language of time geography: space-time cube, path, prism, bundle, and other concepts.

Time is usually thought to be within the domain of history; however, it is of significant concern in the discipline of geography. In physics, space and time are not separated, and are combined into the concept of spacetime. Geography is subject to the laws of physics, and in studying things that occur in space, time must be considered. Time in geography is more than just the historical record of events that occurred at various discrete coordinates; it also includes modeling the dynamic movement of people, organisms, and things through space. Time facilitates movement through space, ultimately allowing things to flow through a system. The amount of time an individual, or group of people, spends in a place will often shape their attachment and perspective to that place. Time constrains the possible paths that can be taken through space, given a starting point, possible routes, and rate of travel. Visualizing time over space is challenging in terms of cartography, and includes space-time prisms, advanced 3D geovisualizations, and animated maps.

Scale

Scale in the context of a map is the ratio between a distance measured on the map and the corresponding distance as measured on the ground. This concept is fundamental to the discipline of geography, not just cartography, in that phenomena being investigated appear different depending on the scale used. Scale is the frame that geographers use to measure space, and ultimately to try and understand a place.
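
As a worked example of what a ratio scale means in practice (a minimal sketch; the 1:50,000 scale and the measured distance are made-up inputs):

def ground_distance_m(map_distance_cm: float, scale_denominator: float) -> float:
    """At a scale of 1:N, one unit on the map represents N of the same units on the ground."""
    return map_distance_cm * scale_denominator / 100.0  # convert centimetres to metres

# 4 cm measured on a 1:50,000 map corresponds to 200,000 cm = 2,000 m on the ground.
print(ground_distance_m(4, 50_000))  # 2000.0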

Additional Information

Geography is the study of places and the relationships between people and their environments. Geographers explore both the physical properties of Earth’s surface and the human societies spread across it. They also examine how human culture interacts with the natural environment, and the way that locations and places can have an impact on people. Geography seeks to understand where things are found, why they are there, and how they develop and change over time.

Ancient Geographers

The term "geography" was coined by the Greek scholar Eratosthenes in the third century B.C.E. In Greek, geo- means “earth” and -graphy means “to write.” Using geography, Eratosthenes and other Greeks developed an understanding of where their homeland was located in relation to other places, what their own and other places were like, and how people and environments were distributed. These concerns have been central to geography ever since.

Of course, the Greeks were not the only people interested in geography, nor were they the first. Throughout human history, most societies have sought to understand something about their place in the world, and the people and environments around them. Mesopotamian societies inscribed maps on clay tablets, some of which survive to this day. The earliest known attempt at mapping the world is a Babylonian clay tablet known as the Imago Mundi. This map, created in the sixth century B.C.E., is more of a metaphorical and spiritual representation of Babylonian society rather than an accurate depiction of geography. Other Mesopotamian maps were more practical, marking irrigation networks and landholdings.

Indigenous peoples around the world developed geographic ideas and practices long before Eratosthenes. For example, Polynesian navigators embarked on long-range sea voyages across the Pacific Islands as early as 3000 years ago. The people of the Marshall Islands used navigation charts made of natural materials (“stick charts”) to visualize and memorize currents, wind patterns, and island locations.

Indeed, mapmaking probably came even before writing in many places, but ancient Greek geographers were particularly influential. They developed very detailed maps of Greek city-states, including parts of Europe, Africa, and Asia. More importantly, they also raised questions about how and why different human and natural patterns came into being on Earth’s surface, and why variations existed from place to place. The effort to answer these questions about patterns and distribution led them to figure out that the world was round, to calculate Earth’s circumference, and to develop explanations of everything from the seasonal flooding of the Nile to differences in population densities from place to place.

During the Middle Ages, geography ceased to be a major academic pursuit in Europe. Advances in geography were chiefly made by scientists of the Muslim world, based around the Middle East and North Africa. Geographers of this Islamic Golden Age created an early example of a rectangular map based on a grid, a map system that is still familiar today. Islamic scholars also applied their study of people and places to agriculture, determining which crops and livestock were most suited to specific habitats or environments.

In addition to the advances in the Middle East, the Chinese empire in Asia also contributed immensely to geography. Around 1000, Chinese navigators achieved one of the most important developments in the history of geography: They were the first to use the compass for navigational purposes. In the early 1400s, the explorer Zheng He embarked on seven voyages to the lands bordering the China Sea and the Indian Ocean, establishing China’s influence throughout Southeast Asia.

Age of Discovery

Through the 13th-century travels of the Italian explorer Marco Polo, European interest in spices from Asia grew. Acquiring spices from East Asian and Arab merchants was expensive, and a major land route for the European spice trade was lost with the conquering of Constantinople by the Ottoman Empire. These and other economic factors, in addition to competition between Christian and Islamic societies, motivated European nations to send explorers in search of a sea route to China. This period of time between the 15th and 17th centuries is known in the West as the Age of Exploration or the Age of Discovery.

With the dawn of the Age of Discovery, the study of geography regained popularity in Europe. The invention of the printing press in the mid-1400s helped spread geographic knowledge by making maps and charts widely available. Improvements in shipbuilding and navigation facilitated more exploring, greatly improving the accuracy of maps and geographic information.

Greater geographic understanding allowed European powers to extend their global influence. During the Age of Discovery, European nations established colonies around the world. Improved transportation, communication, and navigational technology allowed countries such as the United Kingdom to establish colonies as far away as the Americas, Asia, Australia, and Africa. This was lucrative for European powers, but the Age of Discovery brought about nightmarish change for the people already living in the territories they colonized. When Columbus landed in the Americas in 1492, millions of Indigenous peoples already lived there. By the 1600s, 90 percent of the Indigenous population of the Americas had been wiped out by violence and diseases brought over by European explorers.

Geography was not just a subject that enabled colonialism, however. It also helped people understand the planet on which they lived. Not surprisingly, geography became an important focus of study in schools and universities.

Geography also became an important part of other academic disciplines, such as chemistry, economics, and philosophy. In fact, every academic subject has some geographic connection. Chemists study where certain chemical elements, such as gold or silver, can be found. Economists examine which nations trade with other nations, and what resources are exchanged. Philosophers analyze the responsibility people have to take care of Earth.

Emergence of Modern Geography

Some people have trouble understanding the complete scope of the discipline of geography because geography is interdisciplinary, meaning that it is not defined by one particular topic. Instead, geography is concerned with many different topics—people, culture, politics, settlements, plants, landforms, and much more. Geography asks spatial questions—how and why things are distributed or arranged in particular ways on Earth’s surface. It looks at these different distributions and arrangements at many different scales. It also asks questions about how the interaction of different human and natural activities on Earth’s surface shape the characteristics of the world in which we live.

Geography seeks to understand where things are found and why they are present in those places; how things that are located in the same or distant places influence one another over time; and why places and the people who live in them develop and change in particular ways. Raising these questions is at the heart of the “geographic perspective.”

Exploration has long been an important part of geography, and it's an important part of developing a geographic perspective. Exploration isn't limited to visiting unfamiliar places; it also means documenting and connecting relationships between spatial, sociological, and ecological elements.

The age-old practice of mapping still plays an important role in this type of exploration, but exploration can also be done by using images from satellites or gathering information from interviews. Discoveries can come by using computers to map and analyze the relationship among things in geographic space, or from piecing together the multiple forces, near and far, that shape the way individual places develop.

Applying a geographic perspective demonstrates geography’s concern not just with where things are, but with “the why of where”—a short but useful definition of geography’s central focus.

The insights that have come from geographic research show the importance of asking "the why of where" questions. Geographic studies comparing physical characteristics of continents on either side of the Atlantic Ocean, for instance, gave rise to the idea of plate tectonics: that Earth's surface is composed of large, slowly moving plates.

Studies of the geographic distribution of human settlements have shown how economic forces and modes of transport influence the location of towns and cities. For example, geographic analysis has pointed to the role of the United States Interstate Highway System and the rapid growth of car ownership in creating a boom in U.S. suburban growth after World War II. The geographic perspective helped show where Americans were moving, why they were moving there, and how their new living places affected their lives, their relationships with others, and their interactions with the environment.

Geographic analyses of the spread of diseases have pointed to the conditions that allow particular diseases to develop and spread. Dr. John Snow’s cholera map stands out as a classic example. When cholera broke out in London, England, in 1854, Snow represented the deaths per household on a street map. Using the map, he was able to trace the source of the outbreak to a water pump on the corner of Broad Street and Cambridge Street. The geographic perspective helped identify the source of the problem (the water from a specific pump) and allowed people to avoid the disease (avoiding water from that pump).
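
The kind of reasoning Snow did by hand can be sketched computationally: attach each recorded death to its nearest pump and tally the counts. The sketch below is only an illustration; the coordinates and the second pump are invented, not Snow's actual data.

import math
from collections import Counter

# Invented example data: positions in arbitrary map units.
pumps = {"Broad Street pump": (0.0, 0.0), "other pump": (5.0, 5.0)}
deaths = [(0.2, 0.1), (0.5, -0.3), (0.1, 0.4), (4.8, 5.1)]

def nearest_pump(point):
    """Return the name of the pump closest to a given death location."""
    return min(pumps, key=lambda name: math.dist(point, pumps[name]))

tally = Counter(nearest_pump(d) for d in deaths)
print(tally.most_common(1))  # [('Broad Street pump', 3)] -- the cluster points to one pump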

Investigations of the geographic impact of human activities have advanced understanding of the role of humans in transforming the surface of Earth, exposing the spatial extent of threats such as water pollution by artificial waste. For example, geographic study has shown that a large mass of tiny pieces of plastic currently floating in the Pacific Ocean is approximately the size of Texas. Satellite images and other geographic technology identified the so-called “Great Pacific Garbage Patch.”

These examples of different uses of the geographic perspective help explain why geographic study and research is important as we confront many 21st century challenges, including environmental pollution, poverty, hunger, and ethnic or political conflict.

Because the study of geography is so broad, the discipline is typically divided into specialties. At the broadest level, geography is divided into physical geography, human geography, geographic techniques, and regional geography.

Physical Geography

The natural environment is the primary concern of physical geographers, although many physical geographers also look at how humans have altered natural systems. Physical geographers study Earth’s seasons, climate, atmosphere, soil, streams, landforms, and oceans. Some disciplines within physical geography include geomorphology, glaciology, pedology, hydrology, climatology, biogeography, and oceanography.

Geomorphology is the study of landforms and the processes that shape them. Geomorphologists investigate the nature and impact of wind, ice, rivers, erosion, earthquakes, volcanoes, living things, and other forces that shape and change the surface of Earth.

Glaciologists focus on Earth’s ice fields and their impact on the planet’s climate. Glaciologists document the properties and distribution of glaciers and icebergs. Data collected by glaciologists has demonstrated the retreat of Arctic and Antarctic ice in the past century.

Pedologists study soil and how it is created, changed, and classified. Soil studies are used by a variety of professions, from farmers analyzing field fertility to engineers investigating the suitability of different areas for building heavy structures.

Hydrology is the study of Earth’s water: its properties, distribution, and effects. Hydrologists are especially concerned with the movement of water as it cycles from the ocean to the atmosphere, then back to Earth’s surface. Hydrologists study the water cycle through rainfall into streams, lakes, the soil, and underground aquifers. Hydrologists provide insights that are critical to building or removing dams, designing irrigation systems, monitoring water quality, tracking drought conditions, and predicting flood risk.

Climatologists study Earth’s climate system and its impact on Earth’s surface. For example, climatologists make predictions about El Niño, a cyclical weather phenomenon of warm surface temperatures in the Pacific Ocean. They analyze the dramatic worldwide climate changes caused by El Niño, such as flooding in Peru, drought in Australia, and, in the United States, the oddities of heavy Texas rains or an unseasonably warm Minnesota winter.

Biogeographers study the impact of the environment on the distribution of plants and animals. For example, a biogeographer might document all the places in the world inhabited by a certain spider species, and what those places have in common.

Oceanography, a related discipline of physical geography, focuses on the creatures and environments of the world’s oceans. Observation of ocean tides and currents constituted some of the first oceanographic investigations. For example, 18th-century mariners figured out the geography of the Gulf Stream, a massive current flowing like a river through the Atlantic Ocean. The discovery and tracking of the Gulf Stream helped communications and travel between Europe and the Americas.

Today, oceanographers conduct research on the impacts of water pollution, track tsunamis, design offshore oil rigs, investigate underwater eruptions of lava, and study all types of marine organisms from toxic algae to friendly dolphins.

Human Geography

Human geography is concerned with the distribution and networks of people and cultures on Earth’s surface. A human geographer might investigate the local, regional, and global impact of rising economic powers China and India, which represent 37 percent of the world’s people. They also might look at how consumers in China and India adjust to new technology and markets, and how markets respond to such a huge consumer base.

Human geographers also study how people use and alter their environments. When, for example, people allow their animals to overgraze a region, the soil erodes and grassland is transformed into desert. The impact of overgrazing on the landscape as well as agricultural production is an area of study for human geographers.

Finally, human geographers study how political, social, and economic systems are organized across geographical space. These include governments, religious organizations, and trade partnerships. The boundaries of these groups constantly change.

The main divisions within human geography reflect a concern with different types of human activities or ways of living. Some examples of human geography include urban geography, economic geography, cultural geography, political geography, social geography, and population geography. Human geographers who study geographic patterns and processes in past times are part of the subdiscipline of historical geography. Those who study how people understand maps and geographic space belong to a subdiscipline known as behavioral geography.

Many human geographers interested in the relationship between humans and the environment work in the subdisciplines of cultural geography and political geography.

Cultural geographers study how the natural environment influences the development of human culture, such as how the climate affects the agricultural practices of a region. Political geographers study the impact of political circumstances on interactions between people and their environment, as well as environmental conflicts, such as disputes over water rights.

Some human geographers focus on the connection between human health and geography. For example, health geographers create maps that track the location and spread of specific diseases. They analyze the geographic disparities of health-care access. They are very interested in the impact of the environment on human health, especially the effects of environmental hazards such as radiation, lead poisoning, or water pollution.

Geographic Techniques

Specialists in geographic techniques study the ways in which geographic processes can be analyzed and represented using different methods and technologies. Mapmaking, or cartography, is perhaps the most basic of these. Cartography has been instrumental to geography throughout the ages.

Today, almost the entire surface of Earth has been mapped with remarkable accuracy, and much of this information is available instantly on the internet. One of the most remarkable of these websites is Google Earth, which “lets you fly anywhere on Earth to view satellite imagery, maps, terrain, 3D buildings, from galaxies in outer space to the canyons of the ocean.” In essence, anyone can be a virtual explorer from the comfort of home.

Technological developments during the past 100 years have given rise to a number of other specialties for scientists studying geographic techniques. The airplane made it possible to photograph land from above. Now, there are many satellites and other above-Earth vehicles that help geographers figure out what the surface of the planet looks like and how it is changing.

Geographers looking at what above-Earth cameras and sensors reveal are specialists in remote sensing. Pictures taken from space can be used to make maps, monitor ice melt, assess flood damage, track oil spills, predict weather, or perform endless other functions. For example, by comparing satellite photos taken from 1955 to 2007, scientists from the U.S. Geological Survey (USGS) discovered that the rate of coastal erosion along Alaska’s Beaufort Sea had doubled. From 2002 to 2007, about 13.7 meters (45 feet) of coast, mostly icy permafrost, vanished into the sea each year.

Computerized systems that allow for precise calculations of how things are distributed and relate to one another have made the study of geographic information systems (GIS) an increasingly important specialty within geography. Geographic information systems are powerful databases that collect all types of information (maps, reports, statistics, satellite images, surveys, demographic data, and more) and link each piece of data to a geographic reference point, such as geographic coordinates. This data, called geospatial information, can be stored, analyzed, modeled, and manipulated in ways not possible before GIS computer technology existed.
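As a rough, toy illustration of how records become queryable once they carry a geographic reference (this is not the design of any particular GIS product, and every field name and value below is invented for the example):

# Toy GIS-style store: field names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class GeoRecord:
    lat: float        # latitude in decimal degrees
    lon: float        # longitude in decimal degrees
    layer: str        # e.g. "water_quality", "census", "satellite"
    attributes: dict  # thematic data tied to this reference point

def within_box(records, lat_min, lat_max, lon_min, lon_max):
    """Return every record whose reference point falls inside the box."""
    return [r for r in records
            if lat_min <= r.lat <= lat_max and lon_min <= r.lon <= lon_max]

records = [
    GeoRecord(40.71, -74.01, "census", {"population": 8_800_000}),
    GeoRecord(40.73, -73.99, "water_quality", {"lead_ppb": 3.2}),
    GeoRecord(34.05, -118.24, "census", {"population": 3_900_000}),
]

# Everything recorded near New York City, whatever layer it belongs to:
print(within_box(records, 40.0, 41.0, -75.0, -73.0))

Real geographic information systems add spatial indexing, map projections, and far richer data models, but the core idea is the same: once each piece of data is linked to a location, any theme can be searched, overlaid, and analyzed by place.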

The popularity and importance of GIS has given rise to a new science known as geographic information science (GISci). Geographic information scientists study patterns in nature as well as human development. They might study natural hazards, such as a fire that struck Los Angeles, California, United States, in 2008. A map posted on the internet showed the real-time spread of the fire, along with information to help people make decisions about how to evacuate quickly. GIS can also illustrate human struggles from a geographic perspective, such as the interactive online map published by the New York Times in May 2009 that showed building foreclosure rates in various regions around the New York City area.

The enormous possibilities for producing computerized maps and diagrams that can help us understand environmental and social problems have made geographic visualization an increasingly important specialty within geography. This geospatial information is in high demand by just about every institution, from government agencies monitoring water quality to entrepreneurs deciding where to locate new businesses.

Regional Geography

Regional geographers take a somewhat different approach to specialization, directing their attention to the general geographic characteristics of a region. A regional geographer might specialize in African studies, observing and documenting the people, nations, rivers, mountains, deserts, weather, trade, and other attributes of the continent. A region can be defined in different ways, such as by climate zone, cultural region, or political region. Often regional geographers have a physical or human geography specialty as well as a regional specialty.

Regional geographers may also study smaller regions, such as urban areas. A regional geographer may be interested in the way a city like Shanghai, China, is growing. They would study transportation, migration, housing, and language use, as well as the human impact on elements of the natural environment, such as the Huangpu River.

Whether geography is thought of as a discipline or as a basic feature of our world, developing an understanding of the subject is important. Some grasp of geography is essential as people seek to make sense of the world and understand their place in it. Thinking geographically helps people to be aware of the connections among and between places and to see how important events are shaped by where they take place. Finally, knowing something about geography enriches people’s lives—promoting curiosity about other people and places and an appreciation of the patterns, environments, and peoples that make up the endlessly fascinating, varied planet on which we live.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2090 2024-03-14 00:06:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2092) Geology

Gist

The word geology means 'study of the Earth'. Also known as geoscience or Earth science, geology is the primary Earth science; it looks at how the Earth formed, its structure and composition, and the types of processes acting on it.

Summary

Geology comprises the fields of study concerned with the solid Earth. Included are sciences such as mineralogy, geodesy, and stratigraphy.

An introduction to the geochemical and geophysical sciences logically begins with mineralogy, because Earth’s rocks are composed of minerals—inorganic elements or compounds that have a fixed chemical composition and that are made up of regularly aligned rows of atoms. Today one of the principal concerns of mineralogy is the chemical analysis of the some 3,000 known minerals that are the chief constituents of the three different rock types: sedimentary (formed by diagenesis of sediments deposited by surface processes); igneous (crystallized from magmas either at depth or at the surface as lavas); and metamorphic (formed by a recrystallization process at temperatures and pressures in the Earth’s crust high enough to destabilize the parent sedimentary or igneous material). Geochemistry is the study of the composition of these different types of rocks.

During mountain building, rocks become highly deformed, and the primary objective of structural geology is to elucidate the mechanism of formation of the many types of structures (e.g., folds and faults) that arise from such deformation. The allied field of geophysics has several subdisciplines, which make use of different instrumental techniques. Seismology, for example, involves the exploration of the Earth’s deep structure through the detailed analysis of recordings of elastic waves generated by earthquakes and man-made explosions. Earthquake seismology has largely been responsible for defining the location of major plate boundaries and of the dip of subduction zones down to depths of about 700 kilometres at those boundaries. In other subdisciplines of geophysics, gravimetric techniques are used to determine the shape and size of underground structures; electrical methods help to locate a variety of mineral deposits that tend to be good conductors of electricity; and paleomagnetism has played the principal role in tracking the drift of continents.

Geomorphology is concerned with the surface processes that create the landscapes of the world—namely, weathering and erosion. Weathering is the alteration and breakdown of rocks at the Earth’s surface caused by local atmospheric conditions, while erosion is the process by which the weathering products are removed by water, ice, and wind. The combination of weathering and erosion leads to the wearing down or denudation of mountains and continents, with the erosion products being deposited in rivers, internal drainage basins, and the oceans. Erosion is thus the complement of deposition. The unconsolidated accumulated sediments are transformed by the process of diagenesis and lithification into sedimentary rocks, thereby completing a full cycle of the transfer of matter from an old continent to a young ocean and ultimately to the formation of new sedimentary rocks. Knowledge of the processes of interaction of the atmosphere and the hydrosphere with the surface rocks and soils of the Earth’s crust is important for an understanding not only of the development of landscapes but also (and perhaps more importantly) of the ways in which sediments are created. This in turn helps in interpreting the mode of formation and the depositional environment of sedimentary rocks. Thus the discipline of geomorphology is fundamental to the uniformitarian approach to the Earth sciences according to which the present is the key to the past.

Geologic history provides a conceptual framework and overview of the evolution of the Earth. An early development of the subject was stratigraphy, the study of order and sequence in bedded sedimentary rocks. Stratigraphers still use the two main principles established by the late 18th-century English engineer and surveyor William Smith, regarded as the father of stratigraphy: (1) that younger beds rest upon older ones and (2) that different sedimentary beds contain different and distinctive fossils, enabling beds with similar fossils to be correlated over large distances. Today biostratigraphy uses fossils to characterize successive intervals of geologic time, though fossils serve as relatively precise time markers only back to the beginning of the Cambrian Period, about 540,000,000 years ago. The geologic time scale, back to the oldest rocks, some 4,280,000,000 years ago, can be quantified by isotopic dating techniques. This is the science of geochronology, which in recent years has revolutionized scientific perception of Earth history and which relies heavily on the measured parent-to-daughter ratio of radiogenic isotopes.
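As a sketch of the arithmetic behind isotopic dating (the standard radioactive-decay relation in its simplest form, assuming no daughter atoms were present when the rock formed and that the system has stayed closed since): with P parent atoms remaining, D daughter atoms produced, and decay constant λ,

D = P\left(e^{\lambda t} - 1\right)
\quad\Longrightarrow\quad
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right)

so a measured daughter-to-parent ratio, together with the known decay constant of the isotope system being used, fixes the age t.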

Paleontology is the study of fossils and is concerned not only with their description and classification but also with an analysis of the evolution of the organisms involved. Simple fossil forms can be found in early Precambrian rocks as old as 3,500,000,000 years, and it is widely considered that life on Earth must have begun before the appearance of the oldest rocks. Paleontological research of the fossil record since the Cambrian Period has contributed much to the theory of evolution of life on Earth.

Several disciplines of the geologic sciences have practical benefits for society. The geologist is responsible for the discovery of minerals (such as lead, chromium, nickel, and tin), oil, gas, and coal, which are the main economic resources of the Earth; for the application of knowledge of subsurface structures and geologic conditions to the building industry; and for the prevention of natural hazards or at least providing early warning of their occurrence.

Astrogeology is important in that it contributes to understanding the development of the Earth within the solar system. The U.S. Apollo program of manned missions to the Moon, for example, provided scientists with firsthand information on lunar geology, including observations on such features as meteorite craters that are relatively rare on Earth. Unmanned space probes have yielded significant data on the surface features of many of the planets and their satellites. Since the 1970s even such distant planetary systems as those of Jupiter, Saturn, and Uranus have been explored by probes.

Details

Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science.

Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole. One aspect is to demonstrate the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates.

Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.

Geological material

The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.

Mineral

Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement.

Each mineral has distinct physical properties, and there are many tests to determine each of them. Minerals are often identified through these tests; a rough matching sketch follows the list below. The specimens can be tested for:

Luster: Quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, dull.
Color: Minerals are grouped by their color. Color can be suggestive, but it is not always diagnostic, since impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help name the mineral.
Hardness: The resistance of a mineral to scratching.
Breakage pattern: A mineral can either show fracture or cleavage, the former being breakage of uneven surfaces, and the latter a breakage along closely spaced parallel planes.
Specific gravity: the density of a mineral relative to that of water.
Effervescence: Involves dripping dilute hydrochloric acid on the mineral to test for fizzing; carbonate minerals such as calcite effervesce.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste such as halite (which tastes like table salt).
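A toy sketch of how the tests above can narrow an identification (the reference table is tiny and the values are simplified, so this is illustrative only, not a field-ready tool):

# Match observed physical properties against a small reference table.
# Reference values are simplified, for illustration only.
REFERENCE = {
    "halite":    {"hardness": 2.5, "streak": "white", "specific_gravity": 2.2},
    "calcite":   {"hardness": 3.0, "streak": "white", "specific_gravity": 2.7},
    "magnetite": {"hardness": 6.0, "streak": "black", "specific_gravity": 5.2},
}

def candidates(observed):
    """Return the minerals whose reference properties fit the observations."""
    hits = []
    for name, ref in REFERENCE.items():
        if (abs(ref["hardness"] - observed["hardness"]) <= 0.5
                and ref["streak"] == observed["streak"]
                and abs(ref["specific_gravity"] - observed["specific_gravity"]) <= 0.3):
            hits.append(name)
    return hits

print(candidates({"hardness": 2.5, "streak": "white", "specific_gravity": 2.1}))
# -> ['halite']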

Rock

A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them.

When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. It can then be turned into a metamorphic rock by heat and pressure that change its mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify. Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks.

To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.

Unlithified material

Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock.[6] This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.

Magma

Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.

Whole-Earth structure

Plate tectonics

In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading[8][9] and the global distribution of mountain terrain and seismicity.

There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.

The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries.

For example:

Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.

Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another.

Transform boundaries, such as the San Andreas Fault system, are boundaries where plates slide past one another, producing widespread powerful earthquakes.

Plate tectonics has also provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It has also provided a driving force for crustal deformation and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.

Earth structure

The Earth's layered structure: (1) inner core; (2) outer core; (3) lower mantle; (4) upper mantle; (5) lithosphere; (6) crust (part of the lithosphere). Typical wave paths from earthquakes gave early seismologists insights into this layered structure.

Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.

Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a crust and lithosphere on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
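A greatly simplified sketch of why arrival times carry structural information: along a vertical path, travel time is the sum of each layer's thickness divided by its wave speed, so any change in a layer's speed shows up in the observed time. (The layer values below are round illustrative numbers, not a real Earth model, and real tomography inverts enormous numbers of curved ray paths rather than one straight one.)

# Toy one-dimensional travel-time calculation: t = sum(thickness / velocity).
# Layer thicknesses and speeds are rounded illustrative values.
layers = [
    ("crust",        35.0,   6.0),   # (name, thickness in km, P-wave speed in km/s)
    ("upper mantle", 625.0,  9.0),
    ("lower mantle", 2230.0, 12.5),
]

def vertical_travel_time(layer_list):
    return sum(thickness / velocity for _, thickness, velocity in layer_list)

t = vertical_travel_time(layers)
print(f"one-way vertical P-wave time to the base of the mantle ~ {t:.0f} s")
# Slowing any single layer lengthens this time; differences like that, measured
# over many crossing paths, are what tomographic images of wave speed are built from.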

Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.

Additional Information

Geology is the study of the nonliving things that the Earth is made of,[1][2] in particular the rocks of the Earth's crust. People who study geology are called geologists.[3] Some geologists study minerals (mineralogists) and the useful substances the rocks contain, such as ores and fossil fuels. Geologists also study the history of the Earth.

Some of the important events in the Earth's history are floods, volcanic eruptions, earthquakes, orogeny (mountain building), and plate tectonics (movement of continents).

Subjects

Geology is divided into special subjects that study one part of geology. Some of these subjects and what they focus on are:

Geomorphology – the shape of Earth's surface (its morphology)
Historical geology – the events that shaped the Earth over the last 4.5 billion years
Hydrogeology – underground water
Palaeontology – fossils; evolutionary histories
Petrology – rocks, how they form, where they are from, and what that implies
Mineralogy – minerals
Sedimentology – sediments (clays, sands, gravels, soils, etc.)
Stratigraphy – layered sedimentary rocks and how they were deposited
Petroleum geology – petroleum deposits in sedimentary rocks
Structural geology – folds, faults, and mountain-building;
Volcanology – volcanoes on land or under the ocean
Seismology – earthquakes and strong ground-motion
Engineering geology – the application of geology to civil engineering, including geologic hazards such as landslides and earthquakes[12][13]

Geotechnical engineering: Also called geotechnics, it deals with the engineering behavior of earth materials and sits at the boundary between geology and civil engineering.

Types of rock

Rocks can be very different from each other. Some are very hard and some are soft. Some rocks are very common, while others are rare. However, all the different rocks belong to three categories or types: igneous, sedimentary, and metamorphic.

Igneous rock is rock that has been made from molten rock. It forms when lava (melted rock on the surface of the Earth) or magma (melted rock below the surface of the Earth) cools and becomes hard.

Sedimentary rock is rock that has been made from sediment. Sediment is solid pieces of stuff that are moved by wind, water, or glaciers, and dropped somewhere. Sediment can be made from clay, sand, gravel and the bodies and shells of animals. The sediment gets dropped in a layer, usually in water at the bottom of a river or sea. As the sediment piles up, the lower layers get squashed together. Slowly they set hard into rock.

Metamorphic rock is rock that has been changed. Sometimes an igneous or a sedimentary rock is heated or squashed under the ground, so that it changes. Metamorphic rock is often harder than the rock that it was before it changed. Marble and slate are among the metamorphic rocks that people use to make things.

Faults

All three kinds of rock can be changed by being heated and squeezed by forces in the Earth. When this happens, faults (cracks) may appear in the rock. Geologists can learn a lot about the history of the rock by studying the patterns of the fault lines. Earthquakes happen when a fault slips suddenly.

Soil

Soil is the stuff on the ground made of lots of particles (or tiny pieces). The particles of soil come from rocks that have broken down, and from rotting leaves and animal bodies. Soil covers a lot of the surface of the Earth. Plants of all sorts grow in soil.

To find out more about types of rocks, see the rock (geology) article. To find out more about soil, see the soil article.

Principles of Stratigraphy

Geologists use some simple ideas which help them to understand the rocks they are studying. The following ideas were worked out in the early days of stratigraphy by people like Nicolaus Steno, James Hutton and William Smith; a small ordering sketch follows the list:

* Understanding the past: Geologist James Hutton said "The present is the key to the past". He meant that the sorts of changes that are happening to the Earth's surface now are the same sorts of things that happened in the past. Geologists can understand things that happened millions of years ago by looking at the changes which are happening today.
* Horizontal strata: The layers in a sedimentary rock must have been horizontal (flat) when they were deposited (laid down).
* The age of the strata: Layers at the bottom must be older than layers at the top, unless all the rocks have been turned over.
* The source of sediments: In sedimentary rocks that are made of sand or gravel, the sand or gravel must have come from an older rock.
* The age of faults: If there is a crack or fault in a rock, then the fault is younger than the rock. Rocks are in strata (lots of layers). A geologist can see if the faults go through all the layers, or only some. This helps to tell the age of the rocks.
* The age of a rock which cuts through other rocks: If an igneous rock cuts across sedimentary layers, it must be younger than the sedimentary rock.
* The relative age of fossils: A fossil in one rock type must be about the same age as the same type of fossil in the same type of rock in a different place. Likewise, a fossil in a rock layer below must be earlier than one in a higher layer.
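The rules above amount to a set of "older than" constraints, and putting rock units into relative order is then just a sorting problem. A toy Python sketch (the unit names and relationships are invented for illustration):

from graphlib import TopologicalSorter  # Python 3.9+

# For each unit, list the units that the rules say must be OLDER than it.
# Unit names and relations below are invented for illustration.
older_than = {
    "middle shale":    {"lower limestone"},                  # superposition
    "upper sandstone": {"middle shale"},                     # superposition
    "granite dyke":    {"lower limestone", "middle shale"},  # it cuts across both
    "fault":           {"granite dyke", "upper sandstone"},  # it offsets everything above
}

order = list(TopologicalSorter(older_than).static_order())
print("oldest -> youngest:", order)
# Prints one ordering consistent with the constraints,
# with "lower limestone" first and "fault" last.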


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2091 2024-03-15 00:05:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2093) Eugenics

Gist

Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population.

Summary

Eugenics is the selection of desired heritable characteristics in order to improve future generations, typically in reference to humans. The term eugenics was coined in 1883 by British explorer and natural scientist Francis Galton, who, influenced by Charles Darwin’s theory of natural selection, advocated a system that would allow “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable.” Social Darwinism, the popular theory in the late 19th century that life for humans in society was ruled by “survival of the fittest,” helped advance eugenics into serious scientific study in the early 1900s. By World War I many scientific authorities and political leaders supported eugenics. However, it ultimately failed as a science in the 1930s and ’40s, when the assumptions of eugenicists became heavily criticized and the Nazis used eugenics to support the extermination of entire races.

Early history

Although eugenics as understood today dates from the late 19th century, efforts to select matings in order to secure offspring with desirable traits date from ancient times. Plato’s Republic (c. 378 BCE) depicts a society where efforts are undertaken to improve human beings through selective breeding. Later, Italian philosopher and poet Tommaso Campanella, in City of the Sun (1623), described a utopian community in which only the socially elite are allowed to procreate. Galton, in Hereditary Genius (1869), proposed that a system of arranged marriages between men of distinction and women of wealth would eventually produce a gifted race. In 1865 the basic laws of heredity were discovered by the father of modern genetics, Gregor Mendel. His experiments with peas demonstrated that each physical trait was the result of a combination of two units (now known as genes) and could be passed from one generation to another. However, his work was largely ignored until its rediscovery in 1900. This fundamental knowledge of heredity provided eugenicists—including Galton, who influenced his cousin Charles Darwin—with scientific evidence to support the improvement of humans through selective breeding.

The advancement of eugenics was concurrent with an increasing appreciation of Darwin’s account for change or evolution within society—what contemporaries referred to as social Darwinism. Darwin had concluded his explanations of evolution by arguing that the greatest step humans could make in their own history would occur when they realized that they were not completely guided by instinct. Rather, humans, through selective reproduction, had the ability to control their own future evolution. A language pertaining to reproduction and eugenics developed, leading to terms such as positive eugenics, defined as promoting the proliferation of “good stock,” and negative eugenics, defined as prohibiting marriage and breeding between “defective stock.” For eugenicists, nature was far more contributory than nurture in shaping humanity.

During the early 1900s eugenics became a serious scientific study pursued by both biologists and social scientists. They sought to determine the extent to which human characteristics of social importance were inherited. Among their greatest concerns were the predictability of intelligence and certain deviant behaviours. Eugenics, however, was not confined to scientific laboratories and academic institutions. It began to pervade cultural thought around the globe, including the Scandinavian countries, most other European countries, North America, Latin America, Japan, China, and Russia. In the United States the eugenics movement began during the Progressive Era and remained active through 1940. It gained considerable support from leading scientific authorities such as zoologist Charles B. Davenport, plant geneticist Edward M. East, and geneticist and Nobel Prize laureate Hermann J. Muller. Political leaders in favour of eugenics included U.S. Pres. Theodore Roosevelt, Secretary of State Elihu Root, and Associate Justice of the Supreme Court John Marshall Harlan. Internationally, there were many individuals whose work supported eugenic aims, including British scientists J.B.S. Haldane and Julian Huxley and Russian scientists Nikolay K. Koltsov and Yury A. Filipchenko.

Galton had endowed a research fellowship in eugenics in 1904 and, in his will, provided funds for a chair of eugenics at University College, London. The fellowship and later the chair were occupied by Karl Pearson, a brilliant mathematician who helped to create the science of biometry, the statistical aspects of biology. Pearson was a controversial figure who believed that environment had little to do with the development of mental or emotional qualities. He felt that the high birth rate of the poor was a threat to civilization and that the “higher” races must supplant the “lower.” His views gave countenance to those who believed in racial and class superiority. Thus, Pearson shares the blame for the discredit later brought on eugenics.

In the United States, the Eugenics Record Office (ERO) was opened at Cold Spring Harbor, Long Island, New York, in 1910 with financial support from the legacy of railroad magnate Edward Henry Harriman. Whereas ERO efforts were officially overseen by Charles B. Davenport, director of the Station for Experimental Study of Evolution (one of the biology research stations at Cold Spring Harbor), ERO activities were directly superintended by Harry H. Laughlin, a professor from Kirksville, Missouri. The ERO was organized around a series of missions. These missions included serving as the national repository and clearinghouse for eugenics information, compiling an index of traits in American families, training fieldworkers to gather data throughout the United States, supporting investigations into the inheritance patterns of particular human traits and diseases, advising on the eugenic fitness of proposed marriages, and communicating all eugenic findings through a series of publications. To accomplish these goals, further funding was secured from the Carnegie Institution of Washington, John D. Rockefeller, Jr., the Battle Creek Race Betterment Foundation, and the Human Betterment Foundation.

Prior to the founding of the ERO, eugenics work in the United States was overseen by a standing committee of the American Breeder’s Association (eugenics section established in 1906), chaired by ichthyologist and Stanford University president David Starr Jordan. Research from around the globe was featured at three international congresses, held in 1912, 1921, and 1932. In addition, eugenics education was monitored in Britain by the English Eugenics Society (founded by Galton in 1907 as the Eugenics Education Society) and in the United States by the American Eugenics Society.

Following World War I, the United States gained status as a world power. A concomitant fear arose that if the healthy stock of the American people became diluted with socially undesirable traits, the country’s political and economic strength would begin to crumble. The maintenance of world peace by fostering democracy, capitalism, and, at times, eugenics-based schemes was central to the activities of “the Internationalists,” a group of prominent American leaders in business, education, publishing, and government. One core member of this group, the New York lawyer Madison Grant, aroused considerable pro-eugenic interest through his best-selling book The Passing of the Great Race (1916). Beginning in 1920, a series of congressional hearings was held to identify problems that immigrants were causing the United States. As the country’s “eugenics expert,” Harry Laughlin provided tabulations showing that certain immigrants, particularly those from Italy, Greece, and Eastern Europe, were significantly overrepresented in American prisons and institutions for the “feebleminded.” Further data were construed to suggest that these groups were contributing too many genetically and socially inferior people. Laughlin’s classification of these individuals included the feebleminded, the insane, the criminalistic, the epileptic, the inebriate, the diseased—including those with tuberculosis, leprosy, and syphilis—the blind, the deaf, the deformed, the dependent, chronic recipients of charity, paupers, and “ne’er-do-wells.” Racial overtones also pervaded much of the British and American eugenics literature. In 1923 Laughlin was sent by the U.S. secretary of labour as an immigration agent to Europe to investigate the chief emigrant-exporting nations. Laughlin sought to determine the feasibility of a plan whereby every prospective immigrant would be interviewed before embarking to the United States. He provided testimony before Congress that ultimately led to a new immigration law in 1924 that severely restricted the annual immigration of individuals from countries previously claimed to have contributed excessively to the dilution of American “good stock.”

Immigration control was but one method to control eugenically the reproductive stock of a country. Laughlin appeared at the centre of other U.S. efforts to provide eugenicists greater reproductive control over the nation. He approached state legislators with a model law to control the reproduction of institutionalized populations. By 1920, two years before the publication of Laughlin’s influential Eugenical Sterilization in the United States (1922), 3,200 individuals across the country were reported to have been involuntarily sterilized. That number tripled by 1929, and by 1938 more than 30,000 people were claimed to have met this fate. More than half of the states adopted Laughlin’s law, with California, Virginia, and Michigan leading the sterilization campaign. Laughlin’s efforts secured staunch judicial support in 1927. In the precedent-setting case of Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes, Jr., upheld the Virginia statute and claimed, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.”

Popular support for eugenics

During the 1930s eugenics gained considerable popular support across the United States. Hygiene courses in public schools and eugenics courses in colleges spread eugenic-minded values to many. A eugenics exhibit titled “Pedigree-Study in Man” was featured at the Chicago World’s Fair in 1933–34. Consistent with the fair’s “Century of Progress” theme, stations were organized around efforts to show how favourable traits in the human population could best be perpetuated. Contrasts were drawn between the emulative presidential Roosevelt family and the degenerate “Ishmael” family (one of several pseudonymous family names used, the rationale for which was not given). By studying the passage of ancestral traits, fairgoers were urged to adopt the progressive view that responsible individuals should pursue marriage ever mindful of eugenics principles. Booths were set up at county and state fairs promoting “fitter families” contests, and medals were awarded to eugenically sound families. Drawing again upon long-standing eugenic practices in agriculture, popular eugenic advertisements claimed it was about time that humans received the same attention in the breeding of better babies that had been given to livestock and crops for centuries.

Anti-eugenics sentiment

Anti-eugenics sentiment began to appear after 1910 and intensified during the 1930s. Most commonly it was based on religious grounds. For example, the 1930 papal encyclical Casti connubii condemned reproductive sterilization, though it did not specifically prohibit positive eugenic attempts to amplify the inheritance of beneficial traits. Many Protestant writings sought to reconcile age-old Christian warnings about the heritable sins of the father to pro-eugenic ideals. Indeed, most of the religion-based popular writings of the period supported positive means of improving the physical and moral makeup of humanity.

In the early 1930s Nazi Germany adopted American measures to identify and selectively reduce the presence of those deemed to be “socially inferior” through involuntary sterilization. A rhetoric of positive eugenics in the building of a master race pervaded Rassenhygiene (racial hygiene) movements. When Germany extended its practices far beyond sterilization in efforts to eliminate the Jewish and other non-Aryan populations, the United States became increasingly concerned over its own support of eugenics. Many scientists, physicians, and political leaders began to denounce the work of the ERO publicly. After considerable reflection, the Carnegie Institution formally closed the ERO at the end of 1939.

During the aftermath of World War II, eugenics became stigmatized such that many individuals who had once hailed it as a science now spoke disparagingly of it as a failed pseudoscience. Eugenics was dropped from organization and publication names. In 1954 Britain’s Annals of Eugenics was renamed Annals of Human Genetics. In 1972 the American Eugenics Society adopted the less-offensive name Society for the Study of Social Biology. Its publication, once popularly known as the Eugenics Quarterly, had already been renamed Social Biology in 1969.

U.S. Senate hearings in 1973, chaired by Sen. Ted Kennedy, revealed that thousands of U.S. citizens had been sterilized under federally supported programs. The U.S. Department of Health, Education, and Welfare proposed guidelines encouraging each state to repeal their respective sterilization laws. Other countries, most notably China, continue to support eugenics-directed programs openly in order to ensure the genetic makeup of their future.

The “new eugenics”

Despite the dropping of the term eugenics, eugenic ideas remained prevalent in many issues surrounding human reproduction. Medical genetics, a post-World War II medical specialty, encompasses a wide range of health concerns, from genetic screening and counseling to fetal gene manipulation and the treatment of adults suffering from hereditary disorders. Because certain diseases (e.g., hemophilia and Tay-Sachs disease) are now known to be genetically transmitted, many couples choose to undergo genetic screening, in which they learn the chances that their offspring have of being affected by some combination of their hereditary backgrounds. Couples at risk of passing on genetic defects may opt to remain childless or to adopt children. Furthermore, it is now possible to diagnose certain genetic defects in the unborn. Many couples choose to terminate a pregnancy that involves a genetically disabled offspring. These developments have reinforced the eugenic aim of identifying and eliminating undesirable genetic material.

Counterbalancing this trend, however, has been medical progress that enables victims of many genetic diseases to live fairly normal lives. Direct manipulation of harmful genes is also being studied. If perfected, it could obviate eugenic arguments for restricting reproduction among those who carry harmful genes. Such conflicting innovations have complicated the controversy surrounding what many call the “new eugenics.” Moreover, suggestions for expanding eugenics programs, which range from the creation of sperm banks for the genetically superior to the potential cloning of human beings, have met with vigorous resistance from the public, which often views such programs as unwarranted interference with nature or as opportunities for abuse by authoritarian regimes.

Applications of the Human Genome Project are often referred to as “Brave New World” genetics or the “new eugenics,” in part because they have helped to dramatically increase knowledge of human genetics. In addition, 21st-century technologies such as gene editing, which can potentially be used to treat disease or to alter traits, have further renewed concerns. However, the ethical, legal, and social implications of such tools are monitored much more closely than were early 20th-century eugenics programs. Applications generally are more focused on the reduction of genetic diseases than on improving intelligence.

Still, with or without the use of the term, many eugenics-related concerns are reemerging as a new group of individuals decide how to regulate the application of genetics science and technology. This gene-directed activity, in attempting to improve upon nature, may not be that distant from what Galton implied in 1909 when he described eugenics as the “study of agencies, under social control, which may improve or impair” future generations.

Details

Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have attempted to alter human gene pools by excluding people and groups judged to be inferior or promoting those judged to be superior. In recent years, the term has seen a revival in bioethical discussions on the usage of new technologies such as CRISPR and genetic screening, with heated debate around whether these technologies should be considered eugenics or not.

The concept predates the term; Plato suggested applying the principles of selective breeding to humans around 400 BCE. Early advocates of eugenics in the 19th century regarded it as a way of improving groups of people. In contemporary usage, the term eugenics is closely associated with scientific racism. Modern bioethicists who advocate new eugenics characterize it as a way of enhancing individual traits, regardless of group membership.

While eugenic principles were practiced as early as ancient Greece, the contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g., Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Such programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. Those deemed "unfit to reproduce" often included people with mental or physical disabilities, people who scored in the low ranges on different IQ tests, criminals and "deviants", and members of disfavored minority groups.

The eugenics movement became associated with Nazi Germany and the Holocaust when the defense of many of the defendants at the Nuremberg trials of 1945 to 1946 attempted to justify their human-rights abuses by claiming there was little difference between the Nazi eugenics programs and the US eugenics programs. In the decades following World War II, with more emphasis on human rights, many countries began to abandon eugenics policies, although some Western countries (the United States, Canada, and Sweden among them) continued to carry out forced sterilizations. Since the 1980s and 1990s, with new assisted reproductive technology procedures available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), concern has grown about the possible revival of a more potent form of eugenics after decades of promoting human rights.

A criticism of eugenics policies is that, regardless of whether negative or positive policies are used, they are susceptible to abuse because the genetic selection criteria are determined by whichever group has political power at the time. Furthermore, many criticize negative eugenics in particular as a violation of basic human rights, seen since 1968's Proclamation of Tehran, as including the right to reproduce. Another criticism is that eugenics policies eventually lead to a loss of genetic diversity, thereby resulting in inbreeding depression due to a loss of genetic variation. Yet another criticism of contemporary eugenics policies is that they propose to permanently and artificially disrupt millions of years of human evolution, and that attempting to create genetic lines "clean" of "disorders" can have far-reaching ancillary downstream effects in the genetic ecology, including negative effects on immunity and on species resilience. Eugenics is commonly seen in popular media, as highlighted by series like Resident Evil.

Modern eugenics

Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a back door to eugenics. This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products".

In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.

Lee Kuan Yew, the founding father of Singapore, promoted eugenics as late as 1983. A proponent of nature over nurture, he stated that "intelligence is 80% nature and 20% nurture", and attributed the successes of his children to genetics. In his speeches, Lee urged highly educated women to have more children, claiming that "social delinquents" would dominate unless their fertility rate increased. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. In 1985, incentives were significantly reduced after public uproar.

In October 2015, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology.

The National Human Genome Research Institute says that eugenics is "inaccurate", "scientifically erroneous and immoral".

Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

Prenatal screening has been called by some a contemporary form of eugenics because it may lead to abortions of fetuses with undesirable traits.

A system was proposed by California State Senator Nancy Skinner to compensate victims of the well-documented examples of prison sterilizations resulting from California's eugenics programs, but this did not pass by the bill's 2018 deadline in the Legislature.

Meanings and types

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, drawing on the recent work of his half-cousin Charles Darwin. Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu ("good" or "well") and the suffix -genēs ("born"); Galton intended it to replace the word "stirpiculture", which he had used previously but which had come to be mocked due to its perceived sexual overtones. Galton defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations".

The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.

Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today.

Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. Black states the following about the pseudoscientific past of eugenics: "As American eugenic pseudoscience thoroughly infused the scientific journals of the first three decades of the twentieth century, Nazi-era eugenics placed its unmistakable stamp on the medical literature of the twenties, thirties and forties." Black says that eugenics was the pseudoscience aimed at "improving" the human race, used by Adolf Hitler to "try to legitimize his anti-Semitism by medicalizing it, and wrapping it in the more palatable pseudoscientific facade of eugenics."

Early eugenicists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. These included Karl Pearson and Walter Weldon, who worked on this at the University College London. In his lecture "Darwinism, Medical Progress and Eugenics", Pearson claimed that everything concerning eugenics fell into the field of medicine.

Eugenic policies have been conceptually divided into two categories. Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2092 2024-03-16 00:03:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2094) Microeconomics

Gist

Microeconomics studies the decisions of individuals and firms to allocate resources of production, exchange, and consumption. Microeconomics deals with prices and production in single markets and the interaction between different markets but leaves the study of economy-wide aggregates to macroeconomics.

Summary

Microeconomics is a branch of economics that studies the behaviour of individual consumers and firms. Unlike macroeconomics, which attempts to understand how the collective behaviour of individual agents shapes aggregate economic outcomes, microeconomics focuses on the detailed study of the agents themselves, by using rigorous mathematical techniques to better describe and understand the decision-making mechanisms involved.

The branch of microeconomics that deals with household behaviour is called consumer theory. Consumer theory is built on the concept of utility: the economic measure of happiness, which increases as consumption of certain goods increases. What consumers want to consume is captured by their utility function, which measures the happiness derived from consuming a set of goods. Consumers, however, are also bound by a budget constraint, which limits the number or kinds of goods and services they can purchase. The consumers are modeled as utility maximizers: they will try to purchase the optimal number of goods that maximizes their utility, given their budget.
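In symbols, the consumer problem for two goods can be sketched as follows (a generic textbook statement; the Cobb-Douglas utility used below is chosen purely as an illustration): with income m and prices p_1, p_2, the consumer solves

\max_{x_1, x_2}\; U(x_1, x_2)
\quad \text{subject to} \quad
p_1 x_1 + p_2 x_2 \le m

For example, if U(x_1, x_2) = x_1^{a} x_2^{1-a} with 0 < a < 1, the utility-maximizing bundle spends fixed shares of the budget on each good: x_1^{*} = a m / p_1 and x_2^{*} = (1 - a) m / p_2.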

The branch of microeconomics that deals with firm behaviour is called producer theory. Producer theory views firms as entities that turn inputs—such as capital, land, and labour—into output by using a certain level of technology. Input prices and availability, as well as the level of production technology, bind firms to a certain production capacity. The goal of the firm is to produce the amount of output that maximizes its profits, subject to its input and technology constraints.
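
A similarly hedged sketch can illustrate producer theory. It assumes a hypothetical single-input production function with diminishing returns and illustrative prices; the profit-maximizing labour input follows from setting the value of the marginal product equal to the wage.

# A minimal sketch, assuming a price-taking firm with production q = A * L**b
# (0 < b < 1, diminishing returns) that chooses labour L to maximize
# profit = p * A * L**b - w * L. The first-order condition gives
# L* = (p * A * b / w) ** (1 / (1 - b)).

def optimal_labour(p, w, A, b):
    """Profit-maximizing labour input for q = A * L**b at output price p and wage w."""
    return (p * A * b / w) ** (1.0 / (1.0 - b))

# Hypothetical numbers
p, w, A, b = 10.0, 5.0, 2.0, 0.5
L = optimal_labour(p, w, A, b)
q = A * L ** b
profit = p * q - w * L
print(f"L* = {L:.2f}, output = {q:.2f}, profit = {profit:.2f}")   # L* = 4.00, output = 4.00, profit = 20.00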

Consumers and firms interact with each other across several markets. One such market is the goods market, in which firms make up the supply side and consumers who buy their products make up the demand side. Different goods market structures require microeconomists to adopt different modeling strategies. For example, a firm operating as a monopoly will face different constraints than a firm operating with many competitors in a competitive market. The microeconomist must therefore take the structure of the goods market into account when describing a firm’s behaviour.

Microeconomists constantly strive to improve the accuracy of their models of consumer and firm behaviour. On the consumer side, their efforts include rigorous mathematical modeling of utility that incorporates altruism, habit formation, and other behavioral influences on decision making. Behavioral economics is a field within microeconomics that crosses interdisciplinary boundaries to study the psychological, social, and cognitive aspects of individual decision making by using sophisticated mathematical models and natural experiments.

On the producer side, industrial organization has grown into a field within microeconomics that focuses on the detailed study of the structure of firms and how they operate in different markets. Labour economics, another field of microeconomics, studies the interactions of workers and firms in the labour market.

Details

Microeconomics is the social science that studies the implications of incentives and decisions, specifically how those affect the utilization and distribution of resources on an individual level. Microeconomics shows how and why different goods have different values, how individuals and businesses conduct and benefit from efficient production and exchange, and how individuals best coordinate and cooperate with one another. Generally speaking, microeconomics provides a more detailed understanding of individuals, firms, and markets, whereas macroeconomics provides a more aggregate view of economies.

KEY TAKEAWAYS

* Microeconomics studies the decisions of individuals and firms to allocate resources of production, exchange, and consumption.
* Microeconomics deals with prices and production in single markets and the interaction between different markets but leaves the study of economy-wide aggregates to macroeconomics.
* Microeconomists formulate various types of models based on logic and observed human behavior and test the models against real-world observations.

Understanding Microeconomics

Microeconomics is the study of what is likely to happen—also known as tendencies—when individuals make choices in response to changes in incentives, prices, resources, or methods of production. Individual actors are often grouped into microeconomic subgroups, such as buyers, sellers, and business owners. These groups create the supply and demand for resources, using money and interest rates as a pricing mechanism for coordination.

The Uses of Microeconomics

Microeconomics can be applied in a positive or normative sense. Positive microeconomics describes economic behavior and explains what to expect if certain conditions change. If a manufacturer raises the prices of cars, positive microeconomics says consumers will tend to buy fewer than before. If a major copper mine collapses in South America, the price of copper will tend to increase, because supply is restricted. Positive microeconomics could help an investor see why Apple Inc. stock prices might fall if consumers buy fewer iPhones. It could also explain why a higher minimum wage might force The Wendy's Company to hire fewer workers.

These explanations, conclusions, and predictions of positive microeconomics can then also be applied normatively to prescribe what people, businesses, and governments should do in order to attain the most valuable or beneficial patterns of production, exchange, and consumption among market participants. This extension of the implications of microeconomics from what is to what ought to be or what people ought to do also requires at least the implicit application of some sort of ethical or moral theory or principles, which usually means some form of utilitarianism.

Method of Microeconomics

Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).

The Marshallian and Walrasian methods fall under the larger umbrella of neoclassical microeconomics. Neoclassical economics focuses on how consumers and producers make rational choices to maximize their economic well being, subject to the constraints of how much income and resources they have available.

Neoclassical economists make simplifying assumptions about markets—such as perfect knowledge, infinite numbers of buyers and sellers, homogeneous goods, or static variable relationships—in order to construct mathematical models of economic behavior. These methods attempt to represent human behavior in functional mathematical language, which allows economists to develop mathematically testable models of individual markets. Neoclassicals believe in constructing measurable hypotheses about economic events, then using empirical evidence to see which hypotheses work best. In this way, they follow in the “logical positivism” or “logical empiricism” branch of philosophy. Microeconomics applies a range of research methods, depending on the question being studied and the behaviors involved.

Basic Concepts of Microeconomics

The study of microeconomics involves several key concepts, including (but not limited to):

* Incentives and behaviors: How people, as individuals or in firms, react to the situations with which they are confronted.
* Utility theory: Consumers will choose to purchase and consume a combination of goods that will maximize their happiness or “utility,” subject to the constraint of how much income they have available to spend.
* Production theory: This is the study of production, or the process of converting inputs into outputs. Producers seek to choose the combination of inputs and methods of combining them that will minimize cost in order to maximize their profits.
* Price theory: Utility and production theory interact to produce the theory of supply and demand, which determines prices in a competitive market. In a perfectly competitive market, the theory concludes that the price consumers demand equals the price producers supply, resulting in economic equilibrium (see the sketch after this list).
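
As a rough illustration of price theory, the sketch below assumes made-up linear demand and supply schedules and solves for the market-clearing price and quantity.

# A minimal sketch, assuming hypothetical linear schedules: demand Qd = a - b*P
# and supply Qs = c + d*P. Setting Qd = Qs gives the equilibrium price
# P* = (a - c) / (b + d).

def equilibrium(a, b, c, d):
    """Price and quantity where the linear demand and supply curves cross."""
    p_star = (a - c) / (b + d)
    q_star = a - b * p_star        # equals c + d * p_star at equilibrium
    return p_star, q_star

# Hypothetical parameters: Qd = 120 - 2P, Qs = 30 + 4P
p, q = equilibrium(a=120, b=2, c=30, d=4)
print(f"equilibrium price = {p:.2f}, quantity = {q:.2f}")   # 15.00 and 90.00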

Where Is Microeconomics Used?

Microeconomics has a wide variety of uses. For example, policymakers may use microeconomics to understand the effect of setting a minimum wage or subsidizing production of certain commodities. Businesses may use it to analyze pricing or production choices. Individuals may use it to assess purchasing and spending decisions.

What is Utility in Microeconomics?

In the field of microeconomics, utility refers to the degree of satisfaction that an individual receives when making an economic decision. The concept is important because decision-makers are often assumed to seek maximum utility when making choices within a market.

How Important Is Microeconomics in Our Daily Life?

Microeconomics is critical to daily life, even in ways that may not be evident to those engaging in it. Take, for example, the case of someone who is looking to buy a car. Microeconomic principles play a central role in individual decision-making: the buyer will likely consider various incentives, such as rebates or low interest rates, when assessing whether or not to purchase a vehicle, and will likely select a make and model that maximizes utility while staying within their income constraints. On the other side of the scenario, a car company will have made similar microeconomic considerations in the production and supply of cars into the market.

The Bottom Line

Microeconomics is a field of study focused on the decision-making of individuals and firms within economies. This is in contrast with macroeconomics, a field that examines economies on a broader level. Microeconomics may look at the incentives that may influence individuals to make certain purchases, how they seek to maximize utility, and how they react to restraints. For firms, microeconomics may look at how producers decide what to produce, in what quantities, and what inputs to use based on minimizing costs and maximizing profits. Microeconomists formulate various types of models based on logic and observed human behavior and test the models against real-world observations.

Additional Information

Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the national economy as a whole, which is studied in macroeconomics.

One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results.

While microeconomics focuses on firms and individuals, macroeconomics focuses on the sum total of economic activity, dealing with the issues of growth, inflation, and unemployment—and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theories has been built upon microfoundations—i.e., based upon basic assumptions about micro-level behavior.

Assumptions and definitions

Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive.

The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible since there is no guarantee that the resulting utility function would be differentiable.

Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated (LNS): for any bundle of goods, there is another bundle arbitrarily close to it that the consumer strictly prefers. Without LNS there is no guarantee that a utility-maximizing consumer will spend the entire budget; with it, the budget constraint binds at the optimum. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed.

The utility maximization problem is the heart of consumer theory. It attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well: economists use it to explain not only what or how individuals make choices but also why they make them.

The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution exists: because the budget set is both bounded and closed (compact) and the utility function is continuous, a maximum is attained. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.
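
A small numerical sketch can make the structure of the problem concrete. It assumes a hypothetical Cobb-Douglas utility and approximates the Walrasian demand by a coarse grid search over the compact budget set; this is purely illustrative and not a standard library routine.

# A minimal sketch: maximize a continuous utility over the closed and bounded
# budget set {(x, y) >= 0 : px*x + py*y <= m}. Because preferences are assumed
# locally non-satiated, the budget binds, so the search runs along the budget line.

def walrasian_demand_grid(utility, px, py, m, steps=400):
    best_bundle, best_u = (0.0, 0.0), float("-inf")
    for i in range(steps + 1):
        x = (m / px) * i / steps       # candidate quantity of good x
        y = (m - px * x) / py          # spend the rest of the budget on good y
        u = utility(x, y)
        if u > best_u:
            best_u, best_bundle = u, (x, y)
    return best_bundle

u = lambda x, y: (x ** 0.3) * (y ** 0.7)     # hypothetical Cobb-Douglas utility
x, y = walrasian_demand_grid(u, px=2.0, py=5.0, m=100.0)
print(f"approximate Walrasian demand: x = {x:.2f}, y = {y:.2f}")   # close to (15, 14)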

The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as the primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as the primitive. This model of microeconomic theory is referred to as revealed preference theory.

The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply relationship for a particular good. However, the theory works well in situations meeting these assumptions.

Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed.

This is studied in the fields of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the Utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. In positive economics (microeconomics), identifying a market failure has limited implications on its own unless the economist's own normative beliefs are mixed into the theory.

The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.

Allocation of scarce resources

Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce, weighing the costs of labour, materials, and capital against potential profit margins. Consumers choose the goods and services that will maximize their happiness, taking into account their limited wealth.

The government can make these allocation decisions, or they can be made independently by consumers and firms. For example, in the former Soviet Union, the government played a part in telling car manufacturers which cars to produce and deciding which consumers would gain access to a car.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2093 2024-03-17 00:05:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2095) Macroeconomics

Gist

Macroeconomics focuses on the performance of economies – changes in economic output, inflation, interest and foreign exchange rates, and the balance of payments. Poverty reduction, social equity, and sustainable growth are only possible with sound monetary and fiscal policies.

Summary

Macroeconomics is the study of the behaviour of a national or regional economy as a whole. It is concerned with understanding economy-wide events such as the total amount of goods and services produced, the level of unemployment, and the general behaviour of prices.

Unlike microeconomics—which studies how individual economic actors, such as consumers and firms, make decisions—macroeconomics concerns itself with the aggregate outcomes of those decisions. For that reason, in addition to using the tools of microeconomics, such as supply-and-demand analysis, macroeconomists also utilize aggregate measures such as gross domestic product (GDP), unemployment rates, and the consumer price index (CPI) to study the large-scale repercussions of micro-level decisions.

Early history and the classical school

Although complex macroeconomic structures have been characteristic of human societies since ancient times, the discipline of macroeconomics is relatively new. Until the 1930s most economic analysis was focused on microeconomic phenomena and concentrated primarily on the study of individual consumers, firms and industries. The classical school of economic thought, which derived its main principles from Scottish economist Adam Smith’s theory of self-regulating markets, was the dominant philosophy. Accordingly, such economists believed that economy-wide events such as rising unemployment and recessions are like natural phenomena and cannot be avoided. If left undisturbed, market forces would eventually correct such problems; moreover, any intervention by the government in the operation of free markets would be ineffective at best and destructive at worst.

Keynesianism

The classical view of macroeconomics, which was popularized in the 19th century as laissez-faire, was shattered by the Great Depression, which began in the United States in 1929 and soon spread to the rest of the industrialized Western world. The sheer scale of the catastrophe, which lasted almost a decade and left a quarter of the U.S. workforce without jobs, threatening the economic and political stability of many countries, was sufficient to cause a paradigm shift in mainstream macroeconomic thinking, including a reevaluation of the belief that markets are self-correcting. The theoretical foundations for that change were laid in 1935–36, when the British economist John Maynard Keynes published his monumental work The General Theory of Employment, Interest, and Money. Keynes argued that most of the adverse effects of the Great Depression could have been avoided had governments acted to counter the depression by boosting spending through fiscal policy. Keynes thus ushered in a new era of macroeconomic thought that viewed the economy as something that the government should actively manage. Economists such as Paul Samuelson, Franco Modigliani, James Tobin, Robert Solow, and many others adopted and expanded upon Keynes’s ideas, and as a result the Keynesian school of economics was born.

In contrast to the hands-off approach of classical economists, the Keynesians argued that governments have a duty to combat recessions. Although the ups and downs of the business cycle cannot be completely avoided, they can be tamed by timely intervention. At times of economic crisis, the economy is crippled because there is almost no demand for anything. As businesses’ sales decline, they begin laying off more workers, which causes a further reduction in income and demand, resulting in a prolonged recessionary cycle. Keynesians argued that, because it controls tax revenues, the government has the means to generate demand simply by increasing spending on goods and services during such times of hardship.

Monetarism

In the 1950s the first challenge to the Keynesian school of thought came from the monetarists, who were led by the influential University of Chicago economist Milton Friedman. Friedman proposed an alternative explanation of the Great Depression: he argued that what had started as a recession was turned into a prolonged depression because of the disastrous monetary policies followed by the Federal Reserve System (the central bank of the United States). If the Federal Reserve had started to increase the money supply early on, instead of doing just the opposite, the recession could have been effectively tamed before it got out of control. Over time, Friedman’s ideas were refined and came to be known as monetarism. In contrast to the Keynesian strategy of boosting demand through fiscal policy, monetarists favoured controlled increases in the money supply as a means of fighting off recessions. Beyond that, the government should avoid intervening in free markets and the rest of the economy, according to monetarists.

Later developments

A second challenge to the Keynesian school arose in the 1970s, when the American economist Robert E. Lucas, Jr., laid the foundations of what came to be known as the New Classical school of thought in economics. Lucas’s key contribution was the introduction of the rational-expectations hypothesis. As opposed to the ideas in earlier Keynesian and monetarist models that viewed the individual decision makers in the economy as shortsighted and backward-looking, Lucas argued that decision makers, insofar as they are rational, do not base their decisions solely on current and past data; they also form expectations about the future on the basis of a vast array of information available to them. That fact implies that a change in monetary policy, if it has been predicted by rational agents, will have no effect on real variables such as output and the unemployment rate, because the agents will have acted upon the implications of such a policy even before it is implemented. As a result, predictable changes in monetary policy will result in changes in nominal variables such as prices and wages but will not have any real effects.

Following Lucas’s pioneering work, economists including Finn E. Kydland and Edward C. Prescott developed rigorous macroeconomic models to explain the fluctuations of the business cycle, which came to be known in the macroeconomic literature as real-business-cycle (RBC) models. RBC models were based on strong mathematical foundations and utilized Lucas’s idea of rational expectations. An important outcome of the RBC models was that they were able to explain macroeconomic fluctuations as the product of a myriad of external and internal shocks (unpredictable events that hit the economy). Primarily, they argued that shocks that result from changes in technology can account for the majority of the fluctuations in the business cycle.

The tendency of RBC models to overemphasize technology-driven fluctuations as the primary cause of business cycles and to underemphasize the role of monetary and fiscal policy led to the development of a new Keynesian response in the 1980s. New Keynesians, including John B. Taylor and Stanley Fischer, adopted the rigorous modeling approach introduced by Kydland and Prescott in the RBC literature but expanded it by altering some key underlying assumptions. Previous models had relied on the assumption that nominal variables such as prices and wages are flexible and respond very quickly to changes in supply and demand. However, in the real world, most wages and many prices are locked in by contractual agreements. That fact introduces “stickiness,” or resistance to change, in those economic variables. Because wages and prices tend to be sticky, economic decision makers may react to macroeconomic events by altering other variables. For example, if wages are sticky, businesses will find themselves laying off more workers than they would in an unrealistic environment in which every employee’s salary could be cut in half.

Introducing market imperfections such as wage and price stickiness helped Taylor and Fischer to build macroeconomic models that represented the business cycle more accurately. In particular, they were able to show that in a world of market imperfections such as stickiness, monetary policy will have a direct impact on output and on employment in the short run, until enough time has passed for wages and prices to adjust. Therefore, central banks that control the supply of money can very well influence the business cycle in the short run. In the long run, however, the imperfections become less binding, as contracts can be renegotiated, and monetary policy can influence only prices.

Following the new Keynesian revolution, macroeconomists seemed to reach a consensus that monetary policy is effective in the short run and can be used as a tool to tame business cycles. Many other macroeconomic models were developed to measure the extent to which monetary policy can influence output. More recently, the impact of the financial crisis of 2007–08 and the Great Recession that followed it, coupled with the fact that many governments adopted a very Keynesian response to those events, brought about a revival of interest in the new Keynesian approach to macroeconomics, which seemed likely to lead to improved theories and better macroeconomic models in the future.

Details

Macroeconomics is a branch of economics that studies the behavior of an overall economy, which encompasses markets, businesses, consumers, and governments. Macroeconomics examines economy-wide phenomena such as inflation, price levels, rate of economic growth, national income, gross domestic product (GDP), and changes in unemployment.

Some of the key questions addressed by macroeconomics include: What causes unemployment? What causes inflation? What creates or stimulates economic growth? Macroeconomics attempts to measure how well an economy is performing, understand what forces drive it, and project how performance can improve.

KEY TAKEAWAYS

* Macroeconomics is the branch of economics that deals with the structure, performance, behavior, and decision-making of the whole, or aggregate, economy.
* The two main areas of macroeconomic research are long-term economic growth and shorter-term business cycles.
* Macroeconomics in its modern form is often defined as starting with John Maynard Keynes and his theories about market behavior and governmental policies in the 1930s; several schools of thought have developed since.
* In contrast to macroeconomics, microeconomics is more focused on the influences on and choices made by individual actors—such as people, companies, and industries—in the economy.

Macroeconomics:

Understanding Macroeconomics

As the term implies, macroeconomics is a field of study that analyzes an economy through a wide lens. This includes looking at variables like unemployment, GDP, and inflation. In addition, macroeconomists develop models explaining the relationships between these factors.

These models, and the forecasts they produce, are used by government entities to aid in constructing and evaluating economic, monetary, and fiscal policy. Businesses use the models to set strategies in domestic and global markets, and investors use them to predict and plan for movements in various asset classes.

Properly applied, economic theories can illuminate how economies function and the long-term consequences of particular policies and decisions. Macroeconomic theory can also help individual businesses and investors make better decisions through a more thorough understanding of the effects of broad economic trends and policies on their own industries.

History of Macroeconomics

While the term "macroeconomics" dates back to the 1940s, many of the field's core concepts have been subjects of study for much longer. Topics like unemployment, prices, growth, and trade have concerned economists since the beginning of the discipline in the 1700s. Elements of earlier work from Adam Smith and John Stuart Mill addressed issues that would now be recognized as the domain of macroeconomics.

In its modern form, macroeconomics is often defined as starting with John Maynard Keynes and his book The General Theory of Employment, Interest, and Money in 1936. In it, Keynes explained the fallout from the Great Depression, when goods went unsold and workers were unemployed.

Throughout the 20th century, Keynesian economics, as Keynes' theories became known, diverged into several other schools of thought.

Before the popularization of Keynes' theories, economists generally did not differentiate between microeconomics and macroeconomics. The same microeconomic laws of supply and demand that operate in individual goods markets were understood to interact between individual markets to bring the economy into a general equilibrium, as described by Léon Walras.

The link between goods markets and large-scale financial variables such as price levels and interest rates was explained through the unique role that money plays in the economy as a medium of exchange by economists such as Knut Wicksell, Irving Fisher, and Ludwig von Mises.

Macroeconomics vs. Microeconomics

Macroeconomics differs from microeconomics, which focuses on smaller factors that affect choices made by individuals. Individuals are typically classified into subgroups, such as buyers, sellers, and business owners. These actors interact with each other according to the laws of supply and demand for resources, using money and interest rates as pricing mechanisms for coordination. Factors studied in both microeconomics and macroeconomics typically influence one another.

A key distinction between microeconomics and macroeconomics is that macroeconomic aggregates can sometimes behave in very different ways from, or even the opposite of, similar microeconomic variables. For example, Keynes referenced the so-called Paradox of Thrift, which observes that individuals save money to build wealth on a microeconomic level. However, when everyone tries to increase their savings at once, it can contribute to a slowdown in the economy and less wealth at the aggregate, macroeconomic level, because the reduction in spending lowers business revenues and worker pay.
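
A toy Keynesian-cross calculation, with entirely hypothetical parameters, illustrates the paradox numerically: when households try to save more, equilibrium income falls while aggregate saving stays pinned to investment.

# A minimal sketch, assuming households consume C = a + c*Y and firms invest a
# fixed amount I. Equilibrium income solves Y = C + I, so Y* = (a + I) / (1 - c).

def equilibrium_income(a, c, I):
    """Equilibrium income in the Keynesian cross with C = a + c*Y and fixed I."""
    return (a + I) / (1.0 - c)

a, I = 50.0, 100.0
for c in (0.8, 0.7):                 # households become thriftier: c falls
    Y = equilibrium_income(a, c, I)
    C = a + c * Y
    S = Y - C                        # aggregate saving
    print(f"c = {c}: income = {Y:.0f}, consumption = {C:.0f}, saving = {S:.0f}")
# c = 0.8: income = 750, consumption = 650, saving = 100
# c = 0.7: income = 500, consumption = 400, saving = 100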

Limits of Macroeconomics

It is also important to understand the limitations of economic theory. Theories are often created in a vacuum and lack specific real-world details like taxation, regulation, and transaction costs. The real world is also decidedly complicated and includes matters of social preference and conscience that do not lend themselves to mathematical analysis.

It is common to find the phrase ceteris paribus, loosely translated as "all else being equal," in economic theories and discussions. Economists use this phrase to focus on specific relationships between variables being discussed, while assuming all other variables remain fixed.

Even with the limits of economic theory, it is important and worthwhile to follow significant macroeconomic indicators like GDP, inflation, and unemployment. This is because the performance of companies, and by extension their stocks, is significantly influenced by the economic conditions in which the companies operate.

Likewise, it can be invaluable to understand which theories are currently in favor, and how they may be influencing a particular government administration. Such economic theories can say much about how a government will approach taxation, regulation, government spending, and similar policies. By better understanding economics and the ramifications of economic decisions, investors can get at least a glimpse of the probable future and act accordingly with confidence.

Macroeconomic Schools of Thought

The field of macroeconomics is organized into many different schools of thought, with differing views on how the markets and their participants operate.

Classical

Classical economists held that prices, wages, and rates are flexible and markets tend to clear unless prevented from doing so by government policy; these ideas build on Adam Smith's original theories. The term “classical economists” does not actually denote a school of macroeconomic thought but is a label applied first by Karl Marx and later by Keynes to denote previous economic thinkers with whom they disagreed.

Keynesian

Keynesian economics was founded mainly based on the works of John Maynard Keynes and was the beginning of macroeconomics as a separate area of study from microeconomics. Keynesians focus on aggregate demand as the principal factor in issues like unemployment and the business cycle.

Keynesian economists believe that the business cycle can be managed by active government intervention through fiscal policy, where governments spend more in recessions to stimulate demand or spend less in expansions to decrease it. They also believe in monetary policy, where a central bank stimulates lending with lower rates or restricts it with higher ones.

Keynesian economists also believe that certain rigidities in the system, particularly sticky prices, prevent the proper clearing of supply and demand.

Monetarist

The Monetarist school is a branch of Keynesian economics credited mainly to the works of Milton Friedman. Working within and extending Keynesian models, Monetarists argue that monetary policy is generally a more effective and desirable policy tool to manage aggregate demand than fiscal policy. However, monetarists also acknowledge limits to monetary policy that make fine-tuning the economy ill-advised and instead tend to prefer adherence to policy rules that promote stable inflation rates.

New Classical

The New Classical school, along with the New Keynesians, is mainly built on integrating microeconomic foundations into macroeconomics to resolve the glaring theoretical contradictions between the two subjects.

The New Classical school emphasizes the importance of microeconomics and of models based on individual behavior. New Classical economists assume that all agents try to maximize their utility and have rational expectations, which they incorporate into macroeconomic models. New Classical economists believe that unemployment is largely voluntary and that discretionary fiscal policy is destabilizing, while inflation can be controlled with monetary policy.

New Keynesian

The New Keynesian school also attempts to add microeconomic foundations to traditional Keynesian economic theories. While New Keynesians accept that households and firms operate based on rational expectations, they still maintain that there are a variety of market failures, including sticky prices and wages. Because of this "stickiness," the government can improve macroeconomic conditions through fiscal and monetary policy.

Austrian

The Austrian School is an older school of economics that is seeing some resurgence in popularity. Austrian economic theories mainly apply to microeconomic phenomena. However, like the so-called classical economists, they never strictly separated microeconomics and macroeconomics.

Austrian theories also have important implications for what are otherwise considered macroeconomic subjects. In particular, the Austrian business cycle theory explains broadly synchronized (macroeconomic) swings in economic activity across markets due to monetary policy and the role that money and banking play in linking (microeconomic) markets to each other and across time.

Macroeconomic Indicators

Macroeconomics is a rather broad field, but two specific research areas dominate the discipline. The first area looks at the factors that determine long-term economic growth. The other looks at the causes and consequences of short-term fluctuations in national income and employment, also known as the business cycle.

Economic Growth

Economic growth refers to an increase in aggregate production in an economy. Macroeconomists try to understand the factors that either promote or retard economic growth to support economic policies that will support development, progress, and rising living standards.

Economists can use many indicators to measure economic performance. These indicators fall into 10 categories:

* Gross Domestic Product indicators: Measure how much the economy produces
* Consumer Spending indicators: Measure how much capital consumers feed back into the economy
* Income and Savings indicators: Measure how much consumers make and save
* Industry Performance indicators: Measure GDP by industry
* International Trade and Investment indicators: Indicate the balance of payments between trade partners, how much is traded, and how much is invested internationally
* Prices and Inflation indicators: Indicate fluctuations in prices paid for goods and services and changes in currency purchasing power
* Investment in Fixed Assets indicators: Indicate how much capital is tied up in fixed assets
* Employment indicators: Show employment by industry, state, county, and other areas
* Government indicators: Show how much the government spends and receives
* Special indicators: Include all other economic indicators, such as distribution of personal income, global value chains, healthcare spending, small business well-being, and more

The Business Cycle

Superimposed over long-term macroeconomic growth trends, the levels and rates of change of significant macroeconomic variables such as employment and national output go through fluctuations. These fluctuations are called expansions, peaks, recessions, and troughs—they also occur in that order. When charted on a graph, these fluctuations show that businesses perform in cycles; thus, it is called the business cycle.

The National Bureau of Economic Research (NBER) dates the business cycle, using GDP and Gross National Income to identify its turning points.

The NBER is also the agency that declares the beginning and end of recessions and expansions.

How to Influence Macroeconomics

Because macroeconomics is such a broad area, positively influencing the economy is challenging and takes much longer than changing the individual behaviors within microeconomics. Therefore, economies need to have an entity dedicated to researching and identifying techniques that can influence large-scale changes.

In the U.S., the Federal Reserve is the central bank with a mandate of promoting maximum employment and price stability. These two factors have been identified as essential to positively influencing change at the macroeconomic level.

To influence change, the Fed implements monetary policy through tools it has developed over the years, which work to advance its dual mandate. It has the following tools it can use:

* Federal Funds Rate Range: A target range set by the Fed that guides interest rates on overnight lending between depository institutions to boost short-term borrowing
* Open Market Operations: Purchase and sell securities on the open market to change the supply of reserves
* Discount Window and Rate: Lending to depository institutions to help banks manage liquidity
* Reserve Requirements: Maintaining a reserve to help banks maintain liquidity
* Interest on Reserve Balances: Encourages banks to hold reserves for liquidity and pays them interest for doing so
* Overnight Repurchase Agreement Facility: A supplementary tool used to help control the federal funds rate by selling securities and repurchasing them the next day at a more favorable rate
* Term Deposit Facility: Reserve deposits with a term, used to drain reserves from the banking system
* Central Bank Liquidity Swaps: Established swap lines for central banks from select countries to improve liquidity conditions in the U.S. and participating countries' central banks
* Foreign and International Monetary Authorities Repo Facility: A facility for institutions to enter repurchase agreements with the Fed to act as a backstop for liquidity
* Standing Overnight Repurchase Agreement Facility: A facility to encourage or discourage borrowing above a set rate, which helps to control the effective federal funds rate

The Fed continuously updates the tools it uses to influence the economy, so it has a list of many other previously used tools it can implement again if needed.

What is the most important concept in all of macroeconomics?

The most important concept in all of macroeconomics is said to be output, which refers to the total amount of goods and services a country produces. Output is often considered a snapshot of an economy at a given moment.

What are the 3 Major Concerns of Macroeconomics?

Three major macroeconomic concerns are the unemployment level, inflation, and economic growth.

Why Is Macroeconomics Important?

Macroeconomics helps a government evaluate how an economy is performing and decide on actions it can take to increase or slow growth.

The Bottom Line

Macroeconomics is a field of study used to evaluate overall economic performance and develop actions that can positively affect an economy. Economists work to understand how specific factors and actions affect output, input, spending, consumption, inflation, and employment.

The study of economics began long ago, but the field didn't start evolving into its current form until the 1700s. Macroeconomics now plays a large part in government and business decision-making.

Additional Information

Macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision-making of an economy as a whole. This includes regional, national, and global economies. Macroeconomists study topics such as output/GDP (gross domestic product) and national income, unemployment (including unemployment rates), price indices and inflation, consumption, saving, investment, energy, international trade, and international finance.

Macroeconomics and microeconomics are the two most general fields in economics. The focus of macroeconomics is often on a country (or larger entities like the whole world) and how its markets interact to produce large-scale phenomena that economists refer to as aggregate variables. In microeconomics the focus of analysis is often a single market, such as whether changes in supply or demand are to blame for price increases in the oil and automotive sectors. From introductory classes in "principles of economics" through doctoral studies, the macro/micro divide is institutionalized in the field of economics. Most economists identify as either macro- or micro-economists.

Macroeconomics is traditionally divided into topics along different time frames: the analysis of short-term fluctuations over the business cycle, the determination of structural levels of variables like inflation and unemployment in the medium (i.e. unaffected by short-term deviations) term, and the study of long-term economic growth. It also studies the consequences of policies targeted at mitigating fluctuations like fiscal or monetary policy, using taxation and government expenditure or interest rates, respectively, and of policies that can affect living standards in the long term, e.g. by affecting growth rates.

Macroeconomics as a separate field of research and study is generally recognized to start in 1936, when John Maynard Keynes published his The General Theory of Employment, Interest and Money, but its intellectual predecessors are much older. Since World War II, various macroeconomic schools of thought like Keynesians, monetarists, new classical and new Keynesian economists have made contributions to the development of the macroeconomic research mainstream.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2094 2024-03-18 00:01:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2096) Astronomy

Gist

Astronomy is the study of everything in the universe beyond Earth's atmosphere. That includes objects we can see with our naked eyes, like the Sun , the Moon , the planets, and the stars . It also includes objects we can only see with telescopes or other instruments, like faraway galaxies and tiny particles.

Summary

Astronomy is the science that encompasses the study of all extraterrestrial objects and phenomena. Until the invention of the telescope and the discovery of the laws of motion and gravity in the 17th century, astronomy was primarily concerned with noting and predicting the positions of the Sun, Moon, and planets, originally for calendrical and astrological purposes and later for navigational uses and scientific interest. The catalog of objects now studied is much broader and includes, in order of increasing distance, the solar system, the stars that make up the Milky Way Galaxy, and other, more distant galaxies. With the advent of scientific space probes, Earth also has come to be studied as one of the planets, though its more-detailed investigation remains the domain of the Earth sciences.

The scope of astronomy

Since the late 19th century, astronomy has expanded to include astrophysics, the application of physical and chemical knowledge to an understanding of the nature of celestial objects and the physical processes that control their formation, evolution, and emission of radiation. In addition, the gases and dust particles around and between the stars have become the subjects of much research. Study of the nuclear reactions that provide the energy radiated by stars has shown how the diversity of atoms found in nature can be derived from a universe that, following the first few minutes of its existence, consisted only of hydrogen, helium, and a trace of lithium. Concerned with phenomena on the largest scale is cosmology, the study of the evolution of the universe. Astrophysics has transformed cosmology from a purely speculative activity to a modern science capable of predictions that can be tested.

Its great advances notwithstanding, astronomy is still subject to a major constraint: it is inherently an observational rather than an experimental science. Almost all measurements must be performed at great distances from the objects of interest, with no control over such quantities as their temperature, pressure, or chemical composition. There are a few exceptions to this limitation—namely, meteorites (most of which are from the asteroid belt, though some are from the Moon or Mars), rock and soil samples brought back from the Moon, samples of comet and asteroid dust returned by robotic spacecraft, and interplanetary dust particles collected in or above the stratosphere. These can be examined with laboratory techniques to provide information that cannot be obtained in any other way. In the future, space missions may return surface materials from Mars, or other objects, but much of astronomy appears otherwise confined to Earth-based observations augmented by observations from orbiting satellites and long-range space probes and supplemented by theory.

Determining astronomical distances

A central undertaking in astronomy is the determination of distances. Without a knowledge of astronomical distances, the size of an observed object in space would remain nothing more than an angular diameter and the brightness of a star could not be converted into its true radiated power, or luminosity. Astronomical distance measurement began with a knowledge of Earth’s diameter, which provided a base for triangulation. Within the inner solar system, some distances can now be better determined through the timing of radar reflections or, in the case of the Moon, through laser ranging. For the outer planets, triangulation is still used. Beyond the solar system, distances to the closest stars are determined through triangulation, in which the diameter of Earth’s orbit serves as the baseline and shifts in stellar parallax are the measured quantities. Stellar distances are commonly expressed by astronomers in parsecs (pc), kiloparsecs, or megaparsecs. (1 pc = 3.086 × 10^18 cm, or about 3.26 light-years [1.92 × 10^13 miles].) Distances can be measured out to around a kiloparsec by trigonometric parallax (see star: Determining stellar distances). The accuracy of measurements made from Earth’s surface is limited by atmospheric effects, but measurements made from the Hipparcos satellite in the 1990s extended the scale to stars as far as 650 parsecs, with an accuracy of about a thousandth of an arc second. The Gaia satellite is expected to measure stars as far away as 10 kiloparsecs to an accuracy of 20 percent. Less-direct measurements must be used for more-distant stars and for galaxies.
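
As a small illustration of the parallax rule, a star with an annual parallax of p arcseconds lies at a distance of 1/p parsecs. The Python sketch below uses an approximate parallax for a nearby star (Proxima Centauri, roughly 0.768 arcseconds) as an example value.

# A minimal sketch of trigonometric parallax: distance in parsecs is the
# reciprocal of the parallax angle in arcseconds.

PC_IN_LIGHT_YEARS = 3.26     # 1 parsec is about 3.26 light-years

def parallax_to_distance(parallax_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds."""
    return 1.0 / parallax_arcsec

d_pc = parallax_to_distance(0.768)     # approximate parallax of Proxima Centauri
print(f"distance = {d_pc:.2f} pc = {d_pc * PC_IN_LIGHT_YEARS:.2f} light-years")
# distance = 1.30 pc = 4.24 light-years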

Two general methods for determining galactic distances are described here. In the first, a clearly identifiable type of star is used as a reference standard because its luminosity has been well determined. This requires observation of such stars that are close enough to Earth that their distances and luminosities have been reliably measured. Such a star is termed a “standard candle.” Examples are Cepheid variables, whose brightness varies periodically in well-documented ways, and certain types of supernova explosions that have enormous brilliance and can thus be seen out to very great distances. Once the luminosities of such nearer standard candles have been calibrated, the distance to a farther standard candle can be calculated from its calibrated luminosity and its actual measured intensity.
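
A brief sketch of the standard-candle idea, assuming the inverse-square law and hypothetical brightness measurements: if a distant candle has the same luminosity as a calibrated nearby one, the ratio of their measured fluxes fixes the ratio of their distances.

# A minimal sketch: flux F = L / (4*pi*d**2), so d = sqrt(L / (4*pi*F)).
# Equivalently, two candles of equal luminosity satisfy d2/d1 = sqrt(F1/F2).
import math

def distance_from_flux(luminosity, flux):
    """Distance implied by the inverse-square law (consistent units assumed)."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

def distance_ratio(flux_near, flux_far):
    """How many times farther the fainter candle is, assuming equal luminosity."""
    return math.sqrt(flux_near / flux_far)

# Hypothetical example: a candle measured to be 10,000 times fainter than an
# identical one known to lie at 20 pc must be 100 times farther away.
print(f"farther candle is at about {20.0 * distance_ratio(1.0, 1e-4):.0f} pc")   # 2000 pc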

The second method for galactic distance measurements makes use of the observation that the distances to galaxies generally correlate with the speeds with which those galaxies are receding from Earth (as determined from the Doppler shift in the wavelengths of their emitted light). This correlation is expressed in the Hubble law: velocity = H × distance, in which H denotes Hubble’s constant, which must be determined from observations of the rate at which the galaxies are receding. There is widespread agreement that H lies between 67 and 73 kilometres per second per megaparsec (km/sec/Mpc). H has been used to determine distances to remote galaxies in which standard candles have not been found.
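
A minimal sketch of a Hubble-law distance estimate follows, using the range of H quoted above and a hypothetical recession velocity; rearranging velocity = H × distance gives distance = velocity / H.

# A minimal sketch: distance (in Mpc) = recession velocity (km/s) / H (km/s/Mpc).

def hubble_distance_mpc(velocity_km_s, H_km_s_per_mpc):
    """Distance in megaparsecs implied by the Hubble law."""
    return velocity_km_s / H_km_s_per_mpc

v = 7000.0                                   # km/s, a hypothetical measured recession velocity
for H in (67.0, 73.0):                       # the quoted range for Hubble's constant
    print(f"H = {H} km/s/Mpc -> distance = {hubble_distance_mpc(v, H):.0f} Mpc")
# H = 67.0 -> 104 Mpc;  H = 73.0 -> 96 Mpc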

Details

Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry in order to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole.

Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars.

Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.

Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.

Etymology

Astronomy means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct.

Use of terms "astronomy" and "astrophysics"

"Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties", while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.

History

Ancient times

In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that may have had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.

Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Egypt, Mesopotamia, Greece, Persia, India, China, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe was explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.

A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.

Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.

Middle Ages

Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock; the Rectangulus, which allowed for the measurement of angles between planets and other astronomical bodies; and an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (a predecessor of the modern scientific theory of inertia), which was able to show that planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.

Astronomy also flourished in the Islamic world and in other regions. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Iranian scholar Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.

It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. In post-classical West Africa, astronomers studied the movement of stars and their relation to the seasons, crafting charts of the heavens as well as precise diagrams of the orbits of the other planets based on complex mathematical calculations. Songhai historian Mahmud Kati documented a meteor shower in August 1583. Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.

For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.

Scientific revolution

During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.
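
Newton's law of gravitation makes the planetary periods that Kepler described calculable. A minimal sketch of that link, using T = 2π√(a³/GM) with standard constants (the numerical values are assumptions, not given in the text):

```python
import math

# Orbital period from Newton's gravitation: T = 2*pi*sqrt(a^3 / (G*M)).
# The constants are standard reference values (assumed, not from the post).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the Sun, kg
a_earth = 1.496e11   # Earth's orbital semi-major axis, m (1 astronomical unit)

period_seconds = 2 * math.pi * math.sqrt(a_earth**3 / (G * M_sun))
print(period_seconds / 86400)   # ~365 days, recovering the length of the year
```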

Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars, and more extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.

During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.

Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.

The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.
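
Hubble's law, cited above as evidence for the expansion, states that a galaxy's recession velocity grows in proportion to its distance. A minimal sketch (the value of the Hubble constant and the example distance are assumed round figures):

```python
# Hubble's law: recession velocity v ≈ H0 * d
H0 = 70.0             # Hubble constant in km/s per megaparsec (assumed round value)
distance_mpc = 100.0  # distance to a hypothetical galaxy, in megaparsecs

velocity_km_s = H0 * distance_mpc
print(velocity_km_s)  # 7000 km/s: galaxies twice as far recede twice as fast
```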

Observational astronomy

The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation. Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
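
The subsections below quote approximate wavelength ranges for each observational band. A minimal sketch of that categorization (the band edges used here are rough conventions chosen for illustration; real boundaries vary between sources):

```python
def spectral_band(wavelength_m):
    """Classify a wavelength in metres into the broad observational bands
    discussed below. The edges are approximate conventions, not hard limits."""
    if wavelength_m > 1e-3:      # longer than about 1 mm
        return "radio"
    if wavelength_m > 7e-7:      # ~700 nm to 1 mm
        return "infrared"
    if wavelength_m > 4e-7:      # ~400 to 700 nm
        return "visible"
    if wavelength_m > 1e-8:      # ~10 to 400 nm
        return "ultraviolet"
    if wavelength_m > 1e-11:     # ~0.01 to 10 nm
        return "X-ray"
    return "gamma ray"

print(spectral_band(0.21))     # the 21 cm hydrogen line -> 'radio'
print(spectral_band(550e-9))   # green light -> 'visible'
```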

Radio astronomy

Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.

Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons spiral around magnetic field lines. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.

A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.

Infrared astronomy

Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous galactic protostars and their host star clusters. With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.
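
The claim that cool objects radiate mainly in the infrared (and, further below, that gas at around ten million kelvins shows up in X-rays) follows from Wien's displacement law, which places the peak of thermal emission at b/T. A minimal sketch (the Wien constant and the example temperatures are standard values assumed for illustration):

```python
# Wien's displacement law: peak thermal wavelength = b / T
b = 2.898e-3   # Wien displacement constant in metre-kelvins (standard value)

for temperature_k in (300, 5800, 1e7):   # cool dust, a Sun-like star, hot X-ray gas
    peak_m = b / temperature_k
    print(temperature_k, peak_m)
# ~9.7e-6 m (infrared), ~5.0e-7 m (visible), ~2.9e-10 m (X-rays)
```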

Optical astronomy

Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly using charge-coupled devices (CCDs), and recorded on digital media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), that same equipment can be used to observe some near-ultraviolet and near-infrared radiation.

Ultraviolet astronomy

Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.

X-ray astronomy

X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10^7 (10 million) kelvins, and thermal emission from thick gases above 10^7 kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.

Gamma-ray astronomy

Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.

Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.

Fields not based on the electromagnetic spectrum

In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.

In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.

Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole. A second gravitational wave was detected on 26 December 2015, and further observations are expected to continue, although the waves require extremely sensitive instruments to detect.

The combination of observations made using electromagnetic radiation, neutrinos, or gravitational waves with other complementary information is known as multi-messenger astronomy.

Astrometry and celestial mechanics

One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.

Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions between the Earth and those objects.

The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.
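
The parallax baseline mentioned above converts directly into distance: a star with an annual parallax of p arcseconds lies at 1/p parsecs. A minimal sketch (the parallax figure for Proxima Centauri is an approximate value assumed for illustration):

```python
# Distance from stellar parallax: d (parsecs) = 1 / p (arcseconds)
parallax_arcsec = 0.768           # roughly Proxima Centauri's parallax (assumed)

distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * 3.26  # one parsec is about 3.26 light-years
print(distance_pc, distance_ly)   # ~1.3 parsecs, ~4.2 light-years
```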

During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.

Additional Information

Astronomy is one of the oldest scientific disciplines that has evolved from the humble beginnings of counting stars and charting constellations with the naked eye to the impressive showcase of humankind's technological capabilities that we see today.

Despite the progress astronomy has made over millennia, astronomers are still working hard to understand the nature of the universe and humankind's place in it. That question has only become more complex as our understanding of the universe has grown with our expanding technical capabilities.

As the depths of the sky opened in front of our increasingly sophisticated telescopes, and sensitive detectors enabled us to spot the weirdest types of signals, the star-studded sky that our ancestors gazed at turned into a zoo of mind-boggling objects including black holes, white dwarfs, neutron stars and supernovas.

At the same time, the two-dimensional constellations that inspired the imagination of early sky-watchers were reduced to an optical illusion, behind which the swirling of galaxies hurtling through spacetime reveals a story that began with the Big Bang some 13.8 billion years ago.

What is astronomy?

Astronomy uses mathematics, physics and chemistry to study celestial objects and phenomena.

What are the four types of astronomy?

Astronomy cannot be divided solely into four types. It is a broad discipline encompassing many subfields including observational astronomy, theoretical astronomy, planetary science, astrophysics, cosmology and astrobiology.

What do you study in astronomy?

Those who study astronomy explore the structure and origin of the universe including the stars, planets, galaxies and black holes that reside in it. Astronomers aim to answer fundamental questions about our universe through theory and observation.

What's the difference between astrology and astronomy?

Astrology is widely considered to be a pseudoscience that attempts to explain how the position and motion of celestial objects such as planets affect people and events on Earth. Astronomy is the scientific study of the universe using mathematics, physics, and chemistry.

Most of today's citizens of planet Earth live surrounded by the inescapable glow of modern urban lighting and can hardly imagine the awe-inspiring presence of the pristine star-studded sky that illuminated the nights for ancient tribes and early civilizations. We can guess how drawn our ancestors were to that overwhelming sight from the role that sky-watching played in their lives. 

Ancient monuments, such as the 5,000-year-old Stonehenge in the U.K., were built to reflect the journey of the sun in the sky, which helped keep track of time and organize life in an age that depended entirely on the seasons. Art pieces depicting the moon and stars have been discovered dating back several thousand years, such as the "world's oldest star map," the bronze-age Nebra disk.

Ancient Assyro-Babylonians around 1,000 B.C. systematically observed and recorded periodical motions of celestial bodies, according to the European Space Agency (ESA), and similar records exist also from early China. In fact, according to the University of Oregon, astronomy can be considered the first science as it's the one for which the oldest written records exist.

Ancient Greeks elevated sky-watching to a new level. Aristarchus of Samos made the first (highly inaccurate) attempt to calculate the distance from Earth to the sun and moon, and Hipparchus, sometimes considered the father of empirical astronomy, cataloged the positions of over 800 stars using just the naked eye. He also developed the brightness scale that is still in use today, according to ESA.
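
Hipparchus's brightness scale survives as the modern stellar magnitude system, later formalized so that a difference of five magnitudes corresponds to a flux ratio of exactly 100. A minimal sketch of that relation (the example flux ratios are arbitrary):

```python
import math

def magnitude_difference(flux_ratio):
    """Pogson's relation: m1 - m2 = -2.5 * log10(F1 / F2).
    A flux ratio of 100 corresponds to exactly 5 magnitudes."""
    return -2.5 * math.log10(flux_ratio)

print(magnitude_difference(100))     # -5.0: the brighter star sits 5 magnitudes lower
print(magnitude_difference(2.512))   # ~-1.0: each magnitude is a factor of ~2.512 in flux
```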

20-intriguing-facts-about-astronomers-1692851238.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2095 2024-03-19 00:05:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2097) Neurology

Gist

Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves.

Summary

Neurology is a medical specialty concerned with the nervous system and its functional or organic disorders. Neurologists diagnose and treat diseases and disorders of the brain, spinal cord, and nerves.

The first scientific studies of nerve function in animals were performed in the early 18th century by English physiologist Stephen Hales and Scottish physiologist Robert Whytt. Knowledge was gained in the late 19th century about the causes of aphasia, epilepsy, and motor problems arising from brain damage. French neurologist Jean-Martin Charcot and English neurologist William Gowers described and classified many diseases of the nervous system. The mapping of the functional areas of the brain through selective electrical stimulation also began in the 19th century. Despite these contributions, however, most knowledge of the brain and nervous functions came from studies in animals and from the microscopic analysis of nerve cells.

The electroencephalograph (EEG), which records electrical brain activity, was invented in the 1920s by Hans Berger. Development of the EEG, analysis of cerebrospinal fluid obtained by lumbar puncture (spinal tap), and development of cerebral angiography allowed neurologists to increase the precision of their diagnoses and develop specific therapies and rehabilitative measures. Further aiding the diagnosis and treatment of brain disorders were the development of computerized axial tomography (CT) scanning in the early 1970s and magnetic resonance imaging (MRI) in the 1980s, both of which yielded detailed, noninvasive views of the inside of the brain. (See brain scanning.) The identification of chemical agents in the central nervous system and the elucidation of their roles in transmitting and blocking nerve impulses have led to the introduction of a wide array of medications that can correct or alleviate various neurological disorders including Parkinson disease, multiple sclerosis, and epilepsy. Neurosurgery, a medical specialty related to neurology, has also benefited from CT scanning and other increasingly precise methods of locating lesions and other abnormalities in nervous tissues.

Details

Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves. Neurological practice relies heavily on the field of neuroscience, the scientific study of the nervous system.

A neurologist is a physician specializing in neurology and trained to investigate, diagnose and treat neurological disorders. Neurologists diagnose and treat myriad neurologic conditions, including stroke, epilepsy, movement disorders such as Parkinson's disease, brain infections, autoimmune neurologic disorders such as multiple sclerosis, sleep disorders, brain injury, headache disorders like migraine, tumors of the brain and dementias such as Alzheimer's disease. Neurologists may also have roles in clinical research, clinical trials, and basic or translational research. Neurology is a nonsurgical specialty; its corresponding surgical specialty is neurosurgery.

History

The academic discipline began between the 15th and 16th centuries with the work and research of many neurologists such as Thomas Willis, Robert Whytt, Matthew Baillie, Charles Bell, Moritz Heinrich Romberg, Duchenne de Boulogne, William A. Hammond, Jean-Martin Charcot, C. Miller Fisher and John Hughlings Jackson. Neo-Latin neurologia appeared in various texts from 1610 denoting an anatomical focus on the nerves (variably understood as vessels), and was most notably used by Willis, who preferred Greek νευρολογία.

Many neurologists also have additional training or interest in one area of neurology, such as stroke, epilepsy, headache, neuromuscular disorders, sleep medicine, pain management, or movement disorders.

In the United States and Canada, neurologists are physicians who have completed a postgraduate training period known as residency specializing in neurology after graduation from medical school. This additional training period typically lasts four years, with the first year devoted to training in internal medicine. On average, neurologists complete a total of eight to ten years of training. This includes four years of medical school, four years of residency and an optional one to two years of fellowship.

While neurologists may treat general neurologic conditions, some neurologists go on to receive additional training focusing on a particular subspecialty in the field of neurology. These training programs are called fellowships, and are one to two years in duration. Subspecialties include brain injury medicine, clinical neurophysiology, epilepsy, neurodevelopmental disabilities, neuromuscular medicine, pain medicine, sleep medicine, neurocritical care, vascular neurology (stroke), behavioral neurology, child neurology, headache, multiple sclerosis, neuroimaging, neurooncology, and neurorehabilitation.

In Germany, a compulsory year of psychiatry must be completed as part of a neurology residency.

In the United Kingdom and Ireland, neurology is a subspecialty of general (internal) medicine. After five years of medical school and two years as a Foundation Trainee, an aspiring neurologist must pass the examination for Membership of the Royal College of Physicians (or the Irish equivalent) and complete two years of core medical training before entering specialist training in neurology. Up to the 1960s, some intending to become neurologists would also spend two years working in psychiatric units before obtaining a diploma in psychological medicine. However, that was uncommon and, now that the MRCPsych takes three years to obtain, would no longer be practical. A period of research is essential, and obtaining a higher degree aids career progression. Many have found that career progression was eased by an attachment to the Institute of Neurology at Queen Square, London. Some neurologists enter the field of rehabilitation medicine (known as physiatry in the US) to specialise in neurological rehabilitation, which may include stroke medicine, as well as traumatic brain injuries.

Physical examination

During a neurological examination, the neurologist reviews the patient's health history with special attention to the patient's neurologic complaints. The patient then takes a neurological exam. Typically, the exam tests mental status, function of the cranial nerves (including vision), strength, coordination, reflexes, sensation and gait. This information helps the neurologist determine whether the problem exists in the nervous system and the clinical localization. Localization of the pathology is the key process by which neurologists develop their differential diagnosis. Further tests may be needed to confirm a diagnosis and ultimately guide therapy and appropriate management. Useful adjunct imaging studies in neurology include CT scanning and MRI. Other tests used to assess muscle and nerve function include nerve conduction studies and electromyography.

Clinical tasks

Neurologists examine patients who are referred to them by other physicians in both the inpatient and outpatient settings. Neurologists begin their interactions with patients by taking a comprehensive medical history, and then performing a physical examination focusing on evaluating the nervous system. Components of the neurological examination include assessment of the patient's cognitive function, cranial nerves, motor strength, sensation, reflexes, coordination, and gait.

In some instances, neurologists may order additional diagnostic tests as part of the evaluation. Commonly employed tests in neurology include imaging studies such as computed axial tomography (CAT) scans, magnetic resonance imaging (MRI), and ultrasound of major blood vessels of the head and neck. Neurophysiologic studies, including electroencephalography (EEG), needle electromyography (EMG), nerve conduction studies (NCSs) and evoked potentials are also commonly ordered. Neurologists frequently perform lumbar punctures to assess characteristics of a patient's cerebrospinal fluid. Advances in genetic testing have made genetic testing an important tool in the classification of inherited neuromuscular disease and diagnosis of many other neurogenetic diseases. The role of genetic influences on the development of acquired neurologic diseases is an active area of research.
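
As an illustration of the arithmetic behind one of the tests mentioned above, a motor nerve conduction study estimates conduction velocity from the distance between two stimulation sites and the difference in response latencies. A minimal sketch with made-up example numbers (real studies involve many more measurements and clinical interpretation than this):

```python
# Motor nerve conduction velocity:
# velocity = distance between stimulation sites / difference in onset latencies.
# All values below are hypothetical example numbers, not patient data.
distance_mm = 240.0          # distance between proximal and distal stimulation sites
proximal_latency_ms = 7.8    # response latency with proximal stimulation
distal_latency_ms = 3.4      # response latency with distal stimulation

velocity_m_per_s = distance_mm / (proximal_latency_ms - distal_latency_ms)
print(velocity_m_per_s)      # ~54.5 m/s (mm/ms equals m/s), a typical adult value
```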

Some of the commonly encountered conditions treated by neurologists include headaches, radiculopathy, neuropathy, stroke, dementia, seizures and epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, Parkinson's disease, Tourette's syndrome, multiple sclerosis, head trauma, sleep disorders, neuromuscular diseases, and various infections and tumors of the nervous system. Neurologists are also asked to evaluate unresponsive patients on life support to confirm brain death.

Treatment options vary depending on the neurological problem. They can include referring the patient to a physiotherapist, prescribing medications, or recommending a surgical procedure.

Some neurologists specialize in certain parts of the nervous system or in specific procedures. For example, clinical neurophysiologists specialize in the use of EEG and intraoperative monitoring to diagnose certain neurological disorders. Other neurologists specialize in the use of electrodiagnostic medicine studies – needle EMG and NCSs. In the US, physicians do not typically specialize in all the aspects of clinical neurophysiology – i.e. sleep, EEG, EMG, and NCSs. The American Board of Clinical Neurophysiology certifies US physicians in general clinical neurophysiology, epilepsy, and intraoperative monitoring. The American Board of Electrodiagnostic Medicine certifies US physicians in electrodiagnostic medicine and certifies technologists in nerve-conduction studies. Sleep medicine is a subspecialty field in the US under several medical specialties including anesthesiology, internal medicine, family medicine, and neurology. Neurosurgery is a distinct specialty that involves a different training path, and emphasizes the surgical treatment of neurological disorders.

Also, many nonmedical doctors, those with doctoral degrees (usually PhDs) in subjects such as biology and chemistry, study and research the nervous system. Working in laboratories in universities, hospitals, and private companies, these neuroscientists perform clinical and laboratory experiments and tests to learn more about the nervous system and find cures or new treatments for diseases and disorders.

A great deal of overlap occurs between neuroscience and neurology. Many neurologists work in academic training hospitals, where they conduct research as neuroscientists in addition to treating patients and teaching neurology to medical students.

General caseload

Neurologists are responsible for the diagnosis, treatment, and management of all the conditions mentioned above. When surgical or endovascular intervention is required, the neurologist may refer the patient to a neurosurgeon or an interventional neuroradiologist. In some countries, additional legal responsibilities of a neurologist may include making a finding of brain death when it is suspected that a patient has died. Neurologists frequently care for people with hereditary (genetic) diseases when the major manifestations are neurological, as is frequently the case. Lumbar punctures are frequently performed by neurologists. Some neurologists may develop an interest in particular subfields, such as stroke, dementia, movement disorders, neurointensive care, headaches, epilepsy, sleep disorders, chronic pain management, multiple sclerosis, or neuromuscular diseases.

Overlapping areas

Some overlap also occurs with other specialties, varying from country to country and even within a local geographic area. Acute head trauma is most often treated by neurosurgeons, whereas sequelae of head trauma may be treated by neurologists or specialists in rehabilitation medicine. Although stroke cases have been traditionally managed by internal medicine or hospitalists, the emergence of vascular neurology and interventional neuroradiology has created a demand for stroke specialists. The establishment of Joint Commission-certified stroke centers has increased the role of neurologists in stroke care in many primary, as well as tertiary, hospitals. Some cases of nervous system infectious diseases are treated by infectious disease specialists. Most cases of headache are diagnosed and treated primarily by general practitioners, at least the less severe cases. Likewise, most cases of sciatica are treated by general practitioners, though they may be referred to neurologists or surgeons (neurosurgeons or orthopedic surgeons). Sleep disorders are also treated by pulmonologists and psychiatrists. Cerebral palsy is initially treated by pediatricians, but care may be transferred to an adult neurologist after the patient reaches a certain age. Physical medicine and rehabilitation physicians may treat patients with neuromuscular diseases with electrodiagnostic studies (needle EMG and nerve-conduction studies) and other diagnostic tools. In the United Kingdom and other countries, many of the conditions encountered by older patients such as movement disorders, including Parkinson's disease, stroke, dementia, or gait disorders, are managed predominantly by specialists in geriatric medicine.

Clinical neuropsychologists are often called upon to evaluate brain-behavior relationships for the purpose of assisting with differential diagnosis, planning rehabilitation strategies, documenting cognitive strengths and weaknesses, and measuring change over time (e.g., for identifying abnormal aging or tracking the progression of a dementia).

Relationship to clinical neurophysiology

In some countries such as the United States and Germany, neurologists may subspecialize in clinical neurophysiology, the field responsible for EEG and intraoperative monitoring, or in electrodiagnostic medicine (nerve conduction studies, EMG, and evoked potentials). In other countries, this is an autonomous specialty (e.g., United Kingdom, Sweden, Spain).

Overlap with psychiatry

In the past, prior to the advent of more advanced diagnostic techniques such as MRI, some neurologists considered psychiatry and neurology to overlap. Although mental illnesses are believed by many to be neurological disorders affecting the central nervous system, traditionally they are classified separately, and treated by psychiatrists. In a 2002 review article in the American Journal of Psychiatry, Professor Joseph B. Martin, Dean of Harvard Medical School and a neurologist by training, wrote, "the separation of the two categories is arbitrary, often influenced by beliefs rather than proven scientific observations. And the fact that the brain and mind are one makes the separation artificial anyway".

Neurological disorders often have psychiatric manifestations, such as post-stroke depression, depression and dementia associated with Parkinson's disease, mood and cognitive dysfunctions in Alzheimer's disease, and Huntington disease, to name a few. Hence, the sharp distinction between neurology and psychiatry is not always on a biological basis. The dominance of psychoanalytic theory in the first three-quarters of the 20th century has since then been largely replaced by a focus on pharmacology. Despite the shift to a medical model, brain science has not advanced to a point where scientists or clinicians can point to readily discernible pathological lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder.

Neurological enhancement

The emerging field of neurological enhancement highlights the potential of therapies to improve such things as workplace efficacy, attention in school, and overall happiness in personal lives. However, this field has also given rise to questions about neuroethics.

Additional Information

Neurology is a branch of medical science that is concerned with disorders and diseases of the nervous system. The term neurology comes from a combination of two words - "neuron" meaning nerve and "logia" meaning "the study of".

There are around a hundred billion neurons in the brain, capable of generating their own impulses and of receiving and transmitting impulses from neighbouring cells. Neurology involves the study of:

* The central nervous system, the peripheral nervous system and the autonomic nervous system.
* Structural and functional disorders of the nervous system ranging from birth defects through to degenerative diseases such as Parkinson's disease and Alzheimer's disease.

Mankind has been familiar with disorders of the nervous system for centuries. Parkinson's disease, for example, was described as the ‘shaking palsy' in 1817. It was only late into the 20th century, however, that a deficiency in the neurotransmitter dopamine was identified as the cause of Parkinson's disease and its symptoms such as tremors and muscle rigidity. Alzheimer's disease was first described in 1906.

Neurology also involves understanding and interpreting imaging and electrical studies. Examples of the imaging studies used include computed tomography (CT) scans and magnetic resonance imaging (MRI) scans. An electroencephalogram (EEG) can be used to assess the electrical activity of the brain in the diagnosis of conditions such as epilepsy. Neurologists also diagnose infections of the nervous system by analysing the cerebrospinal fluid (CSF), a clear fluid that surrounds the brain and spinal cord.

Neurologists complete an undergraduate degree, spend four years at medical school and complete a one-year internship. This is followed by three years of specialized training and, often, additional training in a particular area of the discipline such as stroke, epilepsy or movement disorders. Neurologists are physicians, but they may also refer their patients to surgeons specializing in the nervous system, called neurosurgeons.

Some examples of the diseases and disorders neurologists may treat include stroke, Alzheimer's disease, Parkinson's disease, multiple sclerosis, amyotrophic lateral sclerosis, migraine, epilepsy, sleep disorders, pain, tremors, brain and spinal cord injury, peripheral nerve disease and brain tumors.

neurology-treatment.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2096 2024-03-20 00:03:36

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2098) Morphology

Gist

Morphology is the study of how things are put together, like the make-up of animals and plants, or the branch of linguistics that studies the structure of words.

In morphology, the word part morph- means "form" and -ology means "the study of." So, those who study how something is made or formed are engaged in morphology. In biology, the morphology of fish might investigate how the gills work as part of the respiratory system.

Summary

Morphology, in biology, is the study of the size, shape, and structure of animals, plants, and microorganisms and of the relationships of their constituent parts. The term refers to the general aspects of biological form and arrangement of the parts of a plant or an animal. The term anatomy also refers to the study of biological structure but usually suggests study of the details of either gross or microscopic structure. In practice, however, the two terms are used almost synonymously.

Typically, morphology is contrasted with physiology, which deals with studies of the functions of organisms and their parts; function and structure are so closely interrelated, however, that their separation is somewhat artificial. Morphologists were originally concerned with the bones, muscles, blood vessels, and nerves comprised by the bodies of animals and the roots, stems, leaves, and flower parts comprised by the bodies of higher plants. The development of the light microscope made possible the examination of some structural details of individual tissues and single cells; the development of the electron microscope and of methods for preparing ultrathin sections of tissues created an entirely new aspect of morphology—that involving the detailed structure of cells. Electron microscopy has gradually revealed the amazing complexity of the many structures of the cells of plants and animals. Other physical techniques have permitted biologists to investigate the morphology of complex molecules such as hemoglobin, the gas-carrying protein of blood, and deoxyribonucleic acid (DNA), of which most genes are composed. Thus, morphology encompasses the study of biological structures over a tremendous range of sizes, from the macroscopic to the molecular.

A thorough knowledge of structure (morphology) is of fundamental importance to the physician, to the veterinarian, and to the plant pathologist, all of whom are concerned with the kinds and causes of the structural changes that result from specific diseases.

Details

Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features.

This includes aspects of the outward appearance (shape, structure, colour, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of the internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of gross structure of an organism or taxon and its component parts.

History

The etymology of the word "morphology" is from the Ancient Greek μορφή (morphḗ), meaning "form", and λόγος (lógos), meaning "word, study, research".

While the concept of form in biology, opposed to function, dates back to Aristotle, the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800).

Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Carl Gegenbaur and Ernst Haeckel.

In 1830, Cuvier and Geoffroy Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major strands of biological thinking at the time – whether animal structure was due to function or evolution.

Divisions of morphology

Comparative morphology is analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.

Functional morphology is the study of the relationship between the structure and function of morphological features.

Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.

Anatomy is a "branch of morphology that deals with the structure of organisms".

Molecular morphology is a rarely used term, usually referring to the superstructure of polymers such as fiber formation or to larger composite assemblies. The term is commonly not applied to the spatial structure of individual molecules.

Gross morphology refers to the collective structures of an organism as a whole as a general description of the form and structure of an organism, taking into account all of its structures without specifying an individual structure.

Morphology and classification

Most taxa differ morphologically from other taxa. Typically, closely related taxa differ much less than more distantly related ones, but there are exceptions to this. Cryptic species are species which look very similar, or perhaps even outwardly identical, but are reproductively isolated. Conversely, sometimes unrelated taxa acquire a similar appearance as a result of convergent evolution or even mimicry. In addition, there can be morphological differences within a species, such as in Apoica flavissima where queens are significantly smaller than workers. A further problem with relying on morphological data is that what may appear, morphologically speaking, to be two distinct species, may in fact be shown by DNA analysis to be a single species. The significance of these differences can be examined through the use of allometric engineering in which one or both species are manipulated to phenocopy the other species.

A step relevant to the evaluation of morphology between traits/features within species includes an assessment of the terms homology and homoplasy. Homology between features indicates that those features have been derived from a common ancestor. Alternatively, homoplasy between features describes those that can resemble each other, but derive independently via parallel or convergent evolution.

3D cell morphology: classification

The invention and development of microscopy have enabled the observation of 3-D cell morphology with both high spatial and temporal resolution. The dynamic processes of cell morphology, which are controlled by a complex system, play an important role in many important biological processes, such as immune and invasive responses.
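
As a simplified illustration of how cell morphology can be quantified for classification, the sketch below computes two generic 2-D shape descriptors. It is not the 3-D microscopy method referred to above, and the measurements are made-up example values:

```python
import math

def shape_descriptors(area, perimeter, major_axis, minor_axis):
    """Generic 2-D descriptors often used to summarize a cell outline:
    circularity is 1.0 for a perfect circle; aspect_ratio is 1.0 for a round cell."""
    return {
        "circularity": 4 * math.pi * area / perimeter ** 2,
        "aspect_ratio": major_axis / minor_axis,
    }

# Hypothetical measurements (micrometres and square micrometres):
print(shape_descriptors(area=120.0, perimeter=42.0, major_axis=15.0, minor_axis=10.0))
# A low circularity and high aspect ratio would suggest an elongated cell.
```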

Additional Information

Morphology is the study of biological organisms’ structure and organization. Whether one is admiring an organism’s structure or studying individual cells under a microscope, morphology holds the key to understanding life’s numerous structures. Morphology is the study of the physical characteristics of living things.

Examining, assessing, and classifying the shapes, sizes, and forms of individual cells as well as tissues, organs and entire organisms are all part of it. By analyzing morphology, scientists can discover more about the relationships and functions of different parts of a living system.

Definition of Morphology

Morphology is a branch of biology that studies the shape and structure of living organisms.

Morphology Meaning

Morphology is a field of biology that examines an organism’s form, structure, and unique structural characteristics. This encompasses the external morphology (also known as eidonomy) which is the shape, structure, colour, pattern, and size of an object, as well as the internal morphology (also known as anatomy) which is the shape and structure of the internal parts such as bones and organs. In contrast, physiology is concerned largely with function. The study of an organism’s or taxon’s gross structure and its constituent elements is known as morphology in the field of life sciences.

Principles of Morphology

Morphology is an important part of taxonomy as it uses different characteristics and features to identify various species. Some of the bases on which organisms are morphologically classified are as follows:

Structural Organisation

A fundamental principle of morphology is that organisms possess a certain structural arrangement. This organisation is hierarchical, with smaller units combining to build larger ones. For example, cells make up tissues, tissues make up organs, and organs together make up a body.

Adaptation and Evolution

Another aspect of morphology is the study of how structures have evolved to adapt to their environments. By examining adaptations, scientists can gain a better understanding of how organisms have evolved specific qualities to survive in their environments. This evolutionary approach highlights the ongoing changes that result in the diversity of life which gives morphology an ever-evolving character.

Function and Form

The interaction between form and function is another key concept. The form and structure of an organ or organism are strongly tied to its function. By examining these interactions, scientists can determine the purpose of each element, providing insight into its complexity.

Category of Morphology

Within the field of morphology, there are multiple levels of study, each concentrating on a different aspect of form and structure. Let’s examine these categories in more detail.

Tissue Morphology

Tissues are groups of cells that work together to provide specific functions. Morphologists carefully study tissues to understand how different cell types cooperate to carry out tasks that are essential to the organism's existence. For instance, muscle tissue contracts to enable movement, while nerve tissue transmits messages for communication.

Organ Morphology

Moving up the organisational hierarchy, we encounter organs, which are composed of various tissues that work well together. Organ morphology is the study of how these tissues come together to form organs such as the liver, heart, or lungs. Organ morphology provides crucial information on the mechanisms that sustain life.

Cellular Morphology

The study of individual cells and their structures is known as cellular morphology. This requires examining the shapes, sizes and organelle arrangements of individual cells. An understanding of cellular morphology is crucial for understanding the building blocks of tissues and organs.

The Whole Organism

At the level of the whole organism, morphologists look at the bigger picture, examining how all the parts function together to create a living, breathing organism. This means describing the characteristics that differentiate each species, such as its external appearance, internal structure and internal function. This is the highest level of study in morphology.

Comparative Morphology

Comparative morphology studies how different species differ from and resemble one another structurally. Scientists can discover common ancestry and evolutionary links between various organisms by comparing morphological features. For example, comparing the wing structure of birds and bats shows how convergent evolution occurs when distinct species evolve similar flying capabilities despite having different genetic foundations.

Developmental Morphology

Developmental morphology is the study of how characteristics develop and change throughout an organism's life cycle. Studying embryonic development can yield important insights into how animals develop from a single cell into a complex multicellular organism. This branch of morphology increases our knowledge of the genetic and environmental factors affecting the variations and adaptations in morphology observed throughout life.

Conclusion

In summary, understanding the complex framework of life’s forms and structures requires an understanding of morphology. Researchers look into everything from entire species to individual cells to try and find answers to the mysteries of adaptation, evolution and the dynamic interplay of form and function. The morphological categorization principles which provide a framework for understanding the hierarchical structure of living systems, serve as the direction for this inquiry.

As we study tissue morphology, organ morphology, cellular morphology and the study of the complete organism, we discover a great deal about nature. Morphology improves scientific knowledge while igniting curiosity.

Morphology-vs-anatomy.jpg?v=1704286873


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2097 2024-03-21 00:02:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2099) JPG

Joint Photographic Experts Group

Gist

The JPG, typically pronounced “jay-peg”, was developed by the Joint Photographic Experts Group (JPEG) in 1992. The group recognized a need to make large photo files smaller so that they could be shared more easily. Some quality is compromised when an image is converted to a JPG.

JPGs and JPEGs are the same file format. JPG and JPEG both stand for Joint Photographic Experts Group and are both raster image file types. The only reason JPG is three characters long as opposed to four is that early versions of Windows required a three-letter extension for file names.

JPEG (short for Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.

Summary

The Joint Photographic Experts Group launched the JPG in 1992 with the aim of easing image sharing without compromising on quality. Digital cameras and image-capturing devices widely use the JPG format to save pictures.

So, what is a jpg file? It is an image file with the extension ‘.jpg’ or ‘.jpeg.’ Images in the form of screenshots, memes, quotes, profile pictures, passport-size pictures, infographics, and several other types can be saved in JPG format. Several social media platforms support the JPG format for image sharing, including Instagram, Facebook, Twitter, Snapchat, and WhatsApp.

JPEG files use lossy compression, meaning some information from the original image is discarded for the reduced file size. This compression, if done at a high level or multiple times, can result in a loss of image quality.

Content creators, photographers, designers, and artists use the JPG format to upload quality images on their social media platforms. The file format requires less loading time, making it one of the most used image formats among users.

JPG images serve multiple other advantages:

* The portability and compatibility allow easy uploading on web pages and apps regardless of the device and software.
* The support for 24-bit color (up to about 16 million colors) provides for high-definition and vibrant images.
* The variable compression enables flexible image size.
* JPEG decompression is fast and straightforward, and simply decoding a file does not degrade it further (quality is only lost when the image is compressed).
* JPG file format pictures can be converted to multiple other formats such as '.jpe', '.jfif', '.jif', and others.

Details

JPEG (short for Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.
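
A minimal sketch of that storage-size/quality tradeoff using the Pillow imaging library (the input file name is a placeholder, and the resulting sizes depend entirely on the image):

```python
# Requires the Pillow library: pip install Pillow
import os
from PIL import Image

img = Image.open("photo.png")   # placeholder input image
for quality in (95, 75, 40):
    out = f"photo_q{quality}.jpg"
    # Higher quality keeps more detail but gives a larger file; lower quality
    # discards more information (lossy compression) for a smaller file.
    img.convert("RGB").save(out, "JPEG", quality=quality)
    print(out, os.path.getsize(out), "bytes")
```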

The Joint Photographic Experts Group created the standard in 1992. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet and later social media. JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.

The MIME media type for JPEG is "image/jpeg," except in older Internet Explorer versions, which provide a MIME type of "image/pjpeg" when uploading JPEG images. JPEG files usually have a filename extension of "jpg" or "jpeg". JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels for an aspect ratio of 1:1. In 2000, the JPEG group introduced a format intended to be a successor, JPEG 2000, but it was unable to replace the original JPEG as the dominant image standard.

JPEG standard

"JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and also other still picture coding standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. Founded in 1986, the group developed the JPEG standard during the late 1980s. The group published the JPEG standard in 1992.

In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. Currently on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1) – titled as Coding of still pictures. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and, in 1994, as ISO/IEC 10918-1.

The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream. The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.

JPEG2.png


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2098 2024-03-22 00:04:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2100) Cardiomyopathy

Gist

Any disorder that affects the heart muscle is called a cardiomyopathy. Cardiomyopathy causes the heart to lose its ability to pump blood well. In some cases, the heart rhythm also becomes disturbed. This leads to arrhythmias (irregular heartbeats).

Summary

Cardiomyopathy refers to problems with your heart muscle that can make it harder for your heart to pump blood. There are many types and causes of cardiomyopathy, and it can affect people of all ages.

Depending on the type of cardiomyopathy that you have, your heart muscle may become thicker, stiffer, or larger than normal. This can weaken your heart and cause an irregular heartbeat, heart failure, or a life-threatening condition called cardiac arrest.

Some people have no symptoms and do not need treatment. Others may have shortness of breath, tiredness, dizziness and fainting, or chest pain as the disease gets worse. Your doctor will ask about your symptoms and do tests to diagnose cardiomyopathy. Echocardiography is the most common test.

Cardiomyopathy can be caused by your genes, other medical conditions, or extreme stress. It can also happen or get worse during pregnancy. Many times, the cause is not known.

Treatments include medicines, procedures, and implanted devices. These treatments might not fix the problem with your heart but can help control your symptoms and prevent the disease from getting worse.

Details:

Overview

Cardiomyopathy is a disease of the heart muscle. It causes the heart to have a harder time pumping blood to the rest of the body, which can lead to symptoms of heart failure. Cardiomyopathy also can lead to some other serious heart conditions.

There are various types of cardiomyopathy. The main types include dilated, hypertrophic and restrictive cardiomyopathy. Treatment includes medicines and sometimes surgically implanted devices and heart surgery. Some people with severe cardiomyopathy need a heart transplant. Treatment depends on the type of cardiomyopathy and how serious it is.

Types

Dilated cardiomyopathy
Hypertrophic cardiomyopathy
Restrictive cardiomyopathy
Arrhythmogenic right ventricular cardiomyopathy (ARVC)
Unclassified cardiomyopathy

Symptoms

Some people with cardiomyopathy don't ever get symptoms. For others, symptoms appear as the condition becomes worse. Cardiomyopathy symptoms can include:

* Shortness of breath or trouble breathing with activity or even at rest.
* Chest pain, especially after physical activity or heavy meals.
* Heartbeats that feel rapid, pounding or fluttering.
* Swelling of the legs, ankles, feet, stomach area and neck veins.
* Bloating of the stomach area due to fluid buildup.
* Cough while lying down.
* Trouble lying flat to sleep.
* Fatigue, even after getting rest.
* Dizziness.
* Fainting.

Symptoms tend to get worse unless they are treated. In some people, the condition becomes worse quickly. In others, it might not become worse for a long time.

When to see a doctor

See your healthcare professional if you have any symptoms of cardiomyopathy. Call your local emergency number if you faint, have trouble breathing or have chest pain that lasts for more than a few minutes.

Some types of cardiomyopathy can be passed down through families. If you have the condition, your healthcare professional might recommend that your family members be checked.

Causes

Often, the cause of the cardiomyopathy isn't known. But some people get it due to another condition. This is known as acquired cardiomyopathy. Other people are born with cardiomyopathy because of a gene passed on from a parent. This is called inherited cardiomyopathy.

Certain health conditions or behaviors that can lead to acquired cardiomyopathy include:

* Long-term high blood pressure.
* Heart tissue damage from a heart attack.
* Long-term rapid heart rate.
* Heart valve problems.
* COVID-19 infection.
* Certain infections, especially those that cause inflammation of the heart.
* Metabolic disorders, such as obesity, thyroid disease or diabetes.
* Lack of essential vitamins or minerals in the diet, such as thiamin (vitamin B-1).
* Pregnancy complications.
* Iron buildup in the heart muscle, called hemochromatosis.
* The growth of tiny lumps of inflammatory cells called granulomas in any part of the body. When this happens in the heart or lungs, it's called sarcoidosis.
* The buildup of irregular proteins in the organs, called amyloidosis.
* Connective tissue disorders.
* Drinking too much alcohol over many years.
* Use of illicit drugs, such as amphetamines or anabolic steroids.
* Use of some chemotherapy medicines and radiation to treat cancer.

Types of cardiomyopathy include:

* Dilated cardiomyopathy. In this type of cardiomyopathy, the heart's chambers thin and stretch, growing larger. The condition tends to start in the heart's main pumping chamber, called the left ventricle. As a result, the heart has trouble pumping blood to the rest of the body.

This type can affect people of all ages. But it happens most often in people younger than 50 and is more likely to affect men. Conditions that can lead to a dilated heart include coronary artery disease and heart attack. But for some people, gene changes play a role in the disease.

* Hypertrophic cardiomyopathy. In this type, the heart muscle becomes thickened. This makes it harder for the heart to work. The condition mostly affects the muscle of the heart's main pumping chamber.

Hypertrophic cardiomyopathy can start at any age. But it tends to be worse if it happens during childhood. Most people with this type of cardiomyopathy have a family history of the disease. Some gene changes have been linked to hypertrophic cardiomyopathy. The thickening is not caused by another heart condition.

* Restrictive cardiomyopathy. In this type, the heart muscle becomes stiff and less flexible. As a result, it can't expand and fill with blood between heartbeats. This least common type of cardiomyopathy can happen at any age. But it most often affects older people.

Restrictive cardiomyopathy can occur for no known reason, also called an idiopathic cause. Or it can be caused by a disease elsewhere in the body that affects the heart, such as amyloidosis.

* Arrhythmogenic right ventricular cardiomyopathy (ARVC). This is a rare type of cardiomyopathy that tends to happen between the ages of 10 and 50. It mainly affects the muscle in the lower right heart chamber, called the right ventricle. The muscle is replaced by fat that can become scarred. This can lead to heart rhythm problems. Sometimes, the condition involves the left ventricle as well. ARVC often is caused by gene changes.

* Unclassified cardiomyopathy. Other types of cardiomyopathy fall into this group.

Risk factors

Many things can raise the risk of cardiomyopathy, including:

* Family history of cardiomyopathy, heart failure and sudden cardiac arrest.
* Long-term high blood pressure.
* Conditions that affect the heart. These include a past heart attack, coronary artery disease or an infection in the heart.
* Obesity, which makes the heart work harder.
* Long-term alcohol misuse.
* Illicit drug use, such as amphetamines and anabolic steroids.
* Treatment with certain chemotherapy medicines and radiation for cancer.

Many diseases also raise the risk of cardiomyopathy, including:

* Diabetes.
* Thyroid disease.
* Storage of excess iron in the body, called hemochromatosis.
* Buildup of a certain protein in organs, called amyloidosis.
* The growth of small patches of inflamed tissue in organs, called sarcoidosis.
* Connective tissue disorders.

Complications


Cardiomyopathy can lead to serious medical conditions, including:

* Heart failure. The heart can't pump enough blood to meet the body's needs. Without treatment, heart failure can be life-threatening.
* Blood clots. Because the heart can't pump well, blood clots might form in the heart. If clots enter the bloodstream, they can block the blood flow to other organs, including the heart and brain.
* Heart valve problems. Because cardiomyopathy can cause the heart to become larger, the heart valves might not close properly. This can cause blood to flow backward in the valve.
* Cardiac arrest and sudden death. Cardiomyopathy can trigger irregular heart rhythms that cause fainting. Sometimes, irregular heartbeats can cause sudden death if the heart stops beating effectively.

Prevention

Inherited types of cardiomyopathy can't be prevented. Let your healthcare professional know if you have a family history of the condition.

You can help lower the risk of acquired types of cardiomyopathy, which are caused by other conditions. Take steps to lead a heart-healthy lifestyle, including:

* Stay away from alcohol or illegal drugs.
* Control any other conditions you have, such as high blood pressure, high cholesterol or diabetes.
* Eat a healthy diet.
* Get regular exercise.
* Get enough sleep.
* Lower your stress.

These healthy habits also can help people with inherited cardiomyopathy control their symptoms.

Additional Information

Cardiomyopathy is a group of primary diseases of the heart muscle. Early on there may be few or no symptoms. As the disease worsens, shortness of breath, feeling tired, and swelling of the legs may occur, due to the onset of heart failure. An irregular heartbeat and fainting may occur. Those affected are at an increased risk of sudden cardiac death.

As of 2013, cardiomyopathies are defined as "disorders characterized by morphologically and functionally abnormal myocardium in the absence of any other disease that is sufficient, by itself, to cause the observed phenotype." Types of cardiomyopathy include hypertrophic cardiomyopathy, dilated cardiomyopathy, restrictive cardiomyopathy, arrhythmogenic right ventricular dysplasia, and Takotsubo cardiomyopathy (broken heart syndrome). In hypertrophic cardiomyopathy the heart muscle enlarges and thickens. In dilated cardiomyopathy the ventricles enlarge and weaken. In restrictive cardiomyopathy the ventricle stiffens.

In many cases, the cause cannot be determined. Hypertrophic cardiomyopathy is usually inherited, whereas dilated cardiomyopathy is inherited in about one third of cases. Dilated cardiomyopathy may also result from alcohol, heavy metals, coronary artery disease, cocaine use, and viral infections. Restrictive cardiomyopathy may be caused by amyloidosis, hemochromatosis, and some cancer treatments. Broken heart syndrome is caused by extreme emotional or physical stress.

Treatment depends on the type of cardiomyopathy and the severity of symptoms. Treatments may include lifestyle changes, medications, or surgery. Surgery may include a ventricular assist device or heart transplant. In 2015, cardiomyopathy and myocarditis affected 2.5 million people. Hypertrophic cardiomyopathy affects about 1 in 500 people, while dilated cardiomyopathy affects 1 in 2,500. Together they resulted in 354,000 deaths in 2015, up from 294,000 in 1990. Arrhythmogenic right ventricular dysplasia is more common in young people.

Signs and symptoms

Signs and symptoms of cardiomyopathy include:

* Shortness of breath or trouble breathing, especially with physical exertion
* Fatigue
* Swelling in the ankles, feet, legs, abdomen and veins in the neck
* Dizziness
* Lightheadedness
* Fainting during physical activity
* Arrhythmias (abnormal heartbeats)
* Chest pain, especially after physical exertion or heavy meals
* Heart murmurs (unusual sounds associated with heartbeats)

Causes

Cardiomyopathies can be of genetic (familial) or non-genetic (acquired) origin. Genetic cardiomyopathies are usually caused by sarcomere or cytoskeletal diseases, neuromuscular disorders, inborn errors of metabolism, or malformation syndromes, and sometimes the cause remains unidentified. Non-genetic cardiomyopathies can have identifiable causes such as viral infections and myocarditis.

Cardiomyopathies are either confined to the heart or are part of a generalized systemic disorder, both often leading to cardiovascular death or progressive heart failure-related disability. Other diseases that cause heart muscle dysfunction are excluded, such as coronary artery disease, hypertension, or abnormalities of the heart valves. Often the underlying cause remains unknown, but in many cases it is identifiable. Alcoholism, for example, has been identified as a cause of dilated cardiomyopathy, as have drug toxicity and certain infections (including hepatitis C). Untreated celiac disease can cause cardiomyopathies, which can completely reverse with a timely diagnosis. In addition to these acquired causes, molecular biology and genetics have led to the recognition of various genetic causes.

A more clinical categorization of cardiomyopathy as 'hypertrophied', 'dilated', or 'restrictive', has become difficult to maintain because some of the conditions could fulfill more than one of those three categories at any particular stage of their development.

The current American Heart Association (AHA) definition divides cardiomyopathies into primary, which affect the heart alone, and secondary, which are the result of illness affecting other parts of the body. These categories are further broken down into subgroups which incorporate new genetic and molecular biology knowledge.

Mechanism

The pathophysiology of cardiomyopathies is better understood at the cellular level with advances in molecular techniques. Mutant proteins can disturb cardiac function in the contractile apparatus (or mechanosensitive complexes). Cardiomyocyte alterations and their persistent responses at the cellular level cause changes that are correlated with sudden cardiac death and other cardiac problems.

Cardiomyopathies vary from person to person, and different factors cause them in adults and in children. For example, dilated cardiomyopathy in adults is associated with ischemic heart disease, hypertension, valvular disease, and genetic factors, while in children neuromuscular diseases such as Becker muscular dystrophy, an X-linked genetic disorder, are directly linked to cardiomyopathy.

Diagnosis

Among the diagnostic procedures done to determine a cardiomyopathy are:

* Physical exam
* Family history
* Blood test
* ECG
* Echocardiogram
* Stress test
* Genetic testing.

GettyImages-974257122-84c7a52082244f08a59c635f64d4eb34.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2099 2024-03-23 00:05:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2101) Graphics Interchange Format

Gist

A GIF is an image file format that can display brief, looping animations. GIFs are mainly used for simple, repeating animations and are a common way of sharing short clips on the web or adding movement to graphics and text. A GIF can be produced from a sequence of still images or from video footage and is frequently used to share amusing or entertaining content on social media platforms and across the web.

GIF stands for "Graphics Interchange Format."

Summary

GIF is a digital file format devised in 1987 by the Internet service provider CompuServe as a means of reducing the size of images and short animations. Because GIF is a lossless data compression format, meaning that no information is lost in the compression, it quickly became a popular format for transmitting and storing graphic files.

At the time of its creation, GIF's support of 256 different colours was considered vast, as many computer monitors had the same limit (in 8-bit systems, 2⁸ colours). The method used to keep file size to a minimum is a compression algorithm commonly referred to as LZW, named after its inventors, Abraham Lempel and Jacob Ziv of Israel and Terry Welch of the United States. LZW was the source of a controversy started by the American Unisys Corporation in 1994, when it was revealed that it owned a patent for LZW and was belatedly seeking royalties from several users. Although the relevant patents expired by 2004, the controversy resulted in the creation of the portable network graphics (PNG) format, an alternative to GIF that offered a wider array of colours and different compression methods. JPEG (Joint Photographic Experts Group), a digital file format that supports millions of different colour options, is often used to transmit better-quality images, such as digital photographs, at the cost of greater size. Despite the competition, GIF remains popular.
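The LZW scheme can be sketched in a few lines of Python. The following is a simplified illustration only (GIF's real encoder additionally uses variable-width codes and special clear and end-of-information codes), showing how a dictionary of previously seen byte sequences grows as the data is scanned:

def lzw_compress(data: bytes) -> list:
    # Start with one dictionary entry per possible byte value.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            # Keep extending the longest match already known.
            current = candidate
        else:
            # Emit the code for the longest known match and learn the new sequence.
            output.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

# Repetitive input compresses well: 24 bytes in, fewer codes out.
print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))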

Details

GIF stands for Graphics Interchange Format, a type of digital image file that is often used for short, looping animations. GIFs are frequently used to create memes and other types of social content.

What is a GIF?

A GIF file contains a series of static images that are displayed in rapid succession, creating the illusion of a short animation. Unlike traditional video files, GIFs have a limited color palette and don't support sound, but they are small in size and easy to share online.
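As a rough illustration of that frame-sequence structure, here is a minimal sketch (assuming the Pillow imaging library; the frame colours, size and file name are arbitrary) that writes a small looping GIF:

from PIL import Image

# Eight tiny solid-colour frames; Pillow converts them to palette mode when saving as GIF.
frames = [Image.new("RGB", (64, 64), color=(i * 30, 0, 120)) for i in range(8)]

frames[0].save(
    "loop.gif",
    save_all=True,             # write every frame, not just the first
    append_images=frames[1:],  # the remaining frames of the animation
    duration=100,              # milliseconds per frame
    loop=0,                    # 0 means loop forever
)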

Where are GIFs used?

GIFs are used online, mostly on social media, to express emotions, reactions, or just to add humor to a message. Unlike traditional video formats, GIFs only support 256 colors, which makes them smaller in file size and ideal for use in web-based applications, website pages, and online messaging.
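That 256-colour limit corresponds to GIF's indexed palette. Here is a minimal sketch of the reduction step, again assuming Pillow and a hypothetical full-colour photo.jpg as input:

from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

# Map the image onto a palette of at most 256 colours ("P" mode).
palette_img = img.quantize(colors=256)

# GIF then stores the palette plus one index per pixel.
palette_img.save("photo.gif")
print(palette_img.mode, len(palette_img.getpalette()) // 3, "palette entries")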

Where do GIFs come from?

GIFs can be created by taking a short clip from a TV show, movie, or other media, and looping it. People can add or create GIFs to make a point or add a funny spin to a current event or cultural reference. For example, you might see a GIF of a politician making a ridiculous statement, accompanied by a caption or meme.

Why are GIFs so popular?

GIFs have become an integral part of online culture and social commentary. They’re used for visual communication, allowing users to express themselves in a quick and playful way without using words. For example, using a GIF of a cat waving its paw to say "goodbye" or a GIF of a person rolling their eyes to show sarcasm.

In addition to their emotional and communicative uses, GIFs are also very useful for technical design purposes. GIFs' small file size and seamless looping make them really easy to use in web design, video game development, and other applications where small, animated images are needed.

graphics-interchange-format-graphics-interchange-format-blue-179430879.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2100 2024-03-24 00:13:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,422

Re: Miscellany

2102) Portable Document Format

Gist

Adobe PDF files—short for portable document format files—are one of the most commonly used file types today. If you've ever downloaded a printable form or document from the Web, such as an IRS tax form, there's a good chance it was a PDF file. Whenever you see a file that ends with .pdf, that means it's a PDF file.

Why use PDF files?

Let's say you create a newsletter in Microsoft Word and share it as a .docx file, which is the default file format for Word documents. Unless everyone has Microsoft Word installed on their computers, there's no guarantee that they would be able to open and view the newsletter. And because Word documents are meant to be edited, there's a chance that some of the formatting and text in your document may be shifted around.

By contrast, PDF files are primarily meant for viewing, not editing. One reason they're so popular is that PDFs can preserve document formatting, which makes them more shareable and helps them to look the same on any device. Sharing the newsletter as a PDF file would help ensure everyone is able to view it as you intended.

Summary

Portable Document Format (PDF) is a file format that has captured all the elements of a printed document as an electronic image that users can view, navigate, print or forward to someone else.

However, PDF files are more than images of documents. Files can embed type fonts so that they're available at any viewing location. They can also include interactive elements, such as buttons for forms entry and for triggering sound or video.

How are PDFs created?

PDF files are created using tools such as Adobe Acrobat or other software that can save documents in PDF.
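Programmatic creation is also common; the following is a minimal sketch assuming the reportlab library (the file name and text are arbitrary examples):

from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

c = canvas.Canvas("hello.pdf", pagesize=letter)
c.setFont("Helvetica", 14)
c.drawString(72, 720, "Hello, PDF")  # about one inch from the left edge, near the top
c.showPage()                         # finish the current page
c.save()                             # write hello.pdf to disk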

To view saved PDF files, users need either the full Acrobat program, which is not free, or the free Adobe Acrobat Reader. PDF files can also be viewed in most web browsers.

A PDF file contains one or more page images, each of which users can zoom in on or out from. They can also scroll backward and forward through the pages of a PDF.

What are the use cases for PDF documents?

Some situations in which PDF files are desirable are the following:

* When users want to preserve the original formatting of a document. For example, if they created a resume using a word processing program and saved it as a PDF, the recipient sees the same fonts and layout that the sender used.
* When users want to send a document electronically but be sure that the recipient sees it exactly as the sender intended it to look.
* When a user wants to create a document that cannot be easily edited. For example, if they wanted to send someone a contract but didn't want them to change it, the creator could save it as a PDF.

What are the benefits of using PDF?

PDF files are useful for documents such as magazine articles, product brochures or flyers, in which the creator wants to preserve the original graphic appearance online.

PDFs are also useful for documents that are downloaded and printed, such as resumes, contracts and application forms.

PDFs also support embedding digital signatures in documents for authenticating the integrity of a digital document.

What are the disadvantages of PDFs?

The main disadvantage of PDFs is that they are not easily editable. If an individual needs to change a document after it has been saved as a PDF, they must return to the original program used to create it, make the changes there, and then save a new PDF.

Another disadvantage is that some older versions of software cannot read PDFs. In order to open a PDF, recipients must have a PDF reader installed on their computer.

Are there security risks associated with PDFs?

PDFs can contain viruses, so it's important to be sure that recipients trust the source of any PDF files they download. In addition, PDFs can be password-protected so that anyone who tries to open the file needs a password in order to access it.

Can PDFs be converted to other formats?

PDF files can be converted to other file formats, such as Microsoft Word, Excel or image formats, such as JPG. However, the format of the original document may not be perfectly preserved in the conversion process.
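One such conversion path can be sketched as follows, assuming the PyMuPDF package (imported as fitz) and a hypothetical input.pdf; each page is rendered to a PNG image, so the layout survives as pixels but the text is no longer selectable:

import fitz  # PyMuPDF

doc = fitz.open("input.pdf")
for i, page in enumerate(doc):
    # Render at 2x scale (roughly 144 dpi) and save one image per page.
    pix = page.get_pixmap(matrix=fitz.Matrix(2, 2))
    pix.save(f"page_{i + 1}.png")
doc.close()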

Details

Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it. PDF has its roots in "The Camelot Project" initiated by Adobe co-founder John Warnock in 1991. PDF was standardized as ISO 32000 in 2008. The latest edition, ISO 32000-2:2020, was published in December 2020.

PDF files may contain a variety of content besides flat text and graphics including logical structuring elements, interactive elements such as annotations and form-fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats. The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features.

History

Adobe Systems made the PDF specification available free of charge in 1993. In the early years PDF was popular mainly in desktop publishing workflows, and competed with several other formats, including DjVu, Envoy, Common Ground Digital Paper, Farallon Replica and even Adobe's own PostScript format.

PDF was a proprietary format controlled by Adobe until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008, at which time control of the specification passed to an ISO Committee of volunteer industry experts. In 2008, Adobe published a Public Patent License to ISO 32000-1 granting royalty-free rights for all patents owned by Adobe necessary to make, use, sell, and distribute PDF-compliant implementations.

PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification. These proprietary technologies are not standardized, and their specification is published only on Adobe's website. Many of them are not supported by popular third-party implementations of PDF.

ISO published ISO 32000-2 in 2017, available for purchase, replacing the free specification provided by Adobe. In December 2020, the second edition of PDF 2.0, ISO 32000-2:2020, was published, with clarifications, corrections, and critical updates to normative references (ISO 32000-2 does not include any proprietary technologies as normative references). In April 2023 the PDF Association made ISO 32000-2 available for download free of charge.

Technical details

A PDF file is often a combination of vector graphics, text, and bitmap graphics. The basic types of content in a PDF are:

* Typeset text stored as content streams (i.e., not encoded in plain text);
* Vector graphics for illustrations and designs that consist of shapes and lines;
* Raster graphics for photographs and other types of images; and
* Other multimedia objects.

In later PDF revisions, a PDF document can also support links (inside document or web page), forms, JavaScript (initially available as a plugin for Acrobat 3.0), or any other types of embedded contents that can be handled using plug-ins.

PDF combines three technologies:

* A subset of the PostScript page description programming language, recast in declarative form, for generating the layout and graphics.
* A font-embedding/replacement system to allow fonts to travel with the documents.
* A structured storage system to bundle these elements and any associated content into a single file, with data compression where appropriate.

PostScript language

PostScript is a page description language run in an interpreter to generate an image. It can handle graphics and has standard features of programming languages such as branching and looping. PDF is a subset of PostScript, simplified to remove such flow control features, while graphics commands remain.

PostScript was originally designed for a drastically different use case: transmission of one-way linear print jobs in which the PostScript interpreter would collect up commands until it encountered the showpage command, then evaluate the commands to render a page to a printing device. PostScript was not intended for long-term storage and interactive rendering of electronic documents, so there was no need to scroll back to previous pages. Thus, to accurately render any given page, it was necessary to evaluate all the commands before that point.

Traditionally, the PostScript-like PDF code is generated from a source PostScript file (that is, an executable program), with standard compiler techniques like loop unrolling, inlining and removing unused branches, resulting in code that is purely declarative and static. This is then packaged into a container format, together with all necessary dependencies for correct rendering (external files, graphics, or fonts to which the document refers), and compressed.

As a document format, PDF has several advantages over PostScript:

* PDF contains only static declarative PostScript code that can be processed as data and does not require a full program interpreter or compiler. This avoids the complexity and security risks that come with running a full language interpreter.
* Like Display PostScript, since version 1.4 PDF supports transparent graphics, while standard PostScript does not.
* PDF enforces the rule that the code for a page cannot affect any other pages. That rule is strongly recommended for PostScript code too, but has to be implemented explicitly (see, e.g., the Document Structuring Conventions), since PostScript is a full programming language that allows greater flexibility and is not limited to the concepts of pages and documents.
* All data required for rendering is included in the file itself, improving portability.

Its disadvantages are:

* Loss of flexibility, and limitation to a single use case.
* A (sometimes much) larger file size, although for highly repetitive content this is mitigated by compression. (Overall, compared to e.g. a bitmap image, it is still orders of magnitude smaller.)

PDF since v1.6 supports embedding of interactive 3D documents: 3D drawings can be embedded using U3D or PRC and various other data formats.

Additional Information

PDF is an abbreviation that stands for Portable Document Format. It's a versatile file format created by Adobe that gives people an easy, reliable way to present and exchange documents - regardless of the software, hardware or operating systems being used by anyone who views the document.

The PDF format is now an open standard, maintained by the International Organisation for Standardisation (ISO). PDF docs can contain links and buttons, form fields, audio, video and business logic. They can be signed electronically and can easily be viewed on Windows or MacOS using the free Adobe Acrobat Reader software.

In 1991, Adobe co-founder Dr John Warnock launched the paper-to-digital revolution with an idea he called The Camelot Project. The goal was to enable anyone to capture documents from any application, send electronic versions of these documents anywhere, and view and print them on any machine. By 1992, Camelot had developed into PDF. Today, it is the file format trusted by businesses around the world.

Warnock’s vision continues to shape the way we work. When you create an Adobe PDF from documents or images, it looks just the way you intended it to. While many PDFs are simply pictures of pages, Adobe PDFs preserve all the data in the original file format—even when text, graphics, spreadsheets and more are combined in a single file.

You can be confident your PDF file meets ISO 32000 standards for electronic document exchange, including special-purpose standards such as PDF/A for archiving, PDF/E for engineering and PDF/X for printing. You can also create PDFs to meet a range of accessibility standards that make content more usable by people with disabilities.

When you need to electronically sign a PDF, it's easy using the Adobe Acrobat Reader mobile app or the Acrobat Sign mobile app. Access your PDFs from any web browser or operating system. For managing legally-binding electronic or digital signature processes, try Adobe Acrobat or Acrobat Sign.

Pdf_by_mimooh.svg-56a9d1943df78cf772aaca04.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline
