Gist
The muscular system is composed of specialized cells called muscle fibers. Their predominant function is contractility. Muscles, attached to bones or internal organs and blood vessels, are responsible for movement. Nearly all movement in the body is the result of muscle contraction.
Summary
The muscular system is composed of specialized cells called muscle fibers. Their predominant function is contractility. Muscles, attached to bones or internal organs and blood vessels, are responsible for movement. Nearly all movement in the body is the result of muscle contraction. Exceptions to this are the action of cilia, the flagellum on sperm cells, and amoeboid movement of some white blood cells.
The integrated action of joints, bones, and skeletal muscles produces obvious movements such as walking and running. Skeletal muscles also produce more subtle movements that result in various facial expressions, eye movements, and respiration.
In addition to movement, muscle contraction also fulfills some other important functions in the body, such as posture, joint stability, and heat production. Posture, such as sitting and standing, is maintained as a result of muscle contraction. The skeletal muscles are continually making fine adjustments that hold the body in stationary positions. The tendons of many muscles extend over joints and in this way contribute to joint stability. This is particularly evident in the knee and shoulder joints, where muscle tendons are a major factor in stabilizing the joint. Heat production, to maintain body temperature, is an important by-product of muscle metabolism. Nearly 85 percent of the heat produced in the body is the result of muscle contraction.
Details
Muscles play a part in every function of the body. The muscular system is made up of over 600 muscles. These include three muscle types: smooth, skeletal, and cardiac.
Only skeletal muscles are voluntary, meaning you can control them consciously. Smooth and cardiac muscles act involuntarily.
Each muscle type in the muscular system has a specific purpose. You’re able to walk because of your skeletal muscles. You can digest because of your smooth muscles. And your heart beats because of your cardiac muscle.
The different muscle types also work together to make these functions possible. For instance, when you run (skeletal muscles), your heart pumps harder (cardiac muscle) and you breathe more heavily (smooth muscles).
Keep reading to learn more about your muscular system’s functions.
1. Mobility
Your skeletal muscles are responsible for the movements you make. Skeletal muscles are attached to your bones and partly controlled by the central nervous system (CNS).
You use your skeletal muscles whenever you move. Fast-twitch skeletal muscles cause short bursts of speed and strength. Slow-twitch muscles function better for longer movements.
2. Circulation
The involuntary cardiac and smooth muscles help your heart beat and blood flow through your body by producing electrical impulses. The cardiac muscle (myocardium) is found in the walls of the heart. It’s controlled by the autonomic nervous system, which regulates most involuntary bodily functions.
Like smooth muscle cells, myocardial cells have a single, centrally located nucleus.
The walls of your blood vessels are made up of smooth muscle, which is also controlled by the autonomic nervous system.
3. Respiration
Your diaphragm is the main muscle at work during quiet breathing. Heavier breathing, like what you experience during exercise, may require accessory muscles to help the diaphragm. These can include the abdominal, neck, and back muscles.
4. Digestion
Digestion is controlled by smooth muscles found in your gastrointestinal tract. This comprises the:
* mouth
* esophagus
* stomach
* small and large intestines
* rectum
* anus, the last part of the digestive tract
The digestive system also includes the liver, pancreas, and gallbladder.
Your smooth muscles contract and relax as food passes through your body during digestion. These muscles also help push food out of your body through defecation, or vomiting when you’re sick.
5. Urination
Smooth and skeletal muscles make up the urinary system. The urinary system includes the:
* kidneys
* bladder
* ureters
* urethra
* male or female reproductive organs
* prostate
All the muscles in your urinary system work together so you can urinate. The dome of your bladder is made of smooth muscles. You can release urine when those muscles tighten. When they relax, you can hold in your urine.
6. Childbirth
Smooth muscles are found in the uterus. During pregnancy, these muscles grow and stretch as the baby grows. When a woman goes into labor, the smooth muscles of the uterus contract and relax to help push the baby through the birth canal.
7. Vision
Each of your eye sockets contains six skeletal muscles that help you move your eye, and the internal muscles of your eyes are made up of smooth muscles. All these muscles work together to help you see. If you damage these muscles, you may impair your vision.
8. Stability
The skeletal muscles in your core help protect your spine and help with stability. Your core muscle group includes the abdominal, back, and pelvic muscles. This group is also known as the trunk. The stronger your core, the better you can stabilize your body. The muscles in your legs also help steady you.
9. Posture
Your skeletal muscles also control posture. Flexibility and strength are keys to maintaining proper posture. Stiff neck muscles, weak back muscles, or tight hip muscles can throw off your alignment. Poor posture can affect parts of your body and lead to joint pain and weaker muscles. These parts include the:
* shoulders
* spine
* hips
* knees
The bottom line
The muscular system is a complex network of muscles vital to the human body. Muscles play a part in everything you do. They control your heartbeat and breathing, help digestion, and allow movement.
Muscles, like the rest of your body, thrive when you exercise and eat healthily. But too much exercise can cause sore muscles. Muscle pain can also be a sign that something more serious is affecting your body.
The following conditions can affect your muscular system:
* myopathy (muscle disease)
* muscular dystrophy
* multiple sclerosis (MS)
* Parkinson’s disease
* fibromyalgia
Talk to your doctor if you have one of these conditions. They can help you find ways to manage your health. It’s important to take care of your muscles so they stay healthy and strong.
Additional Information
The muscular system is an organ system consisting of skeletal, smooth, and cardiac muscle. It permits movement of the body, maintains posture, and circulates blood throughout the body. The muscular systems in vertebrates are controlled through the nervous system although some muscles (such as the cardiac muscle) can be completely autonomous. Together with the skeletal system in the human, it forms the musculoskeletal system, which is responsible for the movement of the body.
Types
There are three distinct types of muscle: skeletal muscle, cardiac or heart muscle, and smooth (non-striated) muscle. Muscles provide strength, balance, posture, movement, and heat for the body to keep warm.
There are approximately 640 muscles in an adult male human body. A kind of elastic tissue makes up each muscle, which consists of thousands, or tens of thousands, of small muscle fibers. Each fiber comprises many tiny strands called fibrils; impulses from nerve cells control the contraction of each muscle fiber.
Skeletal
Skeletal muscle is a type of striated muscle, composed of muscle cells, called muscle fibers, which are in turn composed of myofibrils. Myofibrils are composed of sarcomeres, the basic building blocks of striated muscle tissue. Upon stimulation by an action potential, skeletal muscles perform a coordinated contraction by shortening each sarcomere. The best proposed model for understanding contraction is the sliding filament model of muscle contraction. Within the sarcomere, actin and myosin fibers overlap in a contractile motion towards each other. Myosin filaments have club-shaped myosin heads that project toward the actin filaments, and provide attachment points on binding sites for the actin filaments. The myosin heads move in a coordinated style; they swivel toward the center of the sarcomere, detach and then reattach to the nearest active site of the actin filament. This is called a ratchet type drive system.
This process consumes large amounts of adenosine triphosphate (ATP), the energy source of the cell. ATP binds to the cross-bridges between myosin heads and actin filaments. The release of energy powers the swiveling of the myosin head. When ATP is used, it becomes adenosine diphosphate (ADP), and since muscles store little ATP, they must continuously replace the discharged ADP with ATP. Muscle tissue also contains a stored supply of a fast-acting recharge chemical, creatine phosphate, which when necessary can assist with the rapid regeneration of ADP into ATP.
Calcium ions are required for each cycle of the sarcomere. Calcium is released from the sarcoplasmic reticulum into the sarcomere when a muscle is stimulated to contract. This calcium uncovers the actin-binding sites. When the muscle no longer needs to contract, the calcium ions are pumped from the sarcomere and back into storage in the sarcoplasmic reticulum.
There are approximately 639 skeletal muscles in the human body.
Cardiac
Heart muscle is striated muscle but is distinct from skeletal muscle because the muscle fibers are laterally connected. Furthermore, just as with smooth muscles, their movement is involuntary. Heart muscle is controlled by the sinoatrial (sinus) node, which is influenced by the autonomic nervous system.
Smooth
Smooth muscle contraction is regulated by the autonomic nervous system, hormones, and local chemical signals, allowing for gradual and sustained contractions. This type of muscle tissue is also capable of adapting to different levels of stretch and tension, which is important for maintaining proper blood flow and the movement of materials through the digestive system.
Physiology
Contraction
Neuromuscular junctions are the focal point where a motor neuron attaches to a muscle. Acetylcholine (a neurotransmitter used in skeletal muscle contraction) is released from the axon terminal of the nerve cell when an action potential reaches the microscopic junction called a synapse. A group of chemical messengers crosses the synapse and stimulates the formation of electrical changes, which are produced in the muscle cell when the acetylcholine binds to receptors on its surface. Calcium is released from its storage area in the cell's sarcoplasmic reticulum. An impulse from a nerve cell causes calcium release and brings about a single, short muscle contraction called a muscle twitch. If there is a problem at the neuromuscular junction, a very prolonged contraction may occur, such as the muscle contractions that result from tetanus. Also, a loss of function at the junction can produce paralysis.
Skeletal muscles are organized into hundreds of motor units, each of which involves a motor neuron, attached by a series of thin finger-like structures called axon terminals. These attach to and control discrete bundles of muscle fibers. A coordinated and fine-tuned response to a specific circumstance will involve controlling the precise number of motor units used. While individual muscle units contract as a unit, the entire muscle can contract on a predetermined basis due to the structure of the motor unit. Motor unit coordination, balance, and control frequently come under the direction of the cerebellum of the brain. This allows for complex muscular coordination with little conscious effort, such as when one drives a car without thinking about the process.
Tendon
A tendon is a piece of connective tissue that connects a muscle to a bone.[8] When a muscle contracts, it pulls against the skeleton to create movement. A tendon connects this muscle to a bone, making this function possible.
Aerobic and anaerobic muscle activity
At rest, the body produces the majority of its ATP aerobically in the mitochondria without producing lactic acid or other fatiguing byproducts. During exercise, the method of ATP production varies depending on the fitness of the individual as well as the duration and intensity of exercise. At lower activity levels, when exercise continues for a long duration (several minutes or longer), energy is produced aerobically by combining oxygen with carbohydrates and fats stored in the body.
During activity that is higher in intensity, with possible duration decreasing as intensity increases, ATP production can switch to anaerobic pathways, such as the use of creatine phosphate and the phosphagen system or anaerobic glycolysis. Aerobic ATP production is biochemically much slower and can only be used for long-duration, low-intensity exercise, but it produces no fatiguing waste products that cannot be removed immediately from the sarcomere and the body, and it results in a much greater number of ATP molecules per fat or carbohydrate molecule. Aerobic training allows the oxygen delivery system to be more efficient, allowing aerobic metabolism to begin more quickly. Anaerobic ATP production produces ATP much faster and allows near-maximal intensity exercise, but also produces significant amounts of lactic acid which render high-intensity exercise unsustainable for more than several minutes. The phosphagen system is also anaerobic. It allows for the highest levels of exercise intensity, but intramuscular stores of phosphocreatine are very limited and can only provide energy for exercises lasting up to ten seconds. Recovery is very quick, with full creatine stores regenerated within five minutes.
Clinical significance
Multiple diseases can affect the muscular system.
Muscular Dystrophy
Muscular dystrophy is a group of disorders associated with progressive muscle weakness and loss of muscle mass. These disorders are caused by mutations in a person’s genes. The disease affects between 19.8 and 25.1 per 100,000 person-years globally.
There are more than 30 types of muscular dystrophy. Depending on the type, muscular dystrophy can affect the patient's heart and lungs, and/or their ability to move, walk, and perform daily activities. The most common types include:
* Duchenne muscular dystrophy (DMD) and Becker muscular dystrophy (BMD)
* Myotonic dystrophy
* Limb-Girdle (LGMD)
* Facioscapulohumeral dystrophy (FSHD)
* Congenital dystrophy (CMD)
* Distal (DD)
* Oculopharyngeal dystrophy (OPMD)
* Emery-Dreifuss (EDMD)
Gist
Anatomy is the study of the structure of a plant or animal. Human anatomy includes the cells, tissues, and organs that make up the body and how they are organized in the body.
Summary
Anatomy is the study of the structure of living things – animal, human, plant – from microscopic cells and molecules to whole organisms as large as whales.
Anatomy Is Everywhere
* Anthropologists study cultures around the world.
* Paleontologists use cutting-edge technology to discover the ancient world.
* Archeologists uncover our history one artifact at a time.
* Veterinarians help humans care for pets and farm animals.
* Zoologists ensure captive animals – from backyard critters to endangered species – receive optimal care.
* Medical students learn anatomy before becoming nurses, doctors, and dentists.
* Inventors create exoskeletons to give people mobility.
* Biomedical engineers create better pacemakers and prosthetics.
* Physical therapists find remedies for their patients’ challenges.
Who Are Anatomists?
"Anatomist" broadly describes someone who studies, researches, or teaches in the anatomical sciences, including the study of extinct species, such as dinosaurs and Neanderthals. Anatomists help us understand how things are formed and constructed, which has enormous impact. However, not everyone who studies, applies, or researches anatomy calls themselves an 'anatomist.'
WHAT ANATOMISTS DO
Anatomists work with students and researchers to better understand humans and animals, in order to teach the next generation of doctors, nurses, physical therapists, dentists, and veterinarians. Their research into cell and molecular anatomy means that conditions such as cleft palate, congenital heart defects, neurological disorders, and cancer biology are better understood – and can be treated.
WHERE ANATOMISTS WORK
Anatomists work in universities, research institutions, and private industry. They teach anatomy in medical, dental, and veterinary schools, as well as at large undergraduate universities. They run their own research labs at organizations and universities, and they work together in teams of scientists, postdoctoral researchers, and students to uncover discoveries that lead to better understanding of our biology.
Details
Anatomy (from Ancient Greek anatomḗ, 'dissection') is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology.
Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē, 'dissection' (from anatémnō, 'I cut up, cut open', from aná, 'up', and témnō, 'I cut'), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
Animal tissues
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Eumetazoans do not include the sponges, which have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more in number and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissue
Connective tissues are fibrous and made up of cells scattered among non-living material called the extracellular matrix. Connective tissue gives shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.[16]
Epithelium
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, which lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle tissue
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types; smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
Vertebrate anatomy
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
Mammal anatomy
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a nipple and completes its development.
Human anatomy
In humans, dexterous hand movements and increased brain size are likely to have evolved simultaneously.
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Additional Information
Anatomy is a field in the biological sciences concerned with the identification and description of the body structures of living things. Gross anatomy involves the study of major body structures by dissection and observation and in its narrowest sense is concerned only with the human body. “Gross anatomy” customarily refers to the study of those body structures large enough to be examined without the help of magnifying devices, while microscopic anatomy is concerned with the study of structural units small enough to be seen only with a light microscope. Dissection is basic to all anatomical research. The earliest record of its use was made by the Greeks, and Theophrastus called dissection “anatomy,” from ana temnein, meaning “to cut up.”
Comparative anatomy, the other major subdivision of the field, compares similar body structures in different species of animals in order to understand the adaptive changes they have undergone in the course of evolution.
Gross anatomy
This ancient discipline reached its culmination between 1500 and 1850, by which time its subject matter was firmly established. None of the world’s oldest civilizations dissected a human body, which most people regarded with superstitious awe and associated with the spirit of the departed soul. Beliefs in life after death and a disquieting uncertainty concerning the possibility of bodily resurrection further inhibited systematic study. Nevertheless, knowledge of the body was acquired by treating wounds, aiding in childbirth, and setting broken limbs. The field remained speculative rather than descriptive, though, until the achievements of the Alexandrian medical school and its foremost figure, Herophilus (flourished 300 BCE), who dissected human cadavers and thus gave anatomy a considerable factual basis for the first time. Herophilus made many important discoveries and was followed by his younger contemporary Erasistratus, who is sometimes regarded as the founder of physiology. In the 2nd century CE, Greek physician Galen assembled and arranged all the discoveries of the Greek anatomists, including with them his own concepts of physiology and his discoveries in experimental medicine. The many books Galen wrote became the unquestioned authority for anatomy and medicine in Europe because they were the only ancient Greek anatomical texts that survived the Dark Ages in the form of Arabic (and then Latin) translations.
Owing to church prohibitions against dissection, European medicine in the Middle Ages relied upon Galen’s mixture of fact and fancy rather than on direct observation for its anatomical knowledge, though some dissections were authorized for teaching purposes. In the early 16th century, the artist Leonardo da Vinci undertook his own dissections, and his beautiful and accurate anatomical drawings cleared the way for Flemish physician Andreas Vesalius to “restore” the science of anatomy with his monumental De humani corporis fabrica libri septem (1543; “The Seven Books on the Structure of the Human Body”), which was the first comprehensive and illustrated textbook of anatomy. As a professor at the University of Padua, Vesalius encouraged younger scientists to accept traditional anatomy only after verifying it themselves, and this more critical and questioning attitude broke Galen’s authority and placed anatomy on a firm foundation of observed fact and demonstration.
From Vesalius’s exact descriptions of the skeleton, muscles, blood vessels, nervous system, and digestive tract, his successors in Padua progressed to studies of the digestive glands and the urinary and reproductive systems. Hieronymus Fabricius, Gabriello Fallopius, and Bartolomeo Eustachio were among the most important Italian anatomists, and their detailed studies led to fundamental progress in the related field of physiology. William Harvey’s discovery of the circulation of the blood, for instance, was based partly on Fabricius’s detailed descriptions of the venous valves.
Microscopic anatomy
The new application of magnifying glasses and compound microscopes to biological studies in the second half of the 17th century was the most important factor in the subsequent development of anatomical research. Primitive early microscopes enabled Marcello Malpighi to discover the system of tiny capillaries connecting the arterial and venous networks, Robert Hooke to first observe the small compartments in plants that he called “cells,” and Antonie van Leeuwenhoek to observe muscle fibres and spermatozoa. Thenceforth attention gradually shifted from the identification and understanding of bodily structures visible to the naked eye to those of microscopic size.
The use of the microscope in discovering minute, previously unknown features was pursued on a more systematic basis in the 18th century, but progress tended to be slow until technical improvements in the compound microscope itself, beginning in the 1830s with the gradual development of achromatic lenses, greatly increased that instrument’s resolving power. These technical advances enabled Matthias Jakob Schleiden and Theodor Schwann to recognize in 1838–39 that the cell is the fundamental unit of organization in all living things. The need for thinner, more transparent tissue specimens for study under the light microscope stimulated the development of improved methods of dissection, notably machines called microtomes that can slice specimens into extremely thin sections. In order to better distinguish the detail in these sections, synthetic dyes were used to stain tissues with different colours. Thin sections and staining had become standard tools for microscopic anatomists by the late 19th century. The field of cytology, which is the study of cells, and that of histology, which is the study of tissue organization from the cellular level up, both arose in the 19th century with the data and techniques of microscopic anatomy as their basis.
In the 20th century anatomists tended to scrutinize tinier and tinier units of structure as new technologies enabled them to discern details far beyond the limits of resolution of light microscopes. These advances were made possible by the electron microscope, which stimulated an enormous amount of research on subcellular structures beginning in the 1950s and became the prime tool of anatomical research. About the same time, the use of X-ray diffraction for studying the structures of many types of molecules present in living things gave rise to the new subspecialty of molecular anatomy.
Anatomical nomenclature
Scientific names for the parts and structures of the human body are usually in Latin; for example, the name musculus biceps brachii denotes the biceps muscle of the upper arm. Some such names were bequeathed to Europe by ancient Greek and Roman writers, and many more were coined by European anatomists from the 16th century on. Expanding medical knowledge meant the discovery of many bodily structures and tissues, but there was no uniformity of nomenclature, and thousands of new names were added as medical writers followed their own fancies, usually expressing them in a Latin form.
By the end of the 19th century the confusion caused by the enormous number of names had become intolerable. Medical dictionaries sometimes listed as many as 20 synonyms for one name, and more than 50,000 names were in use throughout Europe. In 1887 the German Anatomical Society undertook the task of standardizing the nomenclature, and, with the help of other national anatomical societies, a complete list of anatomical terms and names was approved in 1895 that reduced the 50,000 names to 5,528. This list, the Basle Nomina Anatomica, had to be subsequently expanded, and in 1955 the Sixth International Anatomical Congress at Paris approved a major revision of it known as the Paris Nomina Anatomica (or simply Nomina Anatomica). In 1998 this work was supplanted by the Terminologia Anatomica, which recognizes about 7,500 terms describing macroscopic structures of human anatomy and is considered to be the international standard on human anatomical nomenclature. The Terminologia Anatomica, produced by the International Federation of Associations of Anatomists and the Federative Committee on Anatomical Terminology (later known as the Federative International Programme on Anatomical Terminologies), was made available online in 2011.
Gist
TIF (or TIFF) is an image format used to store high-quality graphics. It stands for “Tag Image File Format” or “Tagged Image File Format”. The format was created by the Aldus Corporation, but Adobe later acquired it and has made subsequent updates to the format.
Summary
A TIFF, which stands for Tag Image File Format, is a computer file used to store raster graphics and image information. A favorite among photographers, TIFFs are a handy way to store high-quality images before editing if you want to avoid lossy file formats.
Aldus eventually merged with Adobe Systems, which has held the copyright on the format from then on. Today, TIFF files are still widely used in the printing and publishing industry.
A TIFF file is a great choice when high quality is your goal, especially when it comes to printing photos or even billboards. TIFF is also an adaptable format that can support both lossy and lossless compression.
Details
Tag Image File Format or Tagged Image File Format, commonly known by the abbreviations TIFF or TIF, is an image file format for storing raster graphics images, popular among graphic artists, the publishing industry, and photographers. TIFF is widely supported by scanning, faxing, word processing, optical character recognition, image manipulation, desktop publishing, and page-layout applications. The format was created by the Aldus Corporation for use in desktop publishing. Aldus published the latest version, 6.0, in 1992; it was subsequently updated with an Adobe Systems copyright after Adobe acquired Aldus in 1994. Several Aldus or Adobe technical notes have been published with minor extensions to the format, and several specifications have been based on TIFF 6.0, including TIFF/EP (ISO 12234-2), TIFF/IT (ISO 12639), TIFF-F (RFC 2306) and TIFF-FX (RFC 3949).
History
TIFF was created as an attempt to get desktop scanner vendors of the mid-1980s to agree on a common scanned image file format, in place of a multitude of proprietary formats. In the beginning, TIFF was only a binary image format (only two possible values for each pixel), because that was all that desktop scanners could handle. As scanners became more powerful, and as desktop computer disk space became more plentiful, TIFF grew to accommodate grayscale images, then color images. Today, TIFF, along with JPEG and PNG, is a popular format for deep-color images.
The first version of the TIFF specification was published by the Aldus Corporation in the autumn of 1986 after two major earlier draft releases. It can be labeled as Revision 3.0. It was published after a series of meetings with various scanner manufacturers and software developers. In April 1987 Revision 4.0 was released and it contained mostly minor enhancements. In October 1988 Revision 5.0 was released and it added support for palette color images and LZW compression.
TIFF is a complex format, defining many tags of which typically only a few are used in each file. This led to implementations supporting many varying subsets of the format, a situation that gave rise to the joke that TIFF stands for Thousands of Incompatible File Formats. This problem was addressed in revision 6.0 of the TIFF specification (June 1992) by introducing a distinction between Baseline TIFF (which all implementations were required to support) and TIFF Extensions (which are optional). Additional extensions are defined in two supplements to the specification, published September 1995 and March 2002 respectively.
Overview
A TIFF file contains one or several images, termed subfiles in the specification. The basic use-case for having multiple subfiles is to encode a multipage telefax in a single file, but it is also allowed to have different subfiles be different variants of the same image, for example scanned at different resolutions. Rather than being a continuous range of bytes in the file, each subfile is a data structure whose top-level entity is called an image file directory (IFD). Baseline TIFF readers are only required to make use of the first subfile, but each IFD has a field for linking to a next IFD.
The IFDs are where the tags for which TIFF is named are located. Each IFD contains one or several entries, each of which is identified by its tag. The tags are arbitrary 16-bit numbers; their symbolic names such as ImageWidth often used in discussions of TIFF data do not appear explicitly in the file itself. Each IFD entry has an associated value, which may be decoded based on general rules of the format, but it depends on the tag what that value then means. There may within a single IFD be no more than one entry with any particular tag. Some tags are for linking to the actual image data, other tags specify how the image data should be interpreted, and still other tags are used for image metadata.
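As a concrete illustration of this structure, here is a minimal sketch (Python, standard library only; the filename is a placeholder) that walks the IFD chain of a classic, non-BigTIFF file and prints each entry. The 4-byte value field is printed as a raw integer; a real reader would interpret it according to the entry's type and count.

```python
# Sketch: walk a TIFF file's IFD chain and dump every entry's tag.
import struct

with open("example.tif", "rb") as f:
    order = f.read(2)                    # b"II" = little-endian, b"MM" = big-endian
    endian = "<" if order == b"II" else ">"
    magic, ifd_offset = struct.unpack(endian + "HI", f.read(6))
    assert magic == 42                   # fixed version number in every TIFF header

    while ifd_offset:                    # an offset of 0 ends the chain of subfiles
        f.seek(ifd_offset)
        (count,) = struct.unpack(endian + "H", f.read(2))
        for _ in range(count):           # each IFD entry is exactly 12 bytes
            tag, dtype, n, value = struct.unpack(endian + "HHII", f.read(12))
            print(f"tag {tag}: type {dtype}, count {n}, value/offset {value}")
        (ifd_offset,) = struct.unpack(endian + "I", f.read(4))  # link to next IFD
```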
TIFF images are made up of rectangular grids of pixels. The two axes of this geometry are termed horizontal (or X, or width) and vertical (or Y, or length). Horizontal and vertical resolution need not be equal (since in a telefax they typically would not be equal). A baseline TIFF image divides the vertical range of the image into one or several strips, which are encoded (in particular: compressed) separately. Historically this served to facilitate TIFF readers (such as fax machines) with limited capacity to store uncompressed data — one strip would be decoded and then immediately printed — but the present specification motivates it by "increased editing flexibility and efficient I/O buffering". A TIFF extension provides the alternative of tiled images, in which case both the horizontal and the vertical ranges of the image are decomposed into smaller units.
An example of these things, which also serves to give a flavor of how tags are used in the TIFF encoding of images, is that a striped TIFF image would use tags 273 (StripOffsets), 278 (RowsPerStrip), and 279 (StripByteCounts). The StripOffsets point to the blocks of image data, the StripByteCounts say how long each of these blocks are (as stored in the file), and RowsPerStrip says how many rows of pixels there are in a strip; the latter is required even in the case of having just one strip, in which case it merely duplicates the value of tag 257 (ImageLength). A tiled TIFF image instead uses tags 322 (TileWidth), 323 (TileLength), 324 (TileOffsets), and 325 (TileByteCounts). The pixels within each strip or tile appear in row-major order, left to right and top to bottom.
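A short worked example of this strip bookkeeping, using hypothetical image dimensions, shows how the tag values relate:

```python
# Hypothetical 8-bit RGB image, 640 pixels wide and 600 long, 64 rows per strip.
image_width, image_length = 640, 600       # tags 256 (ImageWidth) / 257 (ImageLength)
rows_per_strip = 64                        # tag 278 (RowsPerStrip)
samples_per_pixel, bits_per_sample = 3, 8  # RGB, one byte per sample

# The strip count rounds up: the last strip may hold fewer rows (here, 24).
strips_per_image = (image_length + rows_per_strip - 1) // rows_per_strip
print(strips_per_image)  # 10

# Uncompressed size of one full strip, matching a StripByteCounts entry (tag 279).
strip_bytes = rows_per_strip * image_width * samples_per_pixel * bits_per_sample // 8
print(strip_bytes)       # 122880 bytes
```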
The data for one pixel is made up of one or several samples; for example an RGB image would have one Red sample, one Green sample, and one Blue sample per pixel, whereas a greyscale or palette color image only has one sample per pixel. TIFF allows for both additive (e.g. RGB, RGBA) and subtractive (e.g. CMYK) color models. TIFF does not constrain the number of samples per pixel (except that there must be enough samples for the chosen color model), nor does it constrain how many bits are encoded for each sample, but baseline TIFF only requires that readers support a few combinations of color model and bit-depth of images. Support for custom sets of samples is very useful for scientific applications; 3 samples per pixel is at the low end of multispectral imaging, and hyperspectral imaging may require hundreds of samples per pixel. TIFF supports having all samples for a pixel next to each other within a single strip/tile (PlanarConfiguration = 1) but also different samples in different strips/tiles (PlanarConfiguration = 2). The default format for a sample value is as an unsigned integer, but a TIFF extension allows declaring them as alternatively being signed integers or IEEE-754 floats, as well as specify a custom range for valid sample values.
TIFF images may be uncompressed, compressed using a lossless compression scheme, or compressed using a lossy compression scheme. The lossless LZW compression scheme has at times been regarded as the standard compression for TIFF, but this is technically a TIFF extension, and the TIFF6 specification notes the patent situation regarding LZW. Compression schemes vary significantly in at what level they process the data: LZW acts on the stream of bytes encoding a strip or tile (without regard to sample structure, bit depth, or row width), whereas the JPEG compression scheme both transforms the sample structure of pixels (switching to a different color model) and encodes pixels in 8×8 blocks rather than row by row.
Most data in TIFF files are numerical, but the format supports declaring data as rather being textual, if appropriate for a particular tag. Tags that take textual values include Artist, Copyright, DateTime, DocumentName, InkNames, and Model.
Internet Media Type
The MIME type image/tiff (defined in RFC 3302) without an application parameter is used for Baseline TIFF 6.0 files or to indicate that it is not necessary to identify a specific subset of TIFF or TIFF extensions. The optional "application" parameter (Example: Content-type: image/tiff; application=foo) is defined for image/tiff to identify a particular subset of TIFF and TIFF extensions for the encoded image data, if it is known. According to RFC 3302, specific TIFF subsets or TIFF extensions used in the application parameter must be published as an RFC.
MIME type image/tiff-fx (defined in RFC 3949 and RFC 3950) is based on TIFF 6.0 with TIFF Technical Notes TTN1 (Trees) and TTN2 (Replacement TIFF/JPEG specification). It is used for Internet fax compatible with the ITU-T Recommendations for Group 3 black-and-white, grayscale and color fax.
Digital preservation
Adobe holds the copyright on the TIFF specification (aka TIFF 6.0) along with the two supplements that have been published. These documents can be found on the Adobe TIFF Resources page. The Fax standard in RFC 3949 is based on these TIFF specifications.
TIFF files that strictly use the basic "tag sets" defined in TIFF 6.0, restrict compression to the methods identified in TIFF 6.0, and are adequately tested and verified by multiple sources can be used for storing documents. Commonly seen issues in the content and document management industry arise when TIFF structures contain proprietary headers, are not properly documented, contain "wrappers" or other containers around the TIFF datasets, or include improper or improperly implemented compression technologies.
Variants of TIFF can be used within document imaging and content/document management systems using CCITT Group IV 2D compression, which supports black-and-white (bitonal, monochrome) images, among other compression technologies that support color. When storage capacity and network bandwidth were greater issues than they are in today's server environments, documents in high-volume scanning operations were scanned in black and white (not in color or in grayscale) to conserve storage capacity.
The inclusion of the SampleFormat tag in TIFF 6.0 allows TIFF files to handle advanced pixel data types, including integer images with more than 8 bits per channel and floating point images. This tag made TIFF 6.0 a viable format for scientific image processing where extended precision is required. An example would be the use of TIFF to store images acquired using scientific CCD cameras that provide up to 16 bits per photosite of intensity resolution. Storing a sequence of images in a single TIFF file is also possible, and is allowed under TIFF 6.0, provided the rules for multi-page images are followed.
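As one illustration of multi-page, extended-precision storage, the following sketch uses the third-party Pillow library (an assumed tool choice; the TIFF rules themselves are library-agnostic) to write a short sequence of 16-bit grayscale frames into a single file:

```python
from PIL import Image

# Three single-value 16-bit grayscale frames ("I;16" is Pillow's 16-bit
# unsigned pixel mode; mode support can vary by Pillow version).
pages = [Image.new("I;16", (64, 64), color=v) for v in (0, 1000, 60000)]
pages[0].save(
    "sequence.tif",
    save_all=True,             # write every frame rather than just the first
    append_images=pages[1:],   # the remaining pages of the sequence
)
```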
Details
TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image's geometry. A TIFF file, for example, can be a container holding JPEG (lossy) and PackBits (lossless) compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames). The ability to store image data in a lossless format makes a TIFF file a useful image archive, because, unlike standard JPEG files, a TIFF file using lossless compression (or none) may be edited and re-saved without losing image quality. This is not the case when using the TIFF as a container holding compressed JPEG. Other TIFF options are layers and pages.
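PackBits, mentioned above as a lossless option, is a simple byte-oriented run-length scheme and shows how little machinery some TIFF compressions need. Here is a sketch of a decoder following the published algorithm:

```python
def packbits_decode(data: bytes) -> bytes:
    """Decode TIFF PackBits run-length data (a sketch of the published rules)."""
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i]
        i += 1
        if n < 128:            # 0..127: copy the next n + 1 bytes literally
            out += data[i:i + n + 1]
            i += n + 1
        elif n > 128:          # 129..255 (signed -127..-1): repeat next byte 257 - n times
            out += bytes([data[i]]) * (257 - n)
            i += 1
        # n == 128 (signed -128) is a no-op by definition
    return bytes(out)

# A 3-byte run of 0xAA followed by 3 literal bytes:
assert packbits_decode(b"\xfe\xaa\x02\x80\x00\x2a") == b"\xaa\xaa\xaa\x80\x00\x2a"
```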
TIFF offers the option of using LZW compression, a lossless data-compression technique for reducing a file's size. Use of this option was limited by patents on the LZW technique until their expiration in 2004.
The TIFF 6.0 specification consists of the following parts:
* Introduction (contains information about TIFF Administration, usage of Private fields and values, etc.)
* Part 1: Baseline TIFF
* Part 2: TIFF Extensions
* Part 3: Appendices
Additional Information
TIFFs are a file format popular with graphic designers and photographers for their flexibility, high quality, and near-universal compatibility. Learn more about these raster graphic files and how you can put them to use in your next project.
What is a TIFF file?
A TIFF, which stands for Tag Image File Format, is a computer file used to store raster graphics and image information. A favorite among photographers, TIFFs are a handy way to store high-quality images before editing if you want to avoid lossy file formats.
TIFF files:
* Have either a .tiff or .tif extension.
* Are a lossless form of file compression, which means they’re larger than most but don’t lose image quality.
* Work with Windows, Linux, and macOS.
TIFFs aren’t the smallest files around, but they enable a user to tag up extra image information and data, such as additional layers. They’re also compatible with editing software like Adobe Photoshop.
History of the TIFF file.
Aldus Corporation created the TIFF file in the mid-1980s for use in desktop publishing. TIFFs retained high-quality data and could publish content directly from a computer. The file was designed as a universally applicable format for desktop scanners — hardware that previously handled, depending on the make and model, only a limited set of file formats.
Initially, TIFFs were restricted to print publications before they expanded into digital content. Aldus Corporation was later acquired by Adobe, which has since been responsible for the copyright of the file format.
What are TIFFs used for?
TIFFs are popular across a range of industries — such as design, photography, and desktop publishing. TIFF files can be used for:
* High-quality photographs.
TIFFs are perfect for retaining lots of impressively detailed image data because they use a predominantly lossless form of file compression. This makes them a great choice for professional photographers and editors.
* High-resolution scans.
The detailed image quality stored within a TIFF means they’re ideal for scanned images and high-resolution documents. You might find them a useful choice for storing high-resolution images of your artwork or personal documents.
* Container files.
TIFFs also work as container files that store smaller JPEGs. You could store several lower-resolution JPEGs within one TIFF if you wanted to email a selection of photos to a contact.
Gist
The full form of PNG is Portable Network Graphics. It is a file format that is used for lossless image compression.
Summary
This document describes PNG (Portable Network Graphics), an extensible file format for the lossless, portable, well-compressed storage of static and animated raster images. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-colour, greyscale, and truecolour images are supported, plus an optional alpha channel. Sample depths range from 1 to 16 bits.
PNG is designed to work well in online viewing applications, such as the World Wide Web, so it is fully streamable with a progressive display option. PNG is robust, providing both full file integrity checking and simple detection of common transmission errors. Also, PNG can store colour space data for improved colour matching on heterogeneous platforms.
This specification defines two Internet Media Types, image/png and image/apng.
Status of This Document
This specification is intended to become an International Standard, but is not yet one. It is inappropriate to refer to this specification as an International Standard.
This document was published by the Portable Network Graphics (PNG) Working Group as a Candidate Recommendation Snapshot using the Recommendation track.
Details
PNG (Portable Network Graphics) is a file format used for lossless image compression. PNG has almost entirely replaced the Graphics Interchange Format (GIF) that was widely used in the past.
Like a GIF, a PNG file is compressed in lossless fashion, meaning all image information is restored when the file is decompressed during viewing. A PNG file is not intended to replace the JPEG format, which is "lossy" but lets the creator make a trade-off between file size and image quality when the image is compressed. Typically, an image in a PNG file can be 10 percent to 30 percent more compressed than in a GIF format.
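Lossless here means a perfect round trip. PNG’s compression is built on the DEFLATE algorithm; as an illustrative sketch using Python’s standard zlib module (which implements the same algorithm), every byte survives compression and decompression unchanged:

```python
import zlib

original = b"PNG-style lossless compression " * 100
compressed = zlib.compress(original, level=9)  # DEFLATE, as PNG uses
restored = zlib.decompress(compressed)

assert restored == original  # nothing was lost
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```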
File format of PNG
The PNG format includes these features:
* Not only can one color be made transparent, but the degree of transparency (the alpha value) can be controlled.
* Supports image interlacing; an interlaced PNG builds up a recognizable preview faster than an interlaced GIF.
* Gamma correction allows an image’s color brightness to be tuned for the differences among specific displays.
* Images can be saved using true color, as well as in the palette and grayscale formats provided by the GIF.
JPEG vs. PNG
JPEG and PNG are the two most commonly used image file formats on the web, but there are differences between them.
JPEG (Joint Photographic Experts Group) is a format whose development began in 1986. This image format takes up very little storage space and is quick to upload or download. JPEGs can display millions of colors, so they’re perfect for real-life images, such as photographs. They work well on websites and are ideal for posting on social media.
Because JPEG is “lossy” -- meaning that when data is compressed, unnecessary (redundant) information is deleted from the file permanently -- some quality will be lost or compromised when a file is converted to a JPEG.
JPEG is the default file format for uploading pictures to the web, unless they have text in them, need transparency, are animated or would benefit from color changes, such as logos or icons.
However, JPEGs aren’t good for images that contain very little color data, such as interface screenshots and other simple computer-generated graphics.
The main advantage of PNG over JPEG is that the compression is lossless, which means there’s no loss in quality each time a file is opened and saved again. PNG is also good for detailed, high-contrast images. Consequently, PNG is typically the default file format for screenshots because, instead of compressing groups of pixels together, it offers a nearly perfect pixel-for-pixel representation of the screen.
Another key feature of PNG is that it supports transparency. In both grayscale and color images, pixels in PNG files can be transparent, enabling users to create images that overlay neatly with the content of a website or another image.
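As a quick sketch of per-pixel transparency (again assuming Pillow; the file name is hypothetical), the following builds an RGBA image that is fully transparent except for a half-opaque square, something neither baseline JPEG nor single-transparent-color GIF can represent:

```python
from PIL import Image

# Fully transparent canvas: alpha 0 everywhere.
img = Image.new("RGBA", (200, 200), (0, 0, 0, 0))
# Paste a half-opaque red square (alpha 128 of 255) into the middle.
square = Image.new("RGBA", (100, 100), (255, 0, 0, 128))
img.paste(square, (50, 50))
img.save("overlay.png")  # PNG preserves the per-pixel alpha channel
```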
Uses of PNG
PNG can be used for:
* Line art, such as drawings, illustrations and comics.
* Photos or scans of text, such as handwritten letters or newspaper articles.
* Charts, logos, graphs, architectural plans and blueprints.
* Anything with text, such as page layouts made in Photoshop or InDesign then saved as images.
Advantages of PNG
The advantages of the PNG format include:
* Lossless compression -- doesn’t lose detail and quality after image compression.
* Supports a large number of colors -- the format is suitable for different types of digital images, including photographs and graphics.
* Support for transparency -- supports compression of digital images with transparent areas.
* Perfect for editing images -- lossless compression makes it well suited to storing digital images while they are edited.
* Sharp edges and solid colors -- ideal for images containing text, line art and graphics.
Disadvantages of PNG
The disadvantages of the PNG format include:
* Bigger file size -- compresses digital images at a larger file size.
* Not ideal for professional-quality print graphics -- doesn’t support non-RGB color spaces such as CMYK (cyan, magenta, yellow and black).
* Doesn’t support embedding EXIF metadata used by most digital cameras.
* Doesn’t natively support animation, but there are unofficial extensions available.
History of PNG
PNG was developed by an Internet working group, headed by Thomas Boutell, that came together in 1994 to create the PNG format. At the time, the GIF format was already well-established. The group’s goal was to increase color support as well as provide an image format that didn’t need a patent license.
The GIF format was owned by Unisys and its use in image-handling software involved licensing or other legal considerations. Web users could make, view and send GIF files freely but they couldn’t develop software that built them without an arrangement with Unisys.
The first PNG draft was issued on January 4, 1995, and within a week, most of the major PNG features had been proposed and accepted. Over the next three weeks, the group produced seven important drafts.
By the beginning of March 1995, all the specifications were in place (draft nine) and accepted. In October 1996, the first version of the PNG specification was issued as a W3C recommendation. Additional versions were released in 1998, 1999 and 2003, when it became an international standard.
Additional Information
Portable Network Graphics (PNG) is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF)—unofficially, the initials PNG stood for the recursive acronym "PNG's not GIF".
PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore, non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of chunks, encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.
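The chunk layout is simple enough to walk by hand. The sketch below, in Python with only standard-library modules, checks the 8-byte PNG signature and then iterates chunks, verifying each CRC as the specification describes; the input file name is hypothetical.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def iter_chunks(path):
    """Yield (chunk type, data) for every chunk in a PNG file."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIGNATURE:
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            # Each chunk: 4-byte big-endian length, 4-byte type,
            # <length> data bytes, then a 4-byte CRC.
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            (crc,) = struct.unpack(">I", f.read(4))
            if zlib.crc32(ctype + data) != crc:  # CRC covers type + data
                raise ValueError(f"bad CRC in {ctype!r} chunk")
            yield ctype.decode("ascii"), data
            if ctype == b"IEND":  # IEND marks the end of the datastream
                break

for name, data in iter_chunks("example.png"):
    print(name, len(data), "bytes")
```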
PNG files have the ".png" file extension and the "image/png" MIME media type. PNG was published as an informational RFC 2083 in March 1997 and as an ISO/IEC 15948 standard in 2004.
History and development
The motivation for creating the PNG format was the realization, on 28 December 1994, that the Lempel–Ziv–Welch (LZW) data compression algorithm used in the Graphics Interchange Format (GIF) format was patented by Unisys. The patent required that all software supporting GIF pay royalties, leading to a flurry of criticism from Usenet users. One of them was Thomas Boutell, who on 4 January 1995 posted a precursory discussion thread on the Usenet newsgroup "comp.graphics" in which he devised a plan for a free alternative to GIF. Other users in that thread put forth many propositions that would later be part of the final file format. Oliver Fromme, author of the popular JPEG viewer QPEG, proposed the PING name, eventually becoming PNG, a recursive acronym meaning PING is not GIF, and also the .png extension. Other suggestions later implemented included the deflate compression algorithm and 24-bit color support, the lack of the latter in GIF also motivating the team to create their file format. The group would become known as the PNG Development Group, and as the discussion rapidly expanded, it later used a mailing list associated with a CompuServe forum.
The full specification of PNG was released under the approval of W3C on 1 October 1996, and later as RFC 2083 on 15 January 1997. The specification was revised on 31 December 1998 as version 1.1, which addressed technical problems for gamma and color correction. Version 1.2, released on 11 August 1999, added the iTXt chunk as the specification's only change, and a reformatted version of 1.2 was released as a second edition of the W3C standard on 10 November 2003, and as an International Standard (ISO/IEC 15948:2004) on 3 March 2004.
Although GIF allows for animation, it was decided that PNG should be a single-image format. In 2001, the developers of PNG published the Multiple-image Network Graphics (MNG) format, with support for animation. MNG achieved moderate application support, but not enough among mainstream web browsers and no usage among web site designers or publishers. In 2008, certain Mozilla developers published the Animated Portable Network Graphics (APNG) format with similar goals. APNG is a format that is natively supported by Gecko- and Presto-based web browsers and is also commonly used for thumbnails on Sony's PlayStation Portable system (using the normal PNG file extension). In 2017, Chromium-based browsers adopted APNG support. In January 2020, Microsoft Edge became Chromium-based, thus inheriting support for APNG. With this, all major browsers now support APNG.
PNG Working Group
The original PNG specification was authored by an ad hoc group of computer graphics experts and enthusiasts. Discussions and decisions about the format were conducted by email. The original authors listed on RFC 2083 are:
Editor: Thomas Boutell
Contributing Editor: Tom Lane
Authors (in alphabetical order by last name): Mark Adler, Thomas Boutell, Christian Brunschen, Adam M. Costello, Lee Daniel Crocker, Andreas Dilger, Oliver Fromme, Jean-loup Gailly, Chris Herborth, Aleks Jakulin, Neal Kettler, Tom Lane, Alexander Lehmann, Chris Lilley, Dave Martindale, Owen Mortensen, Keith S. Pickens, Robert P. Poole, Glenn Randers-Pehrson, Greg Roelofs, Willem van Schaik, Guy Schalnat, Paul Schmidt, Tim Wegner, Jeremy Wohl.
Gist
Adobe PDF files—short for portable document format files—are one of the most commonly used file types today. If you've ever downloaded a printable form or document from the Web, such as an IRS tax form, there's a good chance it was a PDF file. Whenever you see a file that ends with .pdf, that means it's a PDF file.
Why use PDF files?
Let's say you create a newsletter in Microsoft Word and share it as a .docx file, which is the default file format for Word documents. Unless everyone has Microsoft Word installed on their computers, there's no guarantee that they would be able to open and view the newsletter. And because Word documents are meant to be edited, there's a chance that some of the formatting and text in your document may be shifted around.
By contrast, PDF files are primarily meant for viewing, not editing. One reason they're so popular is that PDFs can preserve document formatting, which makes them more shareable and helps them to look the same on any device. Sharing the newsletter as a PDF file would help ensure everyone is able to view it as you intended.
Summary
Portable Document Format (PDF) is a file format that has captured all the elements of a printed document as an electronic image that users can view, navigate, print or forward to someone else.
However, PDF files are more than images of documents. Files can embed type fonts so that they're available at any viewing location. They can also include interactive elements, such as buttons for forms entry and for triggering sound or video.
How are PDFs created?
PDF files are created using tools such as Adobe Acrobat or other software that can save documents in PDF.
To view saved PDF files, users need either the full Acrobat program, which is not free, or a less expensive program, such as Adobe Reader, which is available for free from Adobe. PDF files can also be viewed in most web browsers.
A PDF file contains one or more page images, each of which users can zoom in on or out from. They can also scroll backward and forward through the pages of a PDF.
What are the use cases for PDF documents?
Some situations in which PDF files are desirable are the following:
* When users want to preserve the original formatting of a document. For example, if they created a resume using a word processing program and saved it as a PDF, the recipient sees the same fonts and layout that the sender used.
* When users want to send a document electronically but be sure that the recipient sees it exactly as the sender intended it to look.
* When a user wants to create a document that cannot be easily edited. For example, if they wanted to send someone a contract but didn't want them to change it, the creator could save it as a PDF.
What are the benefits of using PDF?
PDF files are useful for documents such as magazine articles, product brochures or flyers, in which the creator wants to preserve the original graphic appearance online.
PDFs are also useful for documents that are downloaded and printed, such as resumes, contracts and application forms.
PDFs also support embedding digital signatures in documents for authenticating the integrity of a digital document.
What are the disadvantages of PDFs?
The main disadvantage of PDFs is that they are not easily editable. If an individual needs to change a document after it has been saved as a PDF, they must return to the original program used to create it and make the changes there. Then, they need to save a new PDF.
Another disadvantage is that some older versions of software cannot read PDFs. In order to open a PDF, recipients must have a PDF reader installed on their computer.
Are there security risks associated with PDFs?
PDFs can contain viruses, so it's important to be sure that recipients trust the source of any PDF files they download. In addition, PDFs can be password-protected so that anyone who tries to open the file needs a password in order to access it.
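As a sketch of applying password protection in code, the example below uses the third-party pypdf library (`pip install pypdf`); the file names and password are hypothetical.

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("contract.pdf")
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

writer.encrypt("s3cret-passphrase")  # recipients need this password to open
with open("contract-protected.pdf", "wb") as out:
    writer.write(out)
```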
Can PDFs be converted to other formats?
PDF files can be converted to other file formats, such as Microsoft Word, Excel or image formats, such as JPG. However, the format of the original document may not be perfectly preserved in the conversion process.
Details
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it. PDF has its roots in "The Camelot Project" initiated by Adobe co-founder John Warnock in 1991. PDF was standardized as ISO 32000 in 2008. The latest edition, ISO 32000-2:2020, was published in December 2020.
PDF files may contain a variety of content besides flat text and graphics including logical structuring elements, interactive elements such as annotations and form-fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats. The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features.
History
Adobe Systems made the PDF specification available free of charge in 1993. In the early years PDF was popular mainly in desktop publishing workflows, and competed with several other formats, including DjVu, Envoy, Common Ground Digital Paper, Farallon Replica and even Adobe's own PostScript format.
PDF was a proprietary format controlled by Adobe until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008, at which time control of the specification passed to an ISO Committee of volunteer industry experts. In 2008, Adobe published a Public Patent License to ISO 32000-1 granting royalty-free rights for all patents owned by Adobe necessary to make, use, sell, and distribute PDF-compliant implementations.
PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification. These proprietary technologies are not standardized, and their specification is published only on Adobe's website. Many of them are not supported by popular third-party implementations of PDF.
ISO published ISO 32000-2 in 2017, available for purchase, replacing the free specification provided by Adobe. In December 2020, the second edition of PDF 2.0, ISO 32000-2:2020, was published, with clarifications, corrections, and critical updates to normative references (ISO 32000-2 does not include any proprietary technologies as normative references). In April 2023 the PDF Association made ISO 32000-2 available for download free of charge.
Technical details
A PDF file is often a combination of vector graphics, text, and bitmap graphics. The basic types of content in a PDF are:
* Typeset text stored as content streams (i.e., not encoded in plain text);
* Vector graphics for illustrations and designs that consist of shapes and lines;
* Raster graphics for photographs and other types of images; and
* Other multimedia objects.
In later PDF revisions, a PDF document can also support links (inside document or web page), forms, JavaScript (initially available as a plugin for Acrobat 3.0), or any other types of embedded contents that can be handled using plug-ins.
PDF combines three technologies:
* An equivalent subset of the PostScript page description programming language but in declarative form, for generating the layout and graphics.
* A font-embedding/replacement system to allow fonts to travel with the documents.
* A structured storage system to bundle these elements and any associated content into a single file, with data compression where appropriate.
PostScript language
PostScript is a page description language run in an interpreter to generate an image. It can handle graphics and has standard features of programming languages such as branching and looping. PDF is a subset of PostScript, simplified to remove such flow control features, while graphics commands remain.
PostScript was originally designed for a drastically different use case: transmission of one-way linear print jobs in which the PostScript interpreter would collect up commands until it encountered the showpage command, then evaluate the commands to render a page to a printing device. PostScript was not intended for long-term storage and interactive rendering of electronic documents, so there was no need to scroll back to previous pages. Thus, to accurately render any given page, it was necessary to evaluate all the commands before that point.
Traditionally, the PostScript-like PDF code is generated from a source PostScript file (that is, an executable program), with standard compiler techniques like loop unrolling, inlining and removing unused branches, resulting in code that is purely declarative and static. This is then packaged into a container format, together with all necessary dependencies for correct rendering (external files, graphics, or fonts to which the document refers), and compressed.
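To make the container idea concrete, here is a from-scratch sketch in Python that writes a one-page "Hello, PDF" file: numbered objects holding declarative, PostScript-like drawing operators, followed by a cross-reference table of byte offsets and a trailer. It is a minimal illustration of the structure just described, not a production PDF writer, and it omits the compression a real tool would apply; the output file name is hypothetical.

```python
objects = [
    b"<< /Type /Catalog /Pages 2 0 R >>",
    b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
    b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
    b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
    None,  # object 4: the content stream, filled in below
    b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
]

# Declarative page description: set a font, position the text cursor,
# show a string. No loops or branches -- just static drawing commands.
stream = b"BT /F1 24 Tf 72 720 Td (Hello, PDF) Tj ET"
objects[3] = b"<< /Length %d >>\nstream\n%s\nendstream" % (len(stream), stream)

out = bytearray(b"%PDF-1.4\n")
offsets = []
for num, body in enumerate(objects, start=1):
    offsets.append(len(out))  # byte offset where object <num> starts
    out += b"%d 0 obj\n%s\nendobj\n" % (num, body)

xref_pos = len(out)
out += b"xref\n0 %d\n" % (len(objects) + 1)
out += b"0000000000 65535 f \n"  # entry 0: the required free-list head
for off in offsets:
    out += b"%010d 00000 n \n" % off  # fixed-width 20-byte in-use entries
out += b"trailer\n<< /Size %d /Root 1 0 R >>\n" % (len(objects) + 1)
out += b"startxref\n%d\n%%%%EOF\n" % xref_pos

with open("hello.pdf", "wb") as f:
    f.write(bytes(out))
```

The cross-reference table is what lets a viewer jump straight to any object (and thus any page) without evaluating the whole file, which is exactly the random-access property PostScript lacked.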
As a document format, PDF has several advantages over PostScript:
* PDF contains only static declarative PostScript code that can be processed as data, and does not require a full program interpreter or compiler. This avoids the complexity and security risks of a full language engine.
* Like Display PostScript, since version 1.4 PDF supports transparent graphics, while standard PostScript does not.
* PDF enforces the rule that the code for a page cannot affect any other pages. That rule is strongly recommended for PostScript code too, but has to be implemented explicitly (see, e.g., the Document Structuring Conventions), because PostScript is a full programming language that allows for greater flexibility and is not limited to the concepts of pages and documents.
* All data required for rendering is included in the file itself, improving portability.
Its disadvantages are:
* Loss of flexibility, and limitation to a single use case.
* A (sometimes much) larger size, although compression mitigates this for trivially repetitive content. (Overall, compared to e.g. a bitmap image, a PDF is still orders of magnitude smaller.)
PDF since v1.6 supports embedding of interactive 3D documents: 3D drawings can be embedded using U3D or PRC and various other data formats.
Additional Information
PDF is an abbreviation that stands for Portable Document Format. It's a versatile file format created by Adobe that gives people an easy, reliable way to present and exchange documents - regardless of the software, hardware or operating systems being used by anyone who views the document.
The PDF format is now an open standard, maintained by the International Organisation for Standardisation (ISO). PDF docs can contain links and buttons, form fields, audio, video and business logic. They can be signed electronically and can easily be viewed on Windows or MacOS using the free Adobe Acrobat Reader software.
In 1991, Adobe co-founder Dr John Warnock launched the paper-to-digital revolution with an idea he called The Camelot Project. The goal was to enable anyone to capture documents from any application, send electronic versions of these documents anywhere, and view and print them on any machine. By 1992, Camelot had developed into PDF. Today, it is the file format trusted by businesses around the world.
Warnock’s vision continues to shape the way we work. When you create an Adobe PDF from documents or images, it looks just the way you intended it to. While many PDFs are simply pictures of pages, Adobe PDFs preserve all the data in the original file format—even when text, graphics, spreadsheets and more are combined in a single file.
You can be confident your PDF file meets ISO 32000 standards for electronic document exchange, including special-purpose standards such as PDF/A for archiving, PDF/E for engineering and PDF/X for printing. You can also create PDFs to meet a range of accessibility standards that make content more usable by people with disabilities.
When you need to electronically sign a PDF, it’s easy using the Adobe Acrobat Reader mobile app or the Acrobat Sign mobile app. Access your PDFs from any web browser or operating system. For managing legally binding electronic or digital signature processes, try Adobe Acrobat or Acrobat Sign.
Gist
A GIF is a kind of image file that showcases brief animations. Mainly used to produce simple, repeating animations, GIFs are a common way to share compact video clips on the web or to give movement to graphics and text. A GIF can be produced from a sequence of static pictures or from video footage and is frequently used to spread amusing or entertaining content on social media platforms and across the web.
GIF stands for "Graphics Interchange Format."
Summary
GIF is a digital file format devised in 1987 by the Internet service provider CompuServe as a means of reducing the size of images and short animations. Because GIF is a lossless data compression format, meaning that no information is lost in the compression, it quickly became a popular format for transmitting and storing graphic files.
At the time of its creation, GIF’s support of 256 different colours was considered vast, as many computer monitors had the same limit (in 8-bit systems, 2^8 colours). The method used to keep file size to a minimum is a compression algorithm commonly referred to as LZW, named after its inventors, Abraham Lempel and Jacob Ziv of Israel and Terry Welch of the United States. LZW was the source of a controversy started by the American Unisys Corporation in 1994, when it was revealed that it owned a patent for LZW and was belatedly seeking royalties from several users. Although the relevant patents expired by 2004, the controversy resulted in the creation of the portable network graphics (PNG) format, an alternative to GIF that offered a wider array of colours and different compression methods. JPEG (Joint Photographic Experts Group), a digital file format that supports millions of different colour options, is often used to transmit better-quality images, such as digital photographs, at the cost of greater size. Despite the competition, GIF remains popular.
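The LZW idea is short enough to sketch. The toy Python encoder below builds a dictionary of ever-longer byte strings and emits one code per longest known match; GIF’s real codec adds packed variable-width codes and special clear and end-of-information codes on top of this core scheme.

```python
def lzw_compress(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}  # seed: every single byte
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                 # keep extending the current match
        else:
            out.append(table[w])   # emit the code for the longest match
            table[wc] = next_code  # learn the new, longer string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(f"{len(codes)} codes emitted for 24 input bytes")  # repetition pays off
```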
Details
GIF stands for Graphics Interchange Format and is a type of digital image file that is often used for short, looping animations. They’re often used to create memes and other types of social content.
What is a GIF?
A GIF file contains a series of static images that are displayed in rapid succession, creating the illusion of a short animation. Unlike traditional video files, GIFs have a limited color palette and don't support sound, but they are small in size and easy to share online.
Where are GIFs used?
GIFs are used online, mostly on social media, to express emotions, reactions, or just to add humor to a message. Unlike traditional video formats, GIFs only support 256 colors, which makes them smaller in file size and ideal for use in web-based applications, website pages, and online messaging.
Where do GIFs come from?
GIFs can be created by taking a short clip from a TV show, movie, or other media, and looping it. People create or caption GIFs to make a point or add a funny spin to a current event or cultural reference. For example, you might see a GIF of a politician making a ridiculous statement, accompanied by a caption or meme.
Why are GIFs so popular?
GIFs have become an integral part of online culture and social commentary. They’re used for visual communication, allowing users to express themselves in a quick and playful way without using words. For example, using a GIF of a cat waving its paw to say "goodbye" or a GIF of a person rolling their eyes to show sarcasm.
In addition to their emotional and communicative uses, GIFs are also very useful for technical design purposes. GIFs' small file size and seamless looping make them really easy to use in web design, video game development, and other applications where small, animated images are needed.
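As a minimal sketch of building a looping GIF in code (assuming the Pillow library; the frames, colors, and timings are arbitrary stand-ins):

```python
from PIL import Image

# Three solid-color frames stand in for real artwork or video stills.
frames = [Image.new("RGB", (120, 120), c) for c in ("red", "green", "blue")]
frames[0].save(
    "loop.gif",
    save_all=True,            # write every frame, not just the first
    append_images=frames[1:],
    duration=200,             # milliseconds each frame is shown
    loop=0,                   # 0 = loop forever
)
```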
Gist
Any disorder that affects the heart muscle is called a cardiomyopathy. Cardiomyopathy causes the heart to lose its ability to pump blood well. In some cases, the heart rhythm also becomes disturbed. This leads to arrhythmias (irregular heartbeats).
Summary
Cardiomyopathy refers to problems with your heart muscle that can make it harder for your heart to pump blood. There are many types and causes of cardiomyopathy, and it can affect people of all ages.
Depending on the type of cardiomyopathy that you have, your heart muscle may become thicker, stiffer, or larger than normal. This can weaken your heart and cause an irregular heartbeat, heart failure, or a life-threatening condition called cardiac arrest.
Some people have no symptoms and do not need treatment. Others may have shortness of breath, tiredness, dizziness and fainting, or chest pain as the disease gets worse. Your doctor will ask about your symptoms and do tests to diagnose cardiomyopathy. Echocardiography is the most common test.
Cardiomyopathy can be caused by your genes, other medical conditions, or extreme stress. It can also happen or get worse during pregnancy. Many times, the cause is not known.
Treatments include medicines, procedures, and implanted devices. These treatments might not fix the problem with your heart but can help control your symptoms and prevent the disease from getting worse.
Details:
Overview
Cardiomyopathy is a disease of the heart muscle. It causes the heart to have a harder time pumping blood to the rest of the body, which can lead to symptoms of heart failure. Cardiomyopathy also can lead to some other serious heart conditions.
There are various types of cardiomyopathy. The main types include dilated, hypertrophic and restrictive cardiomyopathy. Treatment includes medicines and sometimes surgically implanted devices and heart surgery. Some people with severe cardiomyopathy need a heart transplant. Treatment depends on the type of cardiomyopathy and how serious it is.
Types
* Dilated cardiomyopathy
* Hypertrophic cardiomyopathy
Symptoms
Some people with cardiomyopathy don't ever get symptoms. For others, symptoms appear as the condition becomes worse. Cardiomyopathy symptoms can include:
* Shortness of breath or trouble breathing with activity or even at rest.
* Chest pain, especially after physical activity or heavy meals.
* Heartbeats that feel rapid, pounding or fluttering.
* Swelling of the legs, ankles, feet, stomach area and neck veins.
* Bloating of the stomach area due to fluid buildup.
* Cough while lying down.
* Trouble lying flat to sleep.
* Fatigue, even after getting rest.
* Dizziness.
* Fainting.
Symptoms tend to get worse unless they are treated. In some people, the condition becomes worse quickly. In others, it might not become worse for a long time.
When to see a doctor
See your healthcare professional if you have any symptoms of cardiomyopathy. Call your local emergency number if you faint, have trouble breathing or have chest pain that lasts for more than a few minutes.
Some types of cardiomyopathy can be passed down through families. If you have the condition, your healthcare professional might recommend that your family members be checked.
Causes
Often, the cause of the cardiomyopathy isn't known. But some people get it due to another condition. This is known as acquired cardiomyopathy. Other people are born with cardiomyopathy because of a gene passed on from a parent. This is called inherited cardiomyopathy.
Certain health conditions or behaviors that can lead to acquired cardiomyopathy include:
* Long-term high blood pressure.
* Heart tissue damage from a heart attack.
* Long-term rapid heart rate.
* Heart valve problems.
* COVID-19 infection.
* Certain infections, especially those that cause inflammation of the heart.
* Metabolic disorders, such as obesity, thyroid disease or diabetes.
* Lack of essential vitamins or minerals in the diet, such as thiamin (vitamin B-1).
* Pregnancy complications.
* Iron buildup in the heart muscle, called hemochromatosis.
* The growth of tiny lumps of inflammatory cells called granulomas in any part of the body. When this happens in the heart or lungs, it's called sarcoidosis.
* The buildup of irregular proteins in the organs, called amyloidosis.
* Connective tissue disorders.
* Drinking too much alcohol over many years.
* Use of illicit drugs, such as amphetamines or anabolic steroids.
* Use of some chemotherapy medicines and radiation to treat cancer.
Types of cardiomyopathy include:
* Dilated cardiomyopathy. In this type of cardiomyopathy, the heart's chambers thin and stretch, growing larger. The condition tends to start in the heart's main pumping chamber, called the left ventricle. As a result, the heart has trouble pumping blood to the rest of the body.
This type can affect people of all ages. But it happens most often in people younger than 50 and is more likely to affect men. Conditions that can lead to a dilated heart include coronary artery disease and heart attack. But for some people, gene changes play a role in the disease.
* Hypertrophic cardiomyopathy. In this type, the heart muscle becomes thickened. This makes it harder for the heart to work. The condition mostly affects the muscle of the heart's main pumping chamber.
Hypertrophic cardiomyopathy can start at any age. But it tends to be worse if it happens during childhood. Most people with this type of cardiomyopathy have a family history of the disease. Some gene changes have been linked to hypertrophic cardiomyopathy. The condition isn't caused by another heart problem.
* Restrictive cardiomyopathy. In this type, the heart muscle becomes stiff and less flexible. As a result, it can't expand and fill with blood between heartbeats. This least common type of cardiomyopathy can happen at any age. But it most often affects older people.
Restrictive cardiomyopathy can occur for no known reason, also called an idiopathic cause. Or it can be caused by a disease elsewhere in the body that affects the heart, such as amyloidosis.
* Arrhythmogenic right ventricular cardiomyopathy (ARVC). This is a rare type of cardiomyopathy that tends to happen between the ages of 10 and 50. It mainly affects the muscle in the lower right heart chamber, called the right ventricle. The muscle is replaced by fat that can become scarred. This can lead to heart rhythm problems. Sometimes, the condition involves the left ventricle as well. ARVC often is caused by gene changes.
* Unclassified cardiomyopathy. Other types of cardiomyopathy fall into this group.
Risk factors
Many things can raise the risk of cardiomyopathy, including:
* Family history of cardiomyopathy, heart failure and sudden cardiac arrest.
* Long-term high blood pressure.
* Conditions that affect the heart. These include a past heart attack, coronary artery disease or an infection in the heart.
* Obesity, which makes the heart work harder.
* Long-term alcohol misuse.
* Illicit drug use, such as amphetamines and anabolic steroids.
* Treatment with certain chemotherapy medicines and radiation for cancer.
Many diseases also raise the risk of cardiomyopathy, including:
* Diabetes.
* Thyroid disease.
* Storage of excess iron in the body, called hemochromatosis.
* Buildup of a certain protein in organs, called amyloidosis.
* The growth of small patches of inflamed tissue in organs, called sarcoidosis.
* Connective tissue disorders.
Complications
Cardiomyopathy can lead to serious medical conditions, including:
* Heart failure. The heart can't pump enough blood to meet the body's needs. Without treatment, heart failure can be life-threatening.
* Blood clots. Because the heart can't pump well, blood clots might form in the heart. If clots enter the bloodstream, they can block the blood flow to other organs, including the heart and brain.
* Heart valve problems. Because cardiomyopathy can cause the heart to become larger, the heart valves might not close properly. This can cause blood to flow backward in the valve.
* Cardiac arrest and sudden death. Cardiomyopathy can trigger irregular heart rhythms that cause fainting. Sometimes, irregular heartbeats can cause sudden death if the heart stops beating effectively.
Prevention
Inherited types of cardiomyopathy can't be prevented. Let your healthcare professional know if you have a family history of the condition.
You can help lower the risk of acquired types of cardiomyopathy, which are caused by other conditions. Take steps to lead a heart-healthy lifestyle, including:
* Stay away from alcohol or illegal drugs.
* Control any other conditions you have, such as high blood pressure, high cholesterol or diabetes.
* Eat a healthy diet.
* Get regular exercise.
* Get enough sleep.
* Lower your stress.
These healthy habits also can help people with inherited cardiomyopathy control their symptoms.
Additional Information
Cardiomyopathy is a group of primary diseases of the heart muscle. Early on, there may be few or no symptoms. As the disease worsens, shortness of breath, feeling tired, and swelling of the legs may occur, due to the onset of heart failure. An irregular heartbeat and fainting may occur. Those affected are at an increased risk of sudden cardiac death.
As of 2013, cardiomyopathies are defined as "disorders characterized by morphologically and functionally abnormal myocardium in the absence of any other disease that is sufficient, by itself, to cause the observed phenotype." Types of cardiomyopathy include hypertrophic cardiomyopathy, dilated cardiomyopathy, restrictive cardiomyopathy, arrhythmogenic right ventricular dysplasia, and Takotsubo cardiomyopathy (broken heart syndrome). In hypertrophic cardiomyopathy the heart muscle enlarges and thickens. In dilated cardiomyopathy the ventricles enlarge and weaken. In restrictive cardiomyopathy the ventricle stiffens.
In many cases, the cause cannot be determined. Hypertrophic cardiomyopathy is usually inherited, whereas dilated cardiomyopathy is inherited in about one third of cases. Dilated cardiomyopathy may also result from alcohol, heavy metals, coronary artery disease, cocaine use, and viral infections. Restrictive cardiomyopathy may be caused by amyloidosis, hemochromatosis, and some cancer treatments. Broken heart syndrome is caused by extreme emotional or physical stress.
Treatment depends on the type of cardiomyopathy and the severity of symptoms. Treatments may include lifestyle changes, medications, or surgery. Surgery may include a ventricular assist device or heart transplant. In 2015, cardiomyopathy and myocarditis affected 2.5 million people. Hypertrophic cardiomyopathy affects about 1 in 500 people while dilated cardiomyopathy affects 1 in 2,500. Together they resulted in 354,000 deaths in 2015, up from 294,000 in 1990. Arrhythmogenic right ventricular dysplasia is more common in young people.
Signs and symptoms
Signs and symptoms of cardiomyopathy include:
* Shortness of breath or trouble breathing, especially with physical exertion
* Fatigue
* Swelling in the ankles, feet, legs, abdomen and veins in the neck
* Dizziness
* Lightheadedness
* Fainting during physical activity
* Arrhythmias (abnormal heartbeats)
* Chest pain, especially after physical exertion or heavy meals
* Heart murmurs (unusual sounds associated with heartbeats)
Causes
Cardiomyopathies can be of genetic (familial) or non-genetic (acquired) origin. Genetic cardiomyopathies usually are caused by sarcomere or cytoskeletal diseases, neuromuscular disorders, inborn errors of metabolism, malformation syndromes and sometimes are unidentified. Non-genetic cardiomyopathies can have definitive causes such as viral infections, myocarditis and others.
Cardiomyopathies are either confined to the heart or are part of a generalized systemic disorder, both often leading to cardiovascular death or progressive heart failure-related disability. Other diseases that cause heart muscle dysfunction are excluded, such as coronary artery disease, hypertension, or abnormalities of the heart valves. Often, the underlying cause remains unknown, but in many cases the cause may be identifiable. Alcoholism, for example, has been identified as a cause of dilated cardiomyopathy, as has drug toxicity, and certain infections (including Hepatitis C). Untreated celiac disease can cause cardiomyopathies, which can completely reverse with a timely diagnosis. In addition to acquired causes, molecular biology and genetics have given rise to the recognition of various genetic causes.
A more clinical categorization of cardiomyopathy as 'hypertrophied', 'dilated', or 'restrictive', has become difficult to maintain because some of the conditions could fulfill more than one of those three categories at any particular stage of their development.
The current American Heart Association (AHA) definition divides cardiomyopathies into primary, which affect the heart alone, and secondary, which are the result of illness affecting other parts of the body. These categories are further broken down into subgroups which incorporate new genetic and molecular biology knowledge.
Mechanism
The pathophysiology of cardiomyopathies is better understood at the cellular level with advances in molecular techniques. Mutant proteins can disturb cardiac function in the contractile apparatus (or mechanosensitive complexes). Cardiomyocyte alterations and their persistent responses at the cellular level cause changes that are correlated with sudden cardiac death and other cardiac problems.
Cardiomyopathies vary considerably from person to person, and different factors can cause them in adults and in children. For example, dilated cardiomyopathy in adults is associated with ischemic cardiomyopathy, hypertension, valvular disease, and genetics, while in children neuromuscular diseases such as Becker muscular dystrophy, an X-linked genetic disorder, are directly linked to cardiomyopathy.
Diagnosis
Among the diagnostic procedures done to determine a cardiomyopathy are:
* Physical exam
* Family history
* Blood test
* ECG
* Echocardiogram
* Stress test
* Genetic testing.
Joint Photographic Experts Group
Gist
The JPG, typically pronounced “jay-peg”, was developed by the Joint Photographic Experts Group (JPEG) in 1992. The group recognized a need to make large photo files smaller so that they could be shared more easily. Some quality is compromised when an image is converted to a JPG.
JPGs and JPEGs are the same file format. JPG and JPEG both stand for Joint Photographic Experts Group and are both raster image file types. The only reason JPG is three characters long as opposed to four is that early versions of Windows required a three-letter extension for file names.
JPEG (short for Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.
Summary
The Joint Photographic Experts Group launched the JPG in 1992 with the aim of easing image sharing with only modest losses in quality. Digital cameras and image-capturing devices widely use the JPG format to save pictures.
So, what is a JPG file? It is an image file with the extension ‘.jpg’ or ‘.jpeg.’ Images in the form of screenshots, memes, quotes, profile pictures, passport-size pictures, infographics, and several other types can be saved in JPG format. Several social media platforms support the JPG format for image sharing, including Instagram, Facebook, Twitter, Snapchat, and WhatsApp.
JPEG files use lossy compression, meaning some information from the original image is discarded for the reduced file size. This compression, if done at a high level or multiple times, can result in a loss of image quality.
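That adjustable trade-off is exposed directly by most image tools. A small sketch, assuming the Pillow library and a hypothetical lossless source image, saves the same picture at three quality settings so the file sizes can be compared:

```python
import os
from PIL import Image

img = Image.open("portrait.png").convert("RGB")  # JPEG stores no alpha
for quality in (95, 75, 40):
    name = f"portrait-q{quality}.jpg"
    img.save(name, "JPEG", quality=quality)  # lower quality, smaller file
    print(name, os.path.getsize(name), "bytes")
```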
Content creators, photographers, designers, and artists use the JPG format to upload quality images on their social media platforms. The file format requires less loading time, making it one of the most used image formats among users.
JPG images serve multiple other advantages:
* The portability and compatibility allow easy uploading on web pages and apps regardless of the device and software.
* Support for 24-bit color, up to 16 million colors, provides for high-definition and vibrant images.
* The variable compression enables flexible image size.
* JPEG images decompress quickly and automatically for viewing.
* JPG files may also use related extensions such as ‘.jpe,’ ‘.jfif,’ and ‘.jif.’
Details
JPEG (short for Joint Photographic Experts Group) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.
The Joint Photographic Experts Group created the standard in 1992. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet and later social media. JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.
The MIME media type for JPEG is "image/jpeg," except in older Internet Explorer versions, which provide a MIME type of "image/pjpeg" when uploading JPEG images. JPEG files usually have a filename extension of "jpg" or "jpeg". JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels for an aspect ratio of 1:1. In 2000, the JPEG group introduced a format intended to be a successor, JPEG 2000, but it was unable to replace the original JPEG as the dominant image standard.
JPEG standard
"JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and also other still picture coding standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. Founded in 1986, the group developed the JPEG standard during the late 1980s. The group published the JPEG standard in 1992.
In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. Currently on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1) – titled as Coding of still pictures. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and, in 1994, as ISO/IEC 10918-1.
The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream. The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.
Gist
Morphology is the study of how things are put together, like the make-up of animals and plants, or the branch of linguistics that studies the structure of words.
In morphology, the word part morph- means "form" and -ology means "the study of." So, those who study how something is made or formed are engaged in morphology. In biology, the morphology of fish might investigate how the gills work as part of the respiratory system.
Summary
Morphology, in biology, is the study of the size, shape, and structure of animals, plants, and microorganisms and of the relationships of their constituent parts. The term refers to the general aspects of biological form and arrangement of the parts of a plant or an animal. The term anatomy also refers to the study of biological structure but usually suggests study of the details of either gross or microscopic structure. In practice, however, the two terms are used almost synonymously.
Typically, morphology is contrasted with physiology, which deals with studies of the functions of organisms and their parts; function and structure are so closely interrelated, however, that their separation is somewhat artificial. Morphologists were originally concerned with the bones, muscles, blood vessels, and nerves comprised by the bodies of animals and the roots, stems, leaves, and flower parts comprised by the bodies of higher plants. The development of the light microscope made possible the examination of some structural details of individual tissues and single cells; the development of the electron microscope and of methods for preparing ultrathin sections of tissues created an entirely new aspect of morphology—that involving the detailed structure of cells. Electron microscopy has gradually revealed the amazing complexity of the many structures of the cells of plants and animals. Other physical techniques have permitted biologists to investigate the morphology of complex molecules such as hemoglobin, the gas-carrying protein of blood, and deoxyribonucleic acid (DNA), of which most genes are composed. Thus, morphology encompasses the study of biological structures over a tremendous range of sizes, from the macroscopic to the molecular.
A thorough knowledge of structure (morphology) is of fundamental importance to the physician, to the veterinarian, and to the plant pathologist, all of whom are concerned with the kinds and causes of the structural changes that result from specific diseases.
Details
Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features.
This includes aspects of the outward appearance (shape, structure, colour, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of the internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of gross structure of an organism or taxon and its component parts.
History
The etymology of the word "morphology" is from the Ancient Greek μορφή (morphḗ), meaning "form", and λόγος (lógos), meaning "word, study, research".
While the concept of form in biology, opposed to function, dates back to Aristotle, the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800).
Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Carl Gegenbaur and Ernst Haeckel.
In 1830, Cuvier and Étienne Geoffroy Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution.
Divisions of morphology
Comparative morphology is analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.
Functional morphology is the study of the relationship between the structure and function of morphological features.
Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.
Anatomy is a "branch of morphology that deals with the structure of organisms".
Molecular morphology is a rarely used term, usually referring to the superstructure of polymers such as fiber formation or to larger composite assemblies. The term is commonly not applied to the spatial structure of individual molecules.
Gross morphology refers to the collective structures of an organism as a whole as a general description of the form and structure of an organism, taking into account all of its structures without specifying an individual structure.
Morphology and classification
Most taxa differ morphologically from other taxa. Typically, closely related taxa differ much less than more distantly related ones, but there are exceptions to this. Cryptic species are species which look very similar, or perhaps even outwardly identical, but are reproductively isolated. Conversely, sometimes unrelated taxa acquire a similar appearance as a result of convergent evolution or even mimicry. In addition, there can be morphological differences within a species, such as in Apoica flavissima where queens are significantly smaller than workers. A further problem with relying on morphological data is that what may appear, morphologically speaking, to be two distinct species, may in fact be shown by DNA analysis to be a single species. The significance of these differences can be examined through the use of allometric engineering in which one or both species are manipulated to phenocopy the other species.
A step relevant to the evaluation of morphology between traits/features within species, includes an assessment of the terms: homology and homoplasy. Homology between features indicate that those features have been derived from a common ancestor. Alternatively, homoplasy between features describes those that can resemble each other, but derive independently via parallel or convergent evolution.
3D cell morphology: classification
The invention and development of microscopy enabled the observation of 3-D cell morphology with both high spatial and temporal resolution. The dynamic processes of cell morphology, which are controlled by a complex system, play an important role in various biological processes, such as immune and invasive responses.
Additional Information
Morphology is the study of biological organisms’ structure and organization. Whether one is admiring an organism’s structure or studying individual cells under a microscope, morphology holds the key to understanding life’s numerous structures. Morphology is the study of the physical characteristics of living things.
Examining, assessing, and classifying the shapes, sizes, and forms of individual cells as well as tissues, organs and entire organisms are all part of it. By analyzing morphology, scientists can discover more about the relationships and functions of different parts of a living system.
Definition of Morphology
Morphology is a branch of biology that studies the shape and structure of living organisms.
Morphology Meaning
Morphology is a field of biology that examines an organism’s form, structure, and unique structural characteristics. This encompasses the external morphology (also known as eidonomy) which is the shape, structure, colour, pattern, and size of an object, as well as the internal morphology (also known as anatomy) which is the shape and structure of the internal parts such as bones and organs. In contrast, physiology is concerned largely with function. The study of an organism’s or taxon’s gross structure and its constituent elements is known as morphology in the field of life sciences.
Principles of Morphology
Morphology is an important part of taxonomy, as it uses different characteristics and features to identify various species. Some of the bases on which organisms are morphologically classified are as follows:
Structural Organisation
A fundamental principle of morphology is that organisms possess a certain structural arrangement. This organisation is hierarchical, with smaller units combining to build larger ones. For example, cells make up tissues, tissues make up organs, and organs together make up a body.
Adaptation and Evolution
Another aspect of morphology is the study of how structures have evolved to adapt to their environments. By examining adaptations, scientists can gain a better understanding of how organisms have evolved specific qualities to survive in their environments. This evolutionary approach highlights the ongoing changes that result in the diversity of life which gives morphology an ever-evolving character.
Function and Form
Another key concept is the interaction between form and function. The form and structure of an organ or organism are strongly tied to its function. By examining these interactions, scientists can determine the purpose of each element, providing insight into its complexity.
Category of Morphology
Within the field of morphology, there are multiple levels of study, each concentrating on a different aspect of form and structure. Let’s examine these categories in more detail.
Tissue Morphology
Tissues are groups of cells that work together to perform specific functions. Morphologists carefully study tissues to understand how different cell types cooperate to carry out tasks that are essential to the organism’s existence. For instance, muscle tissue contracts to enable movement, while nerve tissue transmits messages for communication.
Organ Morphology
Moving up the organisational hierarchy, we encounter organs, which are composed of various tissues that work well together. Organ morphology is the study of how these tissues come together to form organs such as the liver, heart, or lungs. Organ morphology provides crucial information on the mechanisms that sustain life.
Cellular Morphology
The study of individual cells and their structures is known as cellular morphology. This involves examining the shapes, sizes, and organelle arrangements of individual cells. An understanding of cellular morphology is crucial for understanding the building blocks of both tissues and organs.
The Whole Organism
At this level, morphologists look at the bigger picture, examining how each part functions together with the others to create a living, breathing organism. This means identifying the characteristics that differentiate each species, such as its external appearance, internal structure, and internal function. This is the highest level of study in morphology.
Comparative Morphology
Comparative morphology studies how different species are structurally similar and how they differ. By comparing morphological features, scientists can discover common ancestry and evolutionary links between organisms. For example, comparing the wing structure of birds and bats shows how convergent evolution occurs: distinct species evolve similar flying capabilities despite having different genetic foundations.
Developmental Morphology
Developmental morphology is the study of how characteristics develop and change throughout an organism’s life cycle. Studying embryonic development can yield important insights into how animals develop from a single cell into a complex multicellular organism. This branch of morphology increases our knowledge of the genetic and environmental factors behind the morphological variations and adaptations observed throughout life.
Conclusion
In summary, understanding the complex framework of life’s forms and structures requires an understanding of morphology. Researchers look into everything from entire species to individual cells to try and find answers to the mysteries of adaptation, evolution and the dynamic interplay of form and function. The morphological categorization principles which provide a framework for understanding the hierarchical structure of living systems, serve as the direction for this inquiry.
As we study tissue morphology, organ morphology, cellular morphology and the study of the complete organism, we discover a great deal about nature. Morphology improves scientific knowledge while igniting curiosity.
Gist
Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves.
Summary
Neurology is a medical specialty concerned with the nervous system and its functional or organic disorders. Neurologists diagnose and treat diseases and disorders of the brain, spinal cord, and nerves.
The first scientific studies of nerve function in animals were performed in the early 18th century by English physiologist Stephen Hales and Scottish physiologist Robert Whytt. Knowledge was gained in the late 19th century about the causes of aphasia, epilepsy, and motor problems arising from brain damage. French neurologist Jean-Martin Charcot and English neurologist William Gowers described and classified many diseases of the nervous system. The mapping of the functional areas of the brain through selective electrical stimulation also began in the 19th century. Despite these contributions, however, most knowledge of the brain and nervous functions came from studies in animals and from the microscopic analysis of nerve cells.
The electroencephalograph (EEG), which records electrical brain activity, was invented in the 1920s by Hans Berger. Development of the EEG, analysis of cerebrospinal fluid obtained by lumbar puncture (spinal tap), and development of cerebral angiography allowed neurologists to increase the precision of their diagnoses and develop specific therapies and rehabilitative measures. Further aiding the diagnosis and treatment of brain disorders were the development of computerized axial tomography (CT) scanning in the early 1970s and magnetic resonance imaging (MRI) in the 1980s, both of which yielded detailed, noninvasive views of the inside of the brain. (See brain scanning.) The identification of chemical agents in the central nervous system and the elucidation of their roles in transmitting and blocking nerve impulses have led to the introduction of a wide array of medications that can correct or alleviate various neurological disorders including Parkinson disease, multiple sclerosis, and epilepsy. Neurosurgery, a medical specialty related to neurology, has also benefited from CT scanning and other increasingly precise methods of locating lesions and other abnormalities in nervous tissues.
Details
Neurology is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves. Neurological practice relies heavily on the field of neuroscience, the scientific study of the nervous system.
A neurologist is a physician specializing in neurology and trained to investigate, diagnose and treat neurological disorders. Neurologists diagnose and treat myriad neurologic conditions, including stroke, epilepsy, movement disorders such as Parkinson's disease, brain infections, autoimmune neurologic disorders such as multiple sclerosis, sleep disorders, brain injury, headache disorders like migraine, tumors of the brain and dementias such as Alzheimer's disease. Neurologists may also have roles in clinical research, clinical trials, and basic or translational research. Neurology is a nonsurgical specialty; its corresponding surgical specialty is neurosurgery.
History
The academic discipline took shape from the 17th century onward with the work and research of many neurologists such as Thomas Willis, Robert Whytt, Matthew Baillie, Charles Bell, Moritz Heinrich Romberg, Duchenne de Boulogne, William A. Hammond, Jean-Martin Charcot, C. Miller Fisher and John Hughlings Jackson. Neo-Latin neurologia appeared in various texts from 1610, denoting an anatomical focus on the nerves (variably understood as vessels), and was most notably used by Willis, who preferred the Greek νευρολογία.
Many neurologists also have additional training or interest in one area of neurology, such as stroke, epilepsy, headache, neuromuscular disorders, sleep medicine, pain management, or movement disorders.
In the United States and Canada, neurologists are physicians who have completed a postgraduate training period known as residency specializing in neurology after graduation from medical school. This additional training period typically lasts four years, with the first year devoted to training in internal medicine. On average, neurologists complete a total of eight to ten years of training. This includes four years of medical school, four years of residency and an optional one to two years of fellowship.
While neurologists may treat general neurologic conditions, some neurologists go on to receive additional training focusing on a particular subspecialty in the field of neurology. These training programs are called fellowships, and are one to two years in duration. Subspecialties include brain injury medicine, clinical neurophysiology, epilepsy, neurodevelopmental disabilities, neuromuscular medicine, pain medicine, sleep medicine, neurocritical care, vascular neurology (stroke), behavioral neurology, child neurology, headache, multiple sclerosis, neuroimaging, neurooncology, and neurorehabilitation.
In Germany, a compulsory year of psychiatry is required to complete a neurology residency.
In the United Kingdom and Ireland, neurology is a subspecialty of general (internal) medicine. After five years of medical school and two years as a Foundation Trainee, an aspiring neurologist must pass the examination for Membership of the Royal College of Physicians (or the Irish equivalent) and complete two years of core medical training before entering specialist training in neurology. Up to the 1960s, some intending to become neurologists would also spend two years working in psychiatric units before obtaining a diploma in psychological medicine. However, that was uncommon and, now that the MRCPsych takes three years to obtain, would no longer be practical. A period of research is essential, and obtaining a higher degree aids career progression; many have found that progression is eased by an attachment to the Institute of Neurology at Queen Square, London. Some neurologists enter the field of rehabilitation medicine (known as physiatry in the US) to specialise in neurological rehabilitation, which may include stroke medicine as well as traumatic brain injuries.
Physical examination
During a neurological examination, the neurologist reviews the patient's health history with special attention to the patient's neurologic complaints. The patient then undergoes a neurological exam. Typically, the exam tests mental status, function of the cranial nerves (including vision), strength, coordination, reflexes, sensation and gait. This information helps the neurologist determine whether the problem exists in the nervous system and establish its clinical localization. Localization of the pathology is the key process by which neurologists develop their differential diagnosis. Further tests may be needed to confirm a diagnosis and ultimately guide therapy and appropriate management. Useful adjunct imaging studies in neurology include CT scanning and MRI. Other tests used to assess muscle and nerve function include nerve conduction studies and electromyography.
Clinical tasks
Neurologists examine patients who are referred to them by other physicians in both the inpatient and outpatient settings. Neurologists begin their interactions with patients by taking a comprehensive medical history, and then performing a physical examination focusing on evaluating the nervous system. Components of the neurological examination include assessment of the patient's cognitive function, cranial nerves, motor strength, sensation, reflexes, coordination, and gait.
In some instances, neurologists may order additional diagnostic tests as part of the evaluation. Commonly employed tests in neurology include imaging studies such as computed axial tomography (CAT) scans, magnetic resonance imaging (MRI), and ultrasound of major blood vessels of the head and neck. Neurophysiologic studies, including electroencephalography (EEG), needle electromyography (EMG), nerve conduction studies (NCSs) and evoked potentials are also commonly ordered. Neurologists frequently perform lumbar punctures to assess characteristics of a patient's cerebrospinal fluid. Advances in genetic testing have made genetic testing an important tool in the classification of inherited neuromuscular disease and diagnosis of many other neurogenetic diseases. The role of genetic influences on the development of acquired neurologic diseases is an active area of research.
Some of the commonly encountered conditions treated by neurologists include headaches, radiculopathy, neuropathy, stroke, dementia, seizures and epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, Parkinson's disease, Tourette's syndrome, multiple sclerosis, head trauma, sleep disorders, neuromuscular diseases, and various infections and tumors of the nervous system. Neurologists are also asked to evaluate unresponsive patients on life support to confirm brain death.
Treatment options vary depending on the neurological problem. They can include referring the patient to a physiotherapist, prescribing medications, or recommending a surgical procedure.
Some neurologists specialize in certain parts of the nervous system or in specific procedures. For example, clinical neurophysiologists specialize in the use of EEG and intraoperative monitoring to diagnose certain neurological disorders. Other neurologists specialize in the use of electrodiagnostic medicine studies – needle EMG and NCSs. In the US, physicians do not typically specialize in all the aspects of clinical neurophysiology – i.e. sleep, EEG, EMG, and NCSs. The American Board of Clinical Neurophysiology certifies US physicians in general clinical neurophysiology, epilepsy, and intraoperative monitoring. The American Board of Electrodiagnostic Medicine certifies US physicians in electrodiagnostic medicine and certifies technologists in nerve-conduction studies. Sleep medicine is a subspecialty field in the US under several medical specialties including anesthesiology, internal medicine, family medicine, and neurology. Neurosurgery is a distinct specialty that involves a different training path, and emphasizes the surgical treatment of neurological disorders.
Also, many nonmedical doctors, those with doctoral degrees (usually PhDs) in subjects such as biology and chemistry, study and research the nervous system. Working in laboratories in universities, hospitals, and private companies, these neuroscientists perform clinical and laboratory experiments and tests to learn more about the nervous system and find cures or new treatments for diseases and disorders.
A great deal of overlap occurs between neuroscience and neurology. Many neurologists work in academic training hospitals, where they conduct research as neuroscientists in addition to treating patients and teaching neurology to medical students.
General caseload
Neurologists are responsible for the diagnosis, treatment, and management of all the conditions mentioned above. When surgical or endovascular intervention is required, the neurologist may refer the patient to a neurosurgeon or an interventional neuroradiologist. In some countries, additional legal responsibilities of a neurologist may include making a finding of brain death when it is suspected that a patient has died. Neurologists frequently care for people with hereditary (genetic) diseases when the major manifestations are neurological, as is frequently the case. Lumbar punctures are frequently performed by neurologists. Some neurologists may develop an interest in particular subfields, such as stroke, dementia, movement disorders, neurointensive care, headaches, epilepsy, sleep disorders, chronic pain management, multiple sclerosis, or neuromuscular diseases.
Overlapping areas
Some overlap also occurs with other specialties, varying from country to country and even within a local geographic area. Acute head trauma is most often treated by neurosurgeons, whereas sequelae of head trauma may be treated by neurologists or specialists in rehabilitation medicine. Although stroke cases have been traditionally managed by internal medicine or hospitalists, the emergence of vascular neurology and interventional neuroradiology has created a demand for stroke specialists. The establishment of Joint Commission-certified stroke centers has increased the role of neurologists in stroke care in many primary, as well as tertiary, hospitals. Some cases of nervous system infectious diseases are treated by infectious disease specialists. Most cases of headache are diagnosed and treated primarily by general practitioners, at least the less severe cases. Likewise, most cases of sciatica are treated by general practitioners, though they may be referred to neurologists or surgeons (neurosurgeons or orthopedic surgeons). Sleep disorders are also treated by pulmonologists and psychiatrists. Cerebral palsy is initially treated by pediatricians, but care may be transferred to an adult neurologist after the patient reaches a certain age. Physical medicine and rehabilitation physicians may treat patients with neuromuscular diseases with electrodiagnostic studies (needle EMG and nerve-conduction studies) and other diagnostic tools. In the United Kingdom and other countries, many of the conditions encountered by older patients such as movement disorders, including Parkinson's disease, stroke, dementia, or gait disorders, are managed predominantly by specialists in geriatric medicine.
Clinical neuropsychologists are often called upon to evaluate brain-behavior relationships for the purpose of assisting with differential diagnosis, planning rehabilitation strategies, documenting cognitive strengths and weaknesses, and measuring change over time (e.g., for identifying abnormal aging or tracking the progression of a dementia).
Relationship to clinical neurophysiology
In some countries, such as the United States and Germany, neurologists may subspecialize in clinical neurophysiology, the field responsible for EEG and intraoperative monitoring, or in electrodiagnostic medicine (nerve conduction studies, EMG, and evoked potentials). In other countries, this is an autonomous specialty (e.g., United Kingdom, Sweden, Spain).
Overlap with psychiatry
In the past, prior to the advent of more advanced diagnostic techniques such as MRI, some neurologists considered psychiatry and neurology to overlap. Although mental illnesses are believed by many to be neurological disorders affecting the central nervous system, traditionally they are classified separately and treated by psychiatrists. In a 2002 review article in the American Journal of Psychiatry, Professor Joseph B. Martin, Dean of Harvard Medical School and a neurologist by training, wrote, "the separation of the two categories is arbitrary, often influenced by beliefs rather than proven scientific observations. And the fact that the brain and mind are one makes the separation artificial anyway".
Neurological disorders often have psychiatric manifestations, such as post-stroke depression, depression and dementia associated with Parkinson's disease, mood and cognitive dysfunctions in Alzheimer's disease, and Huntington disease, to name a few. Hence, the sharp distinction between neurology and psychiatry is not always on a biological basis. The dominance of psychoanalytic theory in the first three-quarters of the 20th century has since then been largely replaced by a focus on pharmacology. Despite the shift to a medical model, brain science has not advanced to a point where scientists or clinicians can point to readily discernible pathological lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder.
Neurological enhancement
The emerging field of neurological enhancement highlights the potential of therapies to improve such things as workplace efficacy, attention in school, and overall happiness in personal lives. However, this field has also given rise to questions about neuroethics.
Additional Information
Neurology is a branch of medical science that is concerned with disorders and diseases of the nervous system. The term neurology comes from a combination of two words - "neuron" meaning nerve and "logia" meaning "the study of".
There are around a hundred billion neurons in the brain, capable of generating their own impulses and of receiving and transmitting impulses from neighbouring cells. Neurology involves the study of:
* The central nervous system, the peripheral nervous system and the autonomic nervous system.
* Structural and functional disorders of the nervous system ranging from birth defects through to degenerative diseases such as Parkinson's disease and Alzheimer's disease.
Mankind has been familiar with disorders of the nervous system for centuries. Parkinson's disease, for example, was described as the "shaking palsy" in 1817. It was only late into the 20th century, however, that a deficiency in the neurotransmitter dopamine was identified as the cause of Parkinson's disease and its symptoms such as tremors and muscle rigidity. Alzheimer's disease was first described in 1906.
Neurology also involves understanding and interpreting imaging and electrical studies. Examples of the imaging studies used include computed tomography (CT) scans and magnetic resonance imaging (MRI) scans. An electroencephalogram (EEG) can be used to assess the electrical activity of the brain in the diagnosis of conditions such as epilepsy. Neurologists also diagnose infections of the nervous system by analysing the cerebrospinal fluid (CSF), a clear fluid that surrounds the brain and spinal cord.
Neurologists complete an undergraduate degree, spend four years at medical school and complete a one-year internship. This is followed by three years of specialized training and, often, additional training in a particular area of the discipline such as stroke, epilepsy or movement disorders. Neurologists are usually physicians, but they may also refer their patients to surgeons specializing in the nervous system, called neurosurgeons.
Some examples of the diseases and disorders neurologists may treat include stroke, Alzheimer's disease, Parkinson's disease, multiple sclerosis, amyotrophic lateral sclerosis, migraine, epilepsy, sleep disorders, pain, tremors, brain and spinal cord injury, peripheral nerve disease and brain tumors.
Gist
Astronomy is the study of everything in the universe beyond Earth's atmosphere. That includes objects we can see with our naked eyes, like the Sun, the Moon, the planets, and the stars. It also includes objects we can only see with telescopes or other instruments, like faraway galaxies and tiny particles.
Summary
Astronomy is the science that encompasses the study of all extraterrestrial objects and phenomena. Until the invention of the telescope and the discovery of the laws of motion and gravity in the 17th century, astronomy was primarily concerned with noting and predicting the positions of the Sun, Moon, and planets, originally for calendrical and astrological purposes and later for navigational uses and scientific interest. The catalog of objects now studied is much broader and includes, in order of increasing distance, the solar system, the stars that make up the Milky Way Galaxy, and other, more distant galaxies. With the advent of scientific space probes, Earth also has come to be studied as one of the planets, though its more-detailed investigation remains the domain of the Earth sciences.
The scope of astronomy
Since the late 19th century, astronomy has expanded to include astrophysics, the application of physical and chemical knowledge to an understanding of the nature of celestial objects and the physical processes that control their formation, evolution, and emission of radiation. In addition, the gases and dust particles around and between the stars have become the subjects of much research. Study of the nuclear reactions that provide the energy radiated by stars has shown how the diversity of atoms found in nature can be derived from a universe that, following the first few minutes of its existence, consisted only of hydrogen, helium, and a trace of lithium. Concerned with phenomena on the largest scale is cosmology, the study of the evolution of the universe. Astrophysics has transformed cosmology from a purely speculative activity to a modern science capable of predictions that can be tested.
Its great advances notwithstanding, astronomy is still subject to a major constraint: it is inherently an observational rather than an experimental science. Almost all measurements must be performed at great distances from the objects of interest, with no control over such quantities as their temperature, pressure, or chemical composition. There are a few exceptions to this limitation—namely, meteorites (most of which are from the asteroid belt, though some are from the Moon or Mars), rock and soil samples brought back from the Moon, samples of comet and asteroid dust returned by robotic spacecraft, and interplanetary dust particles collected in or above the stratosphere. These can be examined with laboratory techniques to provide information that cannot be obtained in any other way. In the future, space missions may return surface materials from Mars, or other objects, but much of astronomy appears otherwise confined to Earth-based observations augmented by observations from orbiting satellites and long-range space probes and supplemented by theory.
Determining astronomical distances
A central undertaking in astronomy is the determination of distances. Without a knowledge of astronomical distances, the size of an observed object in space would remain nothing more than an angular diameter and the brightness of a star could not be converted into its true radiated power, or luminosity. Astronomical distance measurement began with a knowledge of Earth’s diameter, which provided a base for triangulation. Within the inner solar system, some distances can now be better determined through the timing of radar reflections or, in the case of the Moon, through laser ranging. For the outer planets, triangulation is still used. Beyond the solar system, distances to the closest stars are determined through triangulation, in which the diameter of Earth’s orbit serves as the baseline and shifts in stellar parallax are the measured quantities. Stellar distances are commonly expressed by astronomers in parsecs (pc), kiloparsecs, or megaparsecs. (1 pc = 3.086 × 10^18 cm, or about 3.26 light-years [1.92 × 10^13 miles].) Distances can be measured out to around a kiloparsec by trigonometric parallax (see star: Determining stellar distances). The accuracy of measurements made from Earth’s surface is limited by atmospheric effects, but measurements made from the Hipparcos satellite in the 1990s extended the scale to stars as far as 650 parsecs, with an accuracy of about a thousandth of an arc second. The Gaia satellite is expected to measure stars as far away as 10 kiloparsecs to an accuracy of 20 percent. Less-direct measurements must be used for more-distant stars and for galaxies.
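As a minimal illustration of the trigonometric method, the small-angle geometry reduces to a simple reciprocal relation: a star with an annual parallax of p arcseconds lies at a distance of 1/p parsecs. The sketch below applies that relation; the parallax value used is illustrative, not a measurement from this text.

```python
# Minimal sketch: distance from trigonometric parallax.
# Assumes the small-angle relation d (parsecs) = 1 / p (arcseconds),
# which follows from using the Earth-Sun distance as the baseline.

PC_TO_LY = 3.26  # approximate light-years per parsec, as quoted above

def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds."""
    return 1.0 / parallax_arcsec

# Illustrative value: a parallax of 0.77 arcseconds (roughly that of the
# nearest star beyond the Sun).
d_pc = parallax_distance_pc(0.77)
print(f"{d_pc:.2f} pc, or about {d_pc * PC_TO_LY:.2f} light-years")
```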
Two general methods for determining galactic distances are described here. In the first, a clearly identifiable type of star is used as a reference standard because its luminosity has been well determined. This requires observation of such stars that are close enough to Earth that their distances and luminosities have been reliably measured. Such a star is termed a “standard candle.” Examples are Cepheid variables, whose brightness varies periodically in well-documented ways, and certain types of supernova explosions that have enormous brilliance and can thus be seen out to very great distances. Once the luminosities of such nearer standard candles have been calibrated, the distance to a farther standard candle can be calculated from its calibrated luminosity and its actual measured intensity.
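A brief sketch of the standard-candle calculation just described: once a candle's luminosity L has been calibrated, the inverse-square law F = L / (4πd²) yields its distance from the measured flux F. The numbers below are purely illustrative.

```python
import math

# Minimal sketch of the standard-candle method: invert the inverse-square
# law F = L / (4 * pi * d**2) to get d = sqrt(L / (4 * pi * F)).

def standard_candle_distance_m(luminosity_w: float, flux_w_m2: float) -> float:
    """Distance in metres from a calibrated luminosity and a measured flux."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_m2))

# Illustrative numbers only: a candle with roughly the Sun's luminosity
# (~3.8e26 W) observed at a flux of 1e-10 W/m^2.
d_m = standard_candle_distance_m(3.8e26, 1e-10)
print(f"distance = {d_m:.2e} m")
```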
The second method for galactic distance measurements makes use of the observation that the distances to galaxies generally correlate with the speeds with which those galaxies are receding from Earth (as determined from the Doppler shift in the wavelengths of their emitted light). This correlation is expressed in the Hubble law: velocity = H × distance, in which H denotes Hubble’s constant, which must be determined from observations of the rate at which the galaxies are receding. There is widespread agreement that H lies between 67 and 73 kilometres per second per megaparsec (km/sec/Mpc). H has been used to determine distances to remote galaxies in which standard candles have not been found.
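The Hubble-law estimate can be sketched the same way: the code below simply inverts velocity = H × distance. The value of H here is an assumed midpoint of the 67-73 km/sec/Mpc range quoted above, chosen only for illustration.

```python
# Minimal sketch of the Hubble-law distance estimate: distance = velocity / H.

H0 = 70.0  # assumed Hubble constant, km/s per megaparsec (illustrative midpoint)

def hubble_distance_mpc(recession_velocity_km_s: float, h0: float = H0) -> float:
    """Distance in megaparsecs from a galaxy's recession velocity."""
    return recession_velocity_km_s / h0

# A galaxy receding at 7,000 km/s would then lie at roughly 100 Mpc:
print(f"{hubble_distance_mpc(7000.0):.0f} Mpc")
```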
Details
Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry in order to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole.
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars.
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Etymology
Astronomy means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct.
Use of terms "astronomy" and "astrophysics"
"Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties", while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
History
Ancient times
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that may have had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.
Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Egypt, Mesopotamia, Greece, Persia, India, China, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
Middle Ages
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock; the Rectangulus, which allowed for the measurement of angles between planets and other astronomical bodies; and an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (a predecessor of the modern scientific theory of inertia), which was able to show that planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Iranian scholar Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.
It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. In post-classical West Africa, astronomers studied the movement of stars and their relation to the seasons, crafting charts of the heavens as well as precise diagrams of the orbits of the other planets based on complex mathematical calculations. Songhai historian Mahmud Kati documented a meteor shower in August 1583. Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.
Scientific revolution
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars, and more extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.
Observational astronomy
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation. Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
Radio astronomy
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.
Infrared astronomy
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous galactic protostars and their host star clusters. With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.
Optical astronomy
Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly charge-coupled devices (CCDs), and recorded on modern media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), the same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
Ultraviolet astronomy
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.
X-ray astronomy
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10^7 (10 million) kelvins, and thermal emission from thick gases above 10^7 kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.
Gamma-ray astronomy
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.
Fields not based on the electromagnetic spectrum
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole. A second gravitational wave was detected on 26 December 2015, and additional observations continue, but gravitational waves require extremely sensitive instruments.
The combination of observations made using electromagnetic radiation, neutrinos, or gravitational waves with other complementary information is known as multi-messenger astronomy.
Astrometry and celestial mechanics
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects has made it possible to predict close encounters or potential collisions between those objects and Earth.
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, because their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of hypothesized dark matter in the galaxy.
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.
Additional Information
Astronomy is one of the oldest scientific disciplines that has evolved from the humble beginnings of counting stars and charting constellations with the naked eye to the impressive showcase of humankind's technological capabilities that we see today.
Despite the progress astronomy has made over millennia, astronomers are still working hard to understand the nature of the universe and humankind's place in it. That question has only gotten more complex as our understanding of the universe grew with our expanding technical capabilities.
As the depths of the sky opened in front of our increasingly sophisticated telescopes, and sensitive detectors enabled us to spot the weirdest types of signals, the star-studded sky that our ancestors gazed at turned into a zoo of mind-boggling objects including black holes, white dwarfs, neutron stars and supernovas.
At the same time, the two-dimensional constellations that inspired the imagination of early sky-watchers were reduced to an optical illusion, behind which the swirling of galaxies hurtling through spacetime reveals a story that began with the Big Bang some 13.8 billion years ago.
What is astronomy?
Astronomy uses mathematics, physics and chemistry to study celestial objects and phenomena.
What are the four types of astronomy?
Astronomy cannot be divided solely into four types. It is a broad discipline encompassing many subfields including observational astronomy, theoretical astronomy, planetary science, astrophysics, cosmology and astrobiology.
What do you study in astronomy?
Those who study astronomy explore the structure and origin of the universe including the stars, planets, galaxies and black holes that reside in it. Astronomers aim to answer fundamental questions about our universe through theory and observation.
What's the difference between astrology and astronomy?
Astrology is widely considered to be a pseudoscience that attempts to explain how the position and motion of celestial objects such as planets affect people and events on Earth. Astronomy is the scientific study of the universe using mathematics, physics, and chemistry.
Most of today's citizens of planet Earth live surrounded by the inescapable glow of modern urban lighting and can hardly imagine the awe-inspiring presence of the pristine star-studded sky that illuminated the nights for ancient tribes and early civilizations. We can guess how drawn our ancestors were to that overwhelming sight from the role that sky-watching played in their lives.
Ancient monuments, such as the 5,000-year-old Stonehenge in the U.K., were built to reflect the journey of the sun in the sky, which helped keep track of time and organize life in an age that depended entirely on the seasons. Art pieces depicting the moon and stars have been discovered dating back several thousand years, such as the "world's oldest star map," the bronze-age Nebra disk.
Ancient Assyro-Babylonians around 1,000 B.C. systematically observed and recorded periodical motions of celestial bodies, according to the European Space Agency (ESA), and similar records exist also from early China. In fact, according to the University of Oregon, astronomy can be considered the first science as it's the one for which the oldest written records exist.
Ancient Greeks elevated sky-watching to a new level. Aristarchus of Samos made the first (highly inaccurate) attempt to calculate the distances from Earth to the sun and moon, and Hipparchus, sometimes considered the father of empirical astronomy, cataloged the positions of over 800 stars using just the naked eye. He also developed the brightness scale that is still in use today, according to ESA.
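A minimal sketch of the modern, quantified form of that brightness scale (the Pogson magnitude convention, in which 5 magnitudes correspond to a factor of 100 in brightness) is shown below; the flux ratio used is an illustrative value.

```python
import math

# Minimal sketch of the modern magnitude scale descended from Hipparchus:
# m1 - m2 = -2.5 * log10(F1 / F2), so an object 100x brighter has a
# magnitude 5 units smaller (brighter stars get lower magnitudes).

def magnitude_difference(flux_ratio: float) -> float:
    """Magnitude difference m1 - m2 for a brightness (flux) ratio F1/F2."""
    return -2.5 * math.log10(flux_ratio)

print(magnitude_difference(100.0))  # -5.0
```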
Gist
Macroeconomics focuses on the performance of economies – changes in economic output, inflation, interest and foreign exchange rates, and the balance of payments. Poverty reduction, social equity, and sustainable growth are only possible with sound monetary and fiscal policies.
Summary
Macroeconomics is the study of the behaviour of a national or regional economy as a whole. It is concerned with understanding economy-wide events such as the total amount of goods and services produced, the level of unemployment, and the general behaviour of prices.
Unlike microeconomics—which studies how individual economic actors, such as consumers and firms, make decisions—macroeconomics concerns itself with the aggregate outcomes of those decisions. For that reason, in addition to using the tools of microeconomics, such as supply-and-demand analysis, macroeconomists also utilize aggregate measures such as gross domestic product (GDP), unemployment rates, and the consumer price index (CPI) to study the large-scale repercussions of micro-level decisions.
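To make two of these aggregate measures concrete, the sketch below computes an inflation rate from consumer-price-index levels and an unemployment rate from labor-force counts; all figures are hypothetical, chosen only for illustration.

```python
# Minimal sketch of two aggregate measures mentioned above. All numbers
# are hypothetical and purely illustrative.

def inflation_rate(cpi_now: float, cpi_before: float) -> float:
    """Percentage change in the consumer price index between two periods."""
    return 100.0 * (cpi_now - cpi_before) / cpi_before

def unemployment_rate(unemployed_millions: float, labor_force_millions: float) -> float:
    """Unemployed persons as a percentage of the labor force."""
    return 100.0 * unemployed_millions / labor_force_millions

print(f"inflation: {inflation_rate(308.4, 299.2):.1f}%")      # ~3.1%
print(f"unemployment: {unemployment_rate(6.5, 165.0):.1f}%")  # ~3.9%
```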
Early history and the classical school
Although complex macroeconomic structures have been characteristic of human societies since ancient times, the discipline of macroeconomics is relatively new. Until the 1930s most economic analysis was focused on microeconomic phenomena and concentrated primarily on the study of individual consumers, firms and industries. The classical school of economic thought, which derived its main principles from Scottish economist Adam Smith’s theory of self-regulating markets, was the dominant philosophy. Accordingly, such economists believed that economy-wide events such as rising unemployment and recessions are like natural phenomena and cannot be avoided. If left undisturbed, market forces would eventually correct such problems; moreover, any intervention by the government in the operation of free markets would be ineffective at best and destructive at worst.
Keynesianism
The classical view of macroeconomics, which was popularized in the 19th century as laissez-faire, was shattered by the Great Depression, which began in the United States in 1929 and soon spread to the rest of the industrialized Western world. The sheer scale of the catastrophe, which lasted almost a decade and left a quarter of the U.S. workforce without jobs, threatening the economic and political stability of many countries, was sufficient to cause a paradigm shift in mainstream macroeconomic thinking, including a reevaluation of the belief that markets are self-correcting. The theoretical foundations for that change were laid in 1935–36, when the British economist John Maynard Keynes published his monumental work The General Theory of Employment, Interest, and Money. Keynes argued that most of the adverse effects of the Great Depression could have been avoided had governments acted to counter the depression by boosting spending through fiscal policy. Keynes thus ushered in a new era of macroeconomic thought that viewed the economy as something that the government should actively manage. Economists such as Paul Samuelson, Franco Modigliani, James Tobin, Robert Solow, and many others adopted and expanded upon Keynes’s ideas, and as a result the Keynesian school of economics was born.
In contrast to the hands-off approach of classical economists, the Keynesians argued that governments have a duty to combat recessions. Although the ups and downs of the business cycle cannot be completely avoided, they can be tamed by timely intervention. At times of economic crisis, the economy is crippled because there is almost no demand for anything. As businesses’ sales decline, they begin laying off more workers, which causes a further reduction in income and demand, resulting in a prolonged recessionary cycle. Keynesians argued that, because it controls tax revenues, the government has the means to generate demand simply by increasing spending on goods and services during such times of hardship.
Monetarism
In the 1950s the first challenge to the Keynesian school of thought came from the monetarists, who were led by the influential University of Chicago economist Milton Friedman. Friedman proposed an alternative explanation of the Great Depression: he argued that what had started as a recession was turned into a prolonged depression because of the disastrous monetary policies followed by the Federal Reserve System (the central bank of the United States). If the Federal Reserve had started to increase the money supply early on, instead of doing just the opposite, the recession could have been effectively tamed before it got out of control. Over time, Friedman’s ideas were refined and came to be known as monetarism. In contrast to the Keynesian strategy of boosting demand through fiscal policy, monetarists favoured controlled increases in the money supply as a means of fighting off recessions. Beyond that, the government should avoid intervening in free markets and the rest of the economy, according to monetarists.
Later developments
A second challenge to the Keynesian school arose in the 1970s, when the American economist Robert E. Lucas, Jr., laid the foundations of what came to be known as the New Classical school of thought in economics. Lucas’s key contribution was the introduction of the rational-expectations hypothesis. As opposed to the ideas in earlier Keynesian and monetarist models that viewed the individual decision makers in the economy as shortsighted and backward-looking, Lucas argued that decision makers, insofar as they are rational, do not base their decisions solely on current and past data; they also form expectations about the future on the basis of a vast array of information available to them. That fact implies that a change in monetary policy, if it has been predicted by rational agents, will have no effect on real variables such as output and the unemployment rate, because the agents will have acted upon the implications of such a policy even before it is implemented. As a result, predictable changes in monetary policy will result in changes in nominal variables such as prices and wages but will not have any real effects.
Following Lucas’s pioneering work, economists including Finn E. Kydland and Edward C. Prescott developed rigorous macroeconomic models to explain the fluctuations of the business cycle, which came to be known in the macroeconomic literature as real-business-cycle (RBC) models. RBC models were based on strong mathematical foundations and utilized Lucas’s idea of rational expectations. An important outcome of the RBC models was that they were able to explain macroeconomic fluctuations as the product of a myriad of external and internal shocks (unpredictable events that hit the economy). Primarily, they argued that shocks that result from changes in technology can account for the majority of the fluctuations in the business cycle.
The tendency of RBC models to overemphasize technology-driven fluctuations as the primary cause of business cycles and to underemphasize the role of monetary and fiscal policy led to the development of a new Keynesian response in the 1980s. New Keynesians, including John B. Taylor and Stanley Fischer, adopted the rigorous modeling approach introduced by Kydland and Prescott in the RBC literature but expanded it by altering some key underlying assumptions. Previous models had relied on the assumption that nominal variables such as prices and wages are flexible and respond very quickly to changes in supply and demand. However, in the real world, most wages and many prices are locked in by contractual agreements. That fact introduces “stickiness,” or resistance to change, in those economic variables. Because wages and prices tend to be sticky, economic decision makers may react to macroeconomic events by altering other variables. For example, if wages are sticky, businesses will find themselves laying off more workers than they would in an unrealistic environment in which every employee’s salary could be cut in half.
Introducing market imperfections such as wage and price stickiness helped Taylor and Fischer to build macroeconomic models that represented the business cycle more accurately. In particular, they were able to show that in a world of market imperfections such as stickiness, monetary policy will have a direct impact on output and on employment in the short run, until enough time has passed for wages and prices to adjust. Therefore, central banks that control the supply of money can very well influence the business cycle in the short run. In the long run, however, the imperfections become less binding, as contracts can be renegotiated, and monetary policy can influence only prices.
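A compact way to see the stickiness mechanism is to compare how a firm adjusts to a fall in revenue under flexible versus contract-locked wages; the firm, its headcount, and the size of the shock below are purely illustrative:

    # Illustrative firm: revenue falls 20 percent, so the wage bill must
    # shrink to match. Compare flexible-wage and sticky-wage adjustments.
    workers, wage = 100, 50_000.0
    target_bill = workers * wage * (1 - 0.20)    # 4,000,000

    # Flexible wages: every salary adjusts downward; no one is laid off.
    flexible_wage = target_bill / workers        # 40,000.0 per worker

    # Sticky wages: contracts lock the wage, so headcount absorbs the shock.
    sticky_headcount = int(target_bill // wage)  # 80 workers (20 laid off)

    print(flexible_wage, sticky_headcount)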
Following the new Keynesian revolution, macroeconomists seemed to reach a consensus that monetary policy is effective in the short run and can be used as a tool to tame business cycles. Many other macroeconomic models were developed to measure the extent to which monetary policy can influence output. More recently, the impact of the financial crisis of 2007–08 and the Great Recession that followed it, coupled with the fact that many governments adopted a very Keynesian response to those events, brought about a revival of interest in the new Keynesian approach to macroeconomics, which seemed likely to lead to improved theories and better macroeconomic models in the future.
Details
Macroeconomics is a branch of economics that studies the behavior of an overall economy, which encompasses markets, businesses, consumers, and governments. Macroeconomics examines economy-wide phenomena such as inflation, price levels, rate of economic growth, national income, gross domestic product (GDP), and changes in unemployment.
Some of the key questions addressed by macroeconomics include: What causes unemployment? What causes inflation? What creates or stimulates economic growth? Macroeconomics attempts to measure how well an economy is performing, understand what forces drive it, and project how performance can improve.
KEY TAKEAWAYS
* Macroeconomics is the branch of economics that deals with the structure, performance, behavior, and decision-making of the whole, or aggregate, economy.
* The two main areas of macroeconomic research are long-term economic growth and shorter-term business cycles.
* Macroeconomics in its modern form is often defined as starting with John Maynard Keynes and his theories about market behavior and governmental policies in the 1930s; several schools of thought have developed since.
* In contrast to macroeconomics, microeconomics is more focused on the influences on and choices made by individual actors—such as people, companies, and industries—in the economy.
Understanding Macroeconomics
As the term implies, macroeconomics is a field of study that analyzes an economy through a wide lens. This includes looking at variables like unemployment, GDP, and inflation. In addition, macroeconomists develop models explaining the relationships between these factors.
These models, and the forecasts they produce, are used by government entities to aid in constructing and evaluating economic, monetary, and fiscal policy. Businesses use the models to set strategies in domestic and global markets, and investors use them to predict and plan for movements in various asset classes.
Properly applied, economic theories can illuminate how economies function and the long-term consequences of particular policies and decisions. Macroeconomic theory can also help individual businesses and investors make better decisions through a more thorough understanding of the effects of broad economic trends and policies on their own industries.
History of Macroeconomics
While the term "macroeconomics" dates back to the 1940s, many of the field's core concepts have been subjects of study for much longer. Topics like unemployment, prices, growth, and trade have concerned economists since the beginning of the discipline in the 1700s. Elements of earlier work from Adam Smith and John Stuart Mill addressed issues that would now be recognized as the domain of macroeconomics.
In its modern form, macroeconomics is often defined as starting with John Maynard Keynes and his book The General Theory of Employment, Interest, and Money in 1936. In it, Keynes explained the fallout from the Great Depression, when goods went unsold and workers were unemployed.
Throughout the 20th century, Keynesian economics, as Keynes' theories became known, diverged into several other schools of thought.
Before the popularization of Keynes' theories, economists generally did not differentiate between microeconomics and macroeconomics. The same microeconomic laws of supply and demand that operate in individual goods markets were understood to interact between individual markets to bring the economy into a general equilibrium, as described by Leon Walras.
The link between goods markets and large-scale financial variables such as price levels and interest rates was explained through the unique role that money plays in the economy as a medium of exchange by economists such as Knut Wicksell, Irving Fisher, and Ludwig von Mises.
Macroeconomics vs. Microeconomics
Macroeconomics differs from microeconomics, which focuses on smaller factors that affect choices made by individuals. Individuals are typically classified into subgroups, such as buyers, sellers, and business owners. These actors interact with each other according to the laws of supply and demand for resources, using money and interest rates as pricing mechanisms for coordination. Factors studied in both microeconomics and macroeconomics typically influence one another.
A key distinction between microeconomics and macroeconomics is that macroeconomic aggregates can sometimes behave in ways that are very different from, or even opposite to, the behavior of analogous microeconomic variables. For example, Keynes referenced the so-called Paradox of Thrift: on a microeconomic level, individuals save money to build wealth, but when everyone tries to increase savings at once, the resulting drop in spending reduces business revenues and lowers worker pay, contributing to an economic slowdown and less wealth at the aggregate, macroeconomic level.
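The aggregate arithmetic behind the paradox can be made concrete with the textbook Keynesian-cross multiplier; the consumption function and the numbers below are illustrative assumptions rather than anything Keynes specified:

    # Keynesian cross: C = c0 + c1*Y, so equilibrium income is
    # Y = (c0 + I) / (1 - c1), where I is fixed investment.
    def equilibrium_income(c0, c1, investment):
        return (c0 + investment) / (1 - c1)

    c1, I = 0.8, 200.0
    before = equilibrium_income(c0=100.0, c1=c1, investment=I)  # 1500.0
    # Households try to save 20 more at every income level (c0 falls by 20)...
    after = equilibrium_income(c0=80.0, c1=c1, investment=I)    # 1400.0
    # ...so income falls by the multiplier 1/(1 - 0.8) = 5 times 20 = 100,
    # while equilibrium saving (S = Y - C = I) ends up no higher than before.
    print(before, after)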
Limits of Macroeconomics
It is also important to understand the limitations of economic theory. Theories are often created in a vacuum and lack specific real-world details like taxation, regulation, and transaction costs. The real world is also decidedly complicated and includes matters of social preference and conscience that do not lend themselves to mathematical analysis.
It is common to find the phrase ceteris paribus, loosely translated as "all else being equal," in economic theories and discussions. Economists use this phrase to focus on specific relationships between the variables being discussed, while assuming all other variables remain fixed.
Even with the limits of economic theory, it is important and worthwhile to follow significant macroeconomic indicators like GDP, inflation, and unemployment. This is because the performance of companies, and by extension their stocks, is significantly influenced by the economic conditions in which the companies operate.
Likewise, it can be invaluable to understand which theories are currently in favor, and how they may be influencing a particular government administration. Such economic theories can say much about how a government will approach taxation, regulation, government spending, and similar policies. By better understanding economics and the ramifications of economic decisions, investors can get at least a glimpse of the probable future and act accordingly with confidence.
Macroeconomic Schools of Thought
The field of macroeconomics is organized into many different schools of thought, with differing views on how the markets and their participants operate.
Classical
Classical economists held that prices, wages, and rates are flexible and markets tend to clear unless prevented from doing so by government policy; these ideas build on Adam Smith's original theories. “Classical economics” does not actually denote a unified school of macroeconomic thought; the label was applied first by Karl Marx and later by Keynes to describe previous economic thinkers with whom they disagreed.
Keynesian
Keynesian economics was founded mainly based on the works of John Maynard Keynes and was the beginning of macroeconomics as a separate area of study from microeconomics. Keynesians focus on aggregate demand as the principal factor in issues like unemployment and the business cycle.
Keynesian economists believe that the business cycle can be managed by active government intervention through fiscal policy, where governments spend more in recessions to stimulate demand or spend less in expansions to decrease it. They also believe in monetary policy, where a central bank stimulates lending with lower rates or restricts it with higher ones.
Keynesian economists also believe that certain rigidities in the system, particularly sticky prices, prevent the proper clearing of supply and demand.
Monetarist
The Monetarist school is a branch of Keynesian economics credited mainly to the works of Milton Friedman. Working within and extending Keynesian models, Monetarists argue that monetary policy is generally a more effective and desirable policy tool to manage aggregate demand than fiscal policy. However, monetarists also acknowledge limits to monetary policy that make fine-tuning the economy ill-advised and instead tend to prefer adherence to policy rules that promote stable inflation rates.
New Classical
The New Classical school, along with the New Keynesians, is mainly built on integrating microeconomic foundations into macroeconomics to resolve the glaring theoretical contradictions between the two subjects.
The New Classical school emphasizes the importance of microeconomics and models based on that behavior. New Classical economists assume that all agents try to maximize their utility and have rational expectations, which they incorporate into macroeconomic models. New Classical economists believe that unemployment is largely voluntary, that discretionary fiscal policy is destabilizing, and that inflation can be controlled with monetary policy.
New Keynesian
The New Keynesian school also attempts to add microeconomic foundations to traditional Keynesian economic theories. While New Keynesians accept that households and firms operate based on rational expectations, they still maintain that there are a variety of market failures, including sticky prices and wages. Because of this "stickiness," the government can improve macroeconomic conditions through fiscal and monetary policy.
Austrian
The Austrian School is an older school of economics that is seeing some resurgence in popularity. Austrian economic theories mainly apply to microeconomic phenomena. However, like the so-called classical economists, they never strictly separated microeconomics and macroeconomics.
Austrian theories also have important implications for what are otherwise considered macroeconomic subjects. In particular, the Austrian business cycle theory explains broadly synchronized (macroeconomic) swings in economic activity across markets due to monetary policy and the role that money and banking play in linking (microeconomic) markets to each other and across time.
Macroeconomic Indicators
Macroeconomics is a rather broad field, but two specific research areas dominate the discipline. The first area looks at the factors that determine long-term economic growth. The other looks at the causes and consequences of short-term fluctuations in national income and employment, also known as the business cycle.
Economic Growth
Economic growth refers to an increase in aggregate production in an economy. Macroeconomists try to understand the factors that either promote or retard economic growth to support economic policies that will support development, progress, and rising living standards.
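Because growth compounds, small differences in growth rates produce large differences in living standards over long horizons. A quick sketch (the starting level and the rates are arbitrary assumptions):

    # Compound growth: income level after a number of years at annual rate g.
    def grow(level, g, years):
        return level * (1 + g) ** years

    print(round(grow(10_000, 0.02, 50)))  # ~26,916 at 2 percent growth
    print(round(grow(10_000, 0.04, 50)))  # ~71,067 at 4 percent growth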
Economists can use many indicators to measure economic performance. These indicators fall into 10 categories:
Gross Domestic Product indicators: Measure how much the economy produces
Consumer Spending indicators: Measure how much money consumers feed back into the economy
Income and Savings indicators: Measure how much consumers make and save
Industry Performance indicators: Measure GDP by industry
International Trade and Investment indicators: Indicate the balance of payments between trade partners, how much is traded, and how much is invested internationally
Prices and Inflation indicators: Indicate fluctuations in prices paid for goods and services and changes in currency purchasing power
Investment in Fixed Assets indicators: Indicate how much capital is tied up in fixed assets
Employment indicators: Show employment by industry, state, county, and other areas
Government indicators: Show how much the government spends and receives
Special indicators: Include all other economic indicators, such as distribution of personal income, global value chains, healthcare spending, small business well-being, and more
The Business Cycle
Superimposed over long-term macroeconomic growth trends, the levels and rates of change of significant macroeconomic variables such as employment and national output go through fluctuations. These fluctuations are called expansions, peaks, recessions, and troughs—they also occur in that order. When charted on a graph, these fluctuations show that businesses perform in cycles; thus, it is called the business cycle.
The National Bureau of Economic Research (NBER) dates the business cycle, using GDP and Gross National Income to identify its turning points.
The NBER is also the agency that declares the beginning and end of recessions and expansions.
How to Influence Macroeconomics
Because macroeconomics is such a broad area, positively influencing the economy is challenging and takes much longer than changing the individual behaviors within microeconomics. Therefore, economies need to have an entity dedicated to researching and identifying techniques that can influence large-scale changes.
In the U.S., the Federal Reserve is the central bank with a mandate of promoting maximum employment and price stability. These two factors have been identified as essential to positively influencing change at the macroeconomic level.
To influence change, the Fed implements monetary policy through tools it has developed over the years, which work to fulfill its dual mandate. It can use the following tools:
Federal Funds Rate Range: A target range set by the Fed that guides interest rates on overnight lending between depository institutions to boost short-term borrowing
Open Market Operations: Purchase and sell securities on the open market to change the supply of reserves
Discount Window and Rate: Lending to depository institutions to help banks manage liquidity
Reserve Requirements: Maintaining a reserve to help banks maintain liquidity
Interest on Reserve Balances: Encourages banks to hold reserves for liquidity and pays them interest for doing so
Overnight Repurchase Agreement Facility: A supplementary tool used to help control the federal funds rate by selling securities and repurchasing them the next day at a more favorable rate
Term Deposit Facility: Reserve deposits with a term, used to drain reserves from the banking system
Central Bank Liquidity Swaps: Established swap lines for central banks from select countries to improve liquidity conditions in the U.S. and participating countries' central banks
Foreign and International Monetary Authorities Repo Facility: A facility for institutions to enter repurchase agreements with the Fed to act as a backstop for liquidity
Standing Overnight Repurchase Agreement Facility: A facility to encourage or discourage borrowing above a set rate, which helps to control the effective federal funds rate
The Fed continuously updates the tools it uses to influence the economy, so it has a list of many other previously used tools it can implement again if needed.
What is the most important concept in all of macroeconomics?
The most important concept in all of macroeconomics is said to be output, which refers to the total amount of goods and services a country produces. Output is often considered a snapshot of an economy at a given moment.
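Measured from the expenditure side, output is usually expressed through the standard national-accounting identity (a textbook formula, not stated in the passage above):

    Y = C + I + G + (X - M)

where C is consumption, I investment, G government purchases, and X − M net exports.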
What are the 3 Major Concerns of Macroeconomics?
Three major macroeconomic concerns are the unemployment level, inflation, and economic growth.
Why Is Macroeconomics Important?
Macroeconomics helps a government evaluate how an economy is performing and decide on actions it can take to increase or slow growth.
The Bottom Line
Macroeconomics is a field of study used to evaluate overall economic performance and develop actions that can positively affect an economy. Economists work to understand how specific factors and actions affect output, input, spending, consumption, inflation, and employment.
The study of economics began long ago, but the field didn't start evolving into its current form until the 1700s. Macroeconomics now plays a large part in government and business decision-making.
Additional Information
Macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision-making of an economy as a whole. This includes regional, national, and global economies. Macroeconomists study topics such as output/GDP (gross domestic product) and national income, unemployment (including unemployment rates), price indices and inflation, consumption, saving, investment, energy, international trade, and international finance.
Macroeconomics and microeconomics are the two most general fields in economics. The focus of macroeconomics is often on a country (or larger entities like the whole world) and how its markets interact to produce large-scale phenomena that economists refer to as aggregate variables. In microeconomics the focus of analysis is often a single market, such as whether changes in supply or demand are to blame for price increases in the oil and automotive sectors. From introductory classes in "principles of economics" through doctoral studies, the macro/micro divide is institutionalized in the field of economics. Most economists identify as either macro- or micro-economists.
Macroeconomics is traditionally divided into topics along different time frames: the analysis of short-term fluctuations over the business cycle, the determination of structural levels of variables like inflation and unemployment in the medium term (i.e., levels unaffected by short-term deviations), and the study of long-term economic growth. It also studies the consequences of policies targeted at mitigating fluctuations, like fiscal or monetary policy, using taxation and government expenditure or interest rates, respectively, and of policies that can affect living standards in the long term, e.g. by affecting growth rates.
Macroeconomics as a separate field of research and study is generally recognized to start in 1936, when John Maynard Keynes published his The General Theory of Employment, Interest and Money, but its intellectual predecessors are much older. Since World War II, various macroeconomic schools of thought like Keynesians, monetarists, new classical and new Keynesian economists have made contributions to the development of the macroeconomic research mainstream.
Gist
Microeconomics studies the decisions of individuals and firms to allocate resources of production, exchange, and consumption. Microeconomics deals with prices and production in single markets and the interaction between different markets but leaves the study of economy-wide aggregates to macroeconomics.
Summary
Microeconomics is a branch of economics that studies the behaviour of individual consumers and firms. Unlike macroeconomics, which attempts to understand how the collective behaviour of individual agents shapes aggregate economic outcomes, microeconomics focuses on the detailed study of the agents themselves, by using rigorous mathematical techniques to better describe and understand the decision-making mechanisms involved.
The branch of microeconomics that deals with household behaviour is called consumer theory. Consumer theory is built on the concept of utility: the economic measure of happiness, which increases as consumption of certain goods increases. What consumers want to consume is captured by their utility function, which measures the happiness derived from consuming a set of goods. Consumers, however, are also bound by a budget constraint, which limits the number or kinds of goods and services they can purchase. The consumers are modeled as utility maximizers: they will try to purchase the optimal number of goods that maximizes their utility, given their budget.
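As a concrete sketch of utility maximization under a budget constraint, a Cobb-Douglas utility function yields closed-form demands; the functional form, prices, and income below are illustrative assumptions:

    # Cobb-Douglas consumer: U(x, y) = x**a * y**(1 - a), budget px*x + py*y = m.
    # The utility-maximizing bundle spends the share a of income on x, 1 - a on y.
    def demand(a, px, py, m):
        return a * m / px, (1 - a) * m / py

    x, y = demand(a=0.3, px=2.0, py=5.0, m=100.0)
    print(x, y)  # 15.0 units of x, 14.0 units of y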
The branch of microeconomics that deals with firm behaviour is called producer theory. Producer theory views firms as entities that turn inputs—such as capital, land, and labour—into output by using a certain level of technology. Input prices and availability, as well as the level of production technology, bind firms to a certain production capacity. The goal of the firm is to produce the amount of output that maximizes its profits, subject to its input and technology constraints.
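A matching sketch on the producer side: with a decreasing-returns technology, profit is maximized where the marginal revenue product of an input equals its price. The production function and the prices below are illustrative assumptions:

    # Firm with decreasing returns: output = A * L**alpha, with alpha < 1.
    # Profit p*A*L**alpha - w*L is maximized where p*A*alpha*L**(alpha-1) = w.
    def optimal_labour(p, A, alpha, w):
        return (p * A * alpha / w) ** (1 / (1 - alpha))

    print(optimal_labour(p=10.0, A=5.0, alpha=0.5, w=2.5))  # 100.0 units of labour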
Consumers and firms interact with each other across several markets. One such market is the goods market, in which firms make up the supply side and consumers who buy their products make up the demand side. Different goods market structures require microeconomists to adopt different modeling strategies. For example, a firm operating as a monopoly will face different constraints than a firm operating with many competitors in a competitive market. The microeconomist must therefore take the structure of the goods market into account when describing a firm’s behaviour.
Microeconomists constantly strive to improve the accuracy of their models of consumer and firm behaviour. On the consumer side, their efforts include rigorous mathematical modeling of utility that incorporates altruism, habit formation, and other behavioral influences on decision making. Behavioral economics is a field within microeconomics that crosses interdisciplinary boundaries to study the psychological, social, and cognitive aspects of individual decision making by using sophisticated mathematical models and natural experiments.
On the producer side, industrial organization has grown into a field within microeconomics that focuses on the detailed study of the structure of firms and how they operate in different markets. Labour economics, another field of microeconomics, studies the interactions of workers and firms in the labour market.
Details
Microeconomics is the social science that studies the implications of incentives and decisions, specifically how those affect the utilization and distribution of resources on an individual level. Microeconomics shows how and why different goods have different values, how individuals and businesses conduct and benefit from efficient production and exchange, and how individuals best coordinate and cooperate with one another. Generally speaking, microeconomics provides a more detailed understanding of individuals, firms, and markets, whereas macroeconomics provides a more aggregate view of economies.
KEY TAKEAWAYS
* Microeconomics studies the decisions of individuals and firms to allocate resources of production, exchange, and consumption.
* Microeconomics deals with prices and production in single markets and the interaction between different markets but leaves the study of economy-wide aggregates to macroeconomics.
* Microeconomists formulate various types of models based on logic and observed human behavior and test the models against real-world observations.
Understanding Microeconomics
Microeconomics is the study of what is likely to happen—also known as tendencies—when individuals make choices in response to changes in incentives, prices, resources, or methods of production. Individual actors are often grouped into microeconomic subgroups, such as buyers, sellers, and business owners. These groups create the supply and demand for resources, using money and interest rates as a pricing mechanism for coordination.
The Uses of Microeconomics
Microeconomics can be applied in a positive or normative sense. Positive microeconomics describes economic behavior and explains what to expect if certain conditions change. If a manufacturer raises the prices of cars, positive microeconomics says consumers will tend to buy fewer cars than before. If a major copper mine collapses in South America, the price of copper will tend to increase, because supply is restricted. Positive microeconomics could help an investor see why Apple Inc. stock prices might fall if consumers buy fewer iPhones. It could also explain why a higher minimum wage might force The Wendy's Company to hire fewer workers.
These explanations, conclusions, and predictions of positive microeconomics can then also be applied normatively to prescribe what people, businesses, and governments should do in order to attain the most valuable or beneficial patterns of production, exchange, and consumption among market participants. This extension of the implications of microeconomics from what is to what ought to be or what people ought to do also requires at least the implicit application of some sort of ethical or moral theory or principles, which usually means some form of utilitarianism.
Method of Microeconomics
Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).
The Marshallian and Walrasian methods fall under the larger umbrella of neoclassical microeconomics. Neoclassical economics focuses on how consumers and producers make rational choices to maximize their economic well-being, subject to the constraints of how much income and resources they have available.
Neoclassical economists make simplifying assumptions about markets—such as perfect knowledge, infinite numbers of buyers and sellers, homogeneous goods, or static variable relationships—in order to construct mathematical models of economic behavior. These methods attempt to represent human behavior in functional mathematical language, which allows economists to develop mathematically testable models of individual markets. Neoclassicals believe in constructing measurable hypotheses about economic events, then using empirical evidence to see which hypotheses work best. In this way, they follow in the “logical positivism” or “logical empiricism” branch of philosophy. Microeconomics applies a range of research methods, depending on the question being studied and the behaviors involved.
Basic Concepts of Microeconomics
The study of microeconomics involves several key concepts, including (but not limited to):
* Incentives and behaviors: How people, as individuals or in firms, react to the situations with which they are confronted.
* Utility theory: Consumers will choose to purchase and consume a combination of goods that will maximize their happiness or “utility,” subject to the constraint of how much income they have available to spend.
* Production theory: This is the study of production, or the process of converting inputs into outputs. Producers seek to choose the combination of inputs and methods of combining them that will minimize cost in order to maximize their profits.
* Price theory: Utility and production theory interact to produce the theory of supply and demand, which determines prices in a competitive market. In a perfectly competitive market, the theory concludes that the price demanded by consumers is the same as the price supplied by producers, resulting in economic equilibrium (illustrated in the sketch below).
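A minimal linear instance of the market clearing described in the price-theory item above (the coefficients are arbitrary assumptions):

    # Linear market: demand Qd = 120 - 2P, supply Qs = 30 + P.
    # Setting Qd = Qs gives the equilibrium price and quantity.
    a, b = 120.0, 2.0   # demand intercept and slope
    c, d = 30.0, 1.0    # supply intercept and slope

    p_star = (a - c) / (b + d)   # 30.0
    q_star = a - b * p_star      # 60.0
    print(p_star, q_star)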
Where Is Microeconomics Used?
Microeconomics has a wide variety of uses. For example, policymakers may use microeconomics to understand the effect of setting a minimum wage or subsidizing production of certain commodities. Businesses may use it to analyze pricing or production choices. Individuals may use it to assess purchasing and spending decisions.
What is Utility in Microeconomics?
In the field of microeconomics, utility refers to the degree of satisfaction that an individual receives when making an economic decision. The concept is important because decision-makers are often assumed to seek maximum utility when making choices within a market.
How Important Is Microeconomics in Our Daily Life?
Microeconomics is critical to daily life, even in ways that may not be evident to those engaging in it. Take, for example, the case of someone who is looking to buy a car. Microeconomic principles play a central role in individual decision-making. They will likely consider various incentives, such as rebates or low interest rates, when assessing whether or not to purchase a vehicle. They will likely select a make and model based on maximizing utility while also staying within their income constraints. On the other side of the scenario, a car company will have made similar microeconomic considerations in the production and supply of cars into the market.
The Bottom Line
Microeconomics is a field of study focused on the decision-making of individuals and firms within economies. This is in contrast with macroeconomics, a field that examines economies on a broader level. Microeconomics may look at the incentives that may influence individuals to make certain purchases, how they seek to maximize utility, and how they react to restraints. For firms, microeconomics may look at how producers decide what to produce, in what quantities, and what inputs to use based on minimizing costs and maximizing profits. Microeconomists formulate various types of models based on logic and observed human behavior and test the models against real-world observations.
Additional Information
Microeconomics is a branch of economics that studies the behavior of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. Microeconomics focuses on the study of individual markets, sectors, or industries as opposed to the national economy as a whole, which is studied in macroeconomics.
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results.
While microeconomics focuses on firms and individuals, macroeconomics focuses on the sum total of economic activity, dealing with the issues of growth, inflation, and unemployment—and with national policies relating to these issues. Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations—i.e., based upon basic assumptions about micro-level behavior.
Assumptions and definitions
Microeconomic study historically has been performed according to general equilibrium theory, developed by Léon Walras in Elements of Pure Economics (1874) and partial equilibrium theory, introduced by Alfred Marshall in Principles of Economics (1890).
Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive.
The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, doing so would make comparative statics impossible, since there is no guarantee that the resulting utility function would be differentiable.
Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of LNS (local non-satiation), there is no guarantee that a utility-maximizing individual would spend his or her entire budget; under LNS, any bundle that leaves income unspent is dominated by a nearby affordable bundle that is strictly preferred. With the necessary tools and assumptions in place, the utility maximization problem (UMP) is developed.
The utility maximization problem is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists to not only explain what or how individuals make choices but why individuals make choices as well.
The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists. That is, since the budget set is both bounded and closed (and hence compact), a solution to the utility maximization problem exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.
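In standard notation, consistent with the description above, the UMP and its solution can be written as (a textbook formulation):

    x(p, w) \;=\; \arg\max_{x \,\in\, \mathbb{R}^{n}_{+}} \; u(x) \quad \text{subject to} \quad p \cdot x \le w

where p is the vector of prices, w the consumer's wealth, and x(p, w) the resulting Walrasian demand function (a correspondence when the maximizer is not unique).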
The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as the primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as the primitive. This model of microeconomic theory is referred to as revealed preference theory.
The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply equation of a good. However, the theory works well in situations meeting these assumptions.
Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had previously existed.
This is studied in the field of collective action and public choice theory. "Optimal welfare" usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the Utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics (microeconomics) is limited in implications without mixing the belief of the economist and their theory.
The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.
Allocation of scarce resources
Individuals and firms need to allocate limited resources to ensure all agents in the economy are well off. Firms decide which goods and services to produce by weighing the costs of labour, materials, and capital against potential profit margins. Consumers choose the goods and services they want that will maximize their happiness, taking into account their limited wealth.
The government can make these allocation decisions, or they can be made independently by consumers and firms. For example, in the former Soviet Union, the government played a part in informing car manufacturers which cars to produce and determining which consumers would gain access to a car.
Gist
Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population.
Summary
Eugenics is the selection of desired heritable characteristics in order to improve future generations, typically in reference to humans. The term eugenics was coined in 1883 by British explorer and natural scientist Francis Galton, who, influenced by Charles Darwin’s theory of natural selection, advocated a system that would allow “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable.” Social Darwinism, the popular theory in the late 19th century that life for humans in society was ruled by “survival of the fittest,” helped advance eugenics into serious scientific study in the early 1900s. By World War I many scientific authorities and political leaders supported eugenics. However, it ultimately failed as a science in the 1930s and ’40s, when the assumptions of eugenicists became heavily criticized and the Nazis used eugenics to support the extermination of entire races.
Early history
Although eugenics as understood today dates from the late 19th century, efforts to select matings in order to secure offspring with desirable traits date from ancient times. Plato’s Republic (c. 378 BCE) depicts a society where efforts are undertaken to improve human beings through selective breeding. Later, Italian philosopher and poet Tommaso Campanella, in City of the Sun (1623), described a utopian community in which only the socially elite are allowed to procreate. Galton, in Hereditary Genius (1869), proposed that a system of arranged marriages between men of distinction and women of wealth would eventually produce a gifted race. In 1865 the basic laws of heredity were discovered by the father of modern genetics, Gregor Mendel. His experiments with peas demonstrated that each physical trait was the result of a combination of two units (now known as genes) and could be passed from one generation to another. However, his work was largely ignored until its rediscovery in 1900. This fundamental knowledge of heredity provided eugenicists—including Galton, who was influenced by his cousin Charles Darwin—with scientific evidence to support the improvement of humans through selective breeding.
The advancement of eugenics was concurrent with an increasing appreciation of Darwin’s account for change or evolution within society—what contemporaries referred to as social Darwinism. Darwin had concluded his explanations of evolution by arguing that the greatest step humans could make in their own history would occur when they realized that they were not completely guided by instinct. Rather, humans, through selective reproduction, had the ability to control their own future evolution. A language pertaining to reproduction and eugenics developed, leading to terms such as positive eugenics, defined as promoting the proliferation of “good stock,” and negative eugenics, defined as prohibiting marriage and breeding between “defective stock.” For eugenicists, nature was far more contributory than nurture in shaping humanity.
During the early 1900s eugenics became a serious scientific study pursued by both biologists and social scientists. They sought to determine the extent to which human characteristics of social importance were inherited. Among their greatest concerns were the predictability of intelligence and certain deviant behaviours. Eugenics, however, was not confined to scientific laboratories and academic institutions. It began to pervade cultural thought around the globe, including the Scandinavian countries, most other European countries, North America, Latin America, Japan, China, and Russia. In the United States the eugenics movement began during the Progressive Era and remained active through 1940. It gained considerable support from leading scientific authorities such as zoologist Charles B. Davenport, plant geneticist Edward M. East, and geneticist and Nobel Prize laureate Hermann J. Muller. Political leaders in favour of eugenics included U.S. Pres. Theodore Roosevelt, Secretary of State Elihu Root, and Associate Justice of the Supreme Court John Marshall Harlan. Internationally, there were many individuals whose work supported eugenic aims, including British scientists J.B.S. Haldane and Julian Huxley and Russian scientists Nikolay K. Koltsov and Yury A. Filipchenko.
Galton had endowed a research fellowship in eugenics in 1904 and, in his will, provided funds for a chair of eugenics at University College, London. The fellowship and later the chair were occupied by Karl Pearson, a brilliant mathematician who helped to create the science of biometry, the statistical aspects of biology. Pearson was a controversial figure who believed that environment had little to do with the development of mental or emotional qualities. He felt that the high birth rate of the poor was a threat to civilization and that the “higher” races must supplant the “lower.” His views gave countenance to those who believed in racial and class superiority. Thus, Pearson shares the blame for the discredit later brought on eugenics.
In the United States, the Eugenics Record Office (ERO) was opened at Cold Spring Harbor, Long Island, New York, in 1910 with financial support from the legacy of railroad magnate Edward Henry Harriman. Whereas ERO efforts were officially overseen by Charles B. Davenport, director of the Station for Experimental Study of Evolution (one of the biology research stations at Cold Spring Harbor), ERO activities were directly superintended by Harry H. Laughlin, a professor from Kirksville, Missouri. The ERO was organized around a series of missions. These missions included serving as the national repository and clearinghouse for eugenics information, compiling an index of traits in American families, training fieldworkers to gather data throughout the United States, supporting investigations into the inheritance patterns of particular human traits and diseases, advising on the eugenic fitness of proposed marriages, and communicating all eugenic findings through a series of publications. To accomplish these goals, further funding was secured from the Carnegie Institution of Washington, John D. Rockefeller, Jr., the Battle Creek Race Betterment Foundation, and the Human Betterment Foundation.
Prior to the founding of the ERO, eugenics work in the United States was overseen by a standing committee of the American Breeder’s Association (eugenics section established in 1906), chaired by ichthyologist and Stanford University president David Starr Jordan. Research from around the globe was featured at three international congresses, held in 1912, 1921, and 1932. In addition, eugenics education was monitored in Britain by the English Eugenics Society (founded by Galton in 1907 as the Eugenics Education Society) and in the United States by the American Eugenics Society.
Following World War I, the United States gained status as a world power. A concomitant fear arose that if the healthy stock of the American people became diluted with socially undesirable traits, the country’s political and economic strength would begin to crumble. The maintenance of world peace by fostering democracy, capitalism, and, at times, eugenics-based schemes was central to the activities of “the Internationalists,” a group of prominent American leaders in business, education, publishing, and government. One core member of this group, the New York lawyer Madison Grant, aroused considerable pro-eugenic interest through his best-selling book The Passing of the Great Race (1916). Beginning in 1920, a series of congressional hearings was held to identify problems that immigrants were causing the United States. As the country’s “eugenics expert,” Harry Laughlin provided tabulations showing that certain immigrants, particularly those from Italy, Greece, and Eastern Europe, were significantly overrepresented in American prisons and institutions for the “feebleminded.” Further data were construed to suggest that these groups were contributing too many genetically and socially inferior people. Laughlin’s classification of these individuals included the feebleminded, the insane, the criminalistic, the epileptic, the inebriate, the diseased—including those with tuberculosis, leprosy, and syphilis—the blind, the deaf, the deformed, the dependent, chronic recipients of charity, paupers, and “ne’er-do-wells.” Racial overtones also pervaded much of the British and American eugenics literature. In 1923 Laughlin was sent by the U.S. secretary of labour as an immigration agent to Europe to investigate the chief emigrant-exporting nations. Laughlin sought to determine the feasibility of a plan whereby every prospective immigrant would be interviewed before embarking to the United States. He provided testimony before Congress that ultimately led to a new immigration law in 1924 that severely restricted the annual immigration of individuals from countries previously claimed to have contributed excessively to the dilution of American “good stock.”
Immigration control was but one method to control eugenically the reproductive stock of a country. Laughlin appeared at the centre of other U.S. efforts to provide eugenicists greater reproductive control over the nation. He approached state legislators with a model law to control the reproduction of institutionalized populations. By 1920, two years before the publication of Laughlin’s influential Eugenical Sterilization in the United States (1922), 3,200 individuals across the country were reported to have been involuntarily sterilized. That number tripled by 1929, and by 1938 more than 30,000 people were claimed to have met this fate. More than half of the states adopted Laughlin’s law, with California, Virginia, and Michigan leading the sterilization campaign. Laughlin’s efforts secured staunch judicial support in 1927. In the precedent-setting case of Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes, Jr., upheld the Virginia statute and claimed, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.”
Popular support for eugenics
During the 1930s eugenics gained considerable popular support across the United States. Hygiene courses in public schools and eugenics courses in colleges spread eugenic-minded values to many. A eugenics exhibit titled “Pedigree-Study in Man” was featured at the Chicago World’s Fair in 1933–34. Consistent with the fair’s “Century of Progress” theme, stations were organized around efforts to show how favourable traits in the human population could best be perpetuated. Contrasts were drawn between the emulative presidential Roosevelt family and the degenerate “Ishmael” family (one of several pseudonymous family names used, the rationale for which was not given). By studying the passage of ancestral traits, fairgoers were urged to adopt the progressive view that responsible individuals should pursue marriage ever mindful of eugenics principles. Booths were set up at county and state fairs promoting “fitter families” contests, and medals were awarded to eugenically sound families. Drawing again upon long-standing eugenic practices in agriculture, popular eugenic advertisements claimed it was about time that humans received the same attention in the breeding of better babies that had been given to livestock and crops for centuries.
Anti-eugenics sentiment
Anti-eugenics sentiment began to appear after 1910 and intensified during the 1930s. Most commonly it was based on religious grounds. For example, the 1930 papal encyclical Casti connubii condemned reproductive sterilization, though it did not specifically prohibit positive eugenic attempts to amplify the inheritance of beneficial traits. Many Protestant writings sought to reconcile age-old Christian warnings about the heritable sins of the father to pro-eugenic ideals. Indeed, most of the religion-based popular writings of the period supported positive means of improving the physical and moral makeup of humanity.
In the early 1930s Nazi Germany adopted American measures to identify and selectively reduce the presence of those deemed to be “socially inferior” through involuntary sterilization. A rhetoric of positive eugenics in the building of a master race pervaded Rassenhygiene (racial hygiene) movements. When Germany extended its practices far beyond sterilization in efforts to eliminate the Jewish and other non-Aryan populations, the United States became increasingly concerned over its own support of eugenics. Many scientists, physicians, and political leaders began to denounce the work of the ERO publicly. After considerable reflection, the Carnegie Institution formally closed the ERO at the end of 1939.
During the aftermath of World War II, eugenics became stigmatized such that many individuals who had once hailed it as a science now spoke disparagingly of it as a failed pseudoscience. Eugenics was dropped from organization and publication names. In 1954 Britain’s Annals of Eugenics was renamed Annals of Human Genetics. In 1972 the American Eugenics Society adopted the less-offensive name Society for the Study of Social Biology. Its publication, once popularly known as the Eugenics Quarterly, had already been renamed Social Biology in 1969.
U.S. Senate hearings in 1973, chaired by Sen. Ted Kennedy, revealed that thousands of U.S. citizens had been sterilized under federally supported programs. The U.S. Department of Health, Education, and Welfare proposed guidelines encouraging each state to repeal its sterilization laws. Other countries, most notably China, have continued to support eugenics-directed programs openly in order to shape the genetic makeup of their future populations.
The “new eugenics”
Despite the dropping of the term eugenics, eugenic ideas remained prevalent in many issues surrounding human reproduction. Medical genetics, a post-World War II medical specialty, encompasses a wide range of health concerns, from genetic screening and counseling to fetal gene manipulation and the treatment of adults suffering from hereditary disorders. Because certain diseases (e.g., hemophilia and Tay-Sachs disease) are now known to be genetically transmitted, many couples choose to undergo genetic screening, in which they learn the chances that their offspring have of being affected by some combination of their hereditary backgrounds. Couples at risk of passing on genetic defects may opt to remain childless or to adopt children. Furthermore, it is now possible to diagnose certain genetic defects in the unborn. Many couples choose to terminate a pregnancy that involves a genetically disabled offspring. These developments have reinforced the eugenic aim of identifying and eliminating undesirable genetic material.
Counterbalancing this trend, however, has been medical progress that enables victims of many genetic diseases to live fairly normal lives. Direct manipulation of harmful genes is also being studied. If perfected, it could obviate eugenic arguments for restricting reproduction among those who carry harmful genes. Such conflicting innovations have complicated the controversy surrounding what many call the “new eugenics.” Moreover, suggestions for expanding eugenics programs, which range from the creation of sperm banks for the genetically superior to the potential cloning of human beings, have met with vigorous resistance from the public, which often views such programs as unwarranted interference with nature or as opportunities for abuse by authoritarian regimes.
Applications of the Human Genome Project are often referred to as “Brave New World” genetics or the “new eugenics,” in part because they have helped to dramatically increase knowledge of human genetics. In addition, 21st-century technologies such as gene editing, which can potentially be used to treat disease or to alter traits, have further renewed concerns. However, the ethical, legal, and social implications of such tools are monitored much more closely than were early 20th-century eugenics programs. Applications generally are more focused on the reduction of genetic diseases than on improving intelligence.
Still, with or without the use of the term, many eugenics-related concerns are reemerging as a new group of individuals decide how to regulate the application of genetics science and technology. This gene-directed activity, in attempting to improve upon nature, may not be that distant from what Galton implied in 1909 when he described eugenics as the “study of agencies, under social control, which may improve or impair” future generations.
Details
Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have attempted to alter human gene pools by excluding people and groups judged to be inferior or promoting those judged to be superior. In recent years, the term has seen a revival in bioethical discussions on the usage of new technologies such as CRISPR and genetic screening, with heated debate around whether these technologies should be considered eugenics or not.
The concept predates the term; Plato suggested applying the principles of selective breeding to humans around 400 BCE. Early advocates of eugenics in the 19th century regarded it as a way of improving groups of people. In contemporary usage, the term eugenics is closely associated with scientific racism. Modern bioethicists who advocate new eugenics characterize it as a way of enhancing individual traits, regardless of group membership.
While eugenic principles have been practiced as early as ancient Greece, the contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g., Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Such programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. Those deemed "unfit to reproduce" often included people with mental or physical disabilities, people who scored in the low ranges on different IQ tests, criminals and "deviants", and members of disfavored minority groups.
The eugenics movement became associated with Nazi Germany and the Holocaust when the defense of many of the defendants at the Nuremberg trials of 1945 to 1946 attempted to justify their human-rights abuses by claiming there was little difference between the Nazi eugenics programs and the US eugenics programs. In the decades following World War II, with more emphasis on human rights, many countries began to abandon eugenics policies, although some Western countries (the United States, Canada, and Sweden among them) continued to carry out forced sterilizations. Since the 1980s and 1990s, with new assisted reproductive technology procedures available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), concern has grown about the possible revival of a more potent form of eugenics after decades of promoting human rights.
A criticism of eugenics policies is that, regardless of whether negative or positive policies are used, they are susceptible to abuse because the genetic selection criteria are determined by whichever group holds political power at the time. Furthermore, many criticize negative eugenics in particular as a violation of basic human rights, which, since the 1968 Proclamation of Tehran, have been recognized as including the right to reproduce. Another criticism is that eugenics policies eventually lead to a loss of genetic diversity and, ultimately, to inbreeding depression. Yet another criticism of contemporary eugenics policies is that they propose to permanently and artificially disrupt millions of years of human evolution, and that attempting to create genetic lines "clean" of "disorders" can have far-reaching downstream effects in the genetic ecology, including negative effects on immunity and on species resilience. Eugenics also appears frequently in popular media, as in series like Resident Evil.
Modern eugenics
Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a back door to eugenics. This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products".
In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believed that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask what the ethical differences were between breeding for ability and training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.
Lee Kuan Yew, the founding father of Singapore, promoted eugenics as late as 1983. A proponent of nature over nurture, he stated that "intelligence is 80% nature and 20% nurture", and attributed the successes of his children to genetics. In his speeches, Lee urged highly educated women to have more children, claiming that "social delinquents" would dominate unless their fertility rate increased. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. In 1985, incentives were significantly reduced after public uproar.
In October 2015, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th-century eugenics movements. However, the committee noted that genetic engineering remains problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology.
The National Human Genome Research Institute describes eugenics as "inaccurate" and "scientifically erroneous and immoral".
Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.
Prenatal screening has been called by some a contemporary form of eugenics because it may lead to abortions of fetuses with undesirable traits.
California State Senator Nancy Skinner proposed a system to compensate victims of the well-documented prison sterilizations carried out under California's eugenics programs, but the bill did not pass the Legislature by its 2018 deadline.
Meanings and types
The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, drawing on the recent work of his half-cousin Charles Darwin. Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.
The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu ("good" or "well") and the suffix -genēs ("born"); Galton intended it to replace the word "stirpiculture", which he had used previously but which had come to be mocked due to its perceived sexual overtones. Galton defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations".
The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.
Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today.
Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. Black states the following about the pseudoscientific past of eugenics: "As American eugenic pseudoscience thoroughly infused the scientific journals of the first three decades of the twentieth century, Nazi-era eugenics placed its unmistakable stamp on the medical literature of the twenties, thirties and forties." Black says that eugenics was the pseudoscience aimed at "improving" the human race, used by Adolf Hitler to "try to legitimize his anti-Semitism by medicalizing it, and wrapping it in the more palatable pseudoscientific facade of eugenics."
Early eugenicists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. These included Karl Pearson and Walter Weldon, who worked on this at University College London. In his lecture "Darwinism, Medical Progress and Eugenics", Pearson claimed that everything concerning eugenics fell into the field of medicine.
Eugenic policies have been conceptually divided into two categories. Positive eugenics aims to encourage reproduction among the genetically advantaged, for example the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aims to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable"; methods have included abortion, sterilization, and other forms of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit.
Gist
The word geology means 'study of the Earth'. Also known as geoscience or Earth science, geology is the primary Earth science and looks at how the Earth formed, its structure and composition, and the types of processes acting on it.
Summary
Geology comprises the fields of study concerned with the solid Earth. Included are sciences such as mineralogy, geodesy, and stratigraphy.
An introduction to the geochemical and geophysical sciences logically begins with mineralogy, because Earth's rocks are composed of minerals: inorganic elements or compounds that have a fixed chemical composition and that are made up of regularly aligned rows of atoms. Today one of the principal concerns of mineralogy is the chemical analysis of some 3,000 known minerals that are the chief constituents of the three different rock types: sedimentary (formed by diagenesis of sediments deposited by surface processes); igneous (crystallized from magmas either at depth or at the surface as lavas); and metamorphic (formed by a recrystallization process at temperatures and pressures in the Earth's crust high enough to destabilize the parent sedimentary or igneous material). Geochemistry is the study of the composition of these different types of rocks.
During mountain building, rocks become highly deformed, and the primary objective of structural geology is to elucidate the mechanism of formation of the many types of structures (e.g., folds and faults) that arise from such deformation. The allied field of geophysics has several subdisciplines, which make use of different instrumental techniques. Seismology, for example, involves the exploration of the Earth's deep structure through the detailed analysis of recordings of elastic waves generated by earthquakes and man-made explosions. Earthquake seismology has largely been responsible for defining the location of major plate boundaries and the dip of subduction zones down to depths of about 700 kilometres at those boundaries. In other subdisciplines of geophysics, gravimetric techniques are used to determine the shape and size of underground structures; electrical methods help to locate a variety of mineral deposits that tend to be good conductors of electricity; and paleomagnetism has played the principal role in tracking the drift of continents.
Geomorphology is concerned with the surface processes that create the landscapes of the world—namely, weathering and erosion. Weathering is the alteration and breakdown of rocks at the Earth’s surface caused by local atmospheric conditions, while erosion is the process by which the weathering products are removed by water, ice, and wind. The combination of weathering and erosion leads to the wearing down or denudation of mountains and continents, with the erosion products being deposited in rivers, internal drainage basins, and the oceans. Erosion is thus the complement of deposition. The unconsolidated accumulated sediments are transformed by the process of diagenesis and lithification into sedimentary rocks, thereby completing a full cycle of the transfer of matter from an old continent to a young ocean and ultimately to the formation of new sedimentary rocks. Knowledge of the processes of interaction of the atmosphere and the hydrosphere with the surface rocks and soils of the Earth’s crust is important for an understanding not only of the development of landscapes but also (and perhaps more importantly) of the ways in which sediments are created. This in turn helps in interpreting the mode of formation and the depositional environment of sedimentary rocks. Thus the discipline of geomorphology is fundamental to the uniformitarian approach to the Earth sciences according to which the present is the key to the past.
Geologic history provides a conceptual framework and overview of the evolution of the Earth. An early development of the subject was stratigraphy, the study of order and sequence in bedded sedimentary rocks. Stratigraphers still use the two main principles established by the late 18th-century English engineer and surveyor William Smith, regarded as the father of stratigraphy: (1) that younger beds rest upon older ones and (2) that different sedimentary beds contain different and distinctive fossils, enabling beds with similar fossils to be correlated over large distances. Today biostratigraphy uses fossils to characterize successive intervals of geologic time, but as relatively precise time markers only back to the beginning of the Cambrian Period, about 540,000,000 years ago. The geologic time scale, back to the oldest rocks, some 4,280,000,000 years ago, can be quantified by isotopic dating techniques. This is the science of geochronology, which in recent years has revolutionized scientific perception of Earth history and which relies heavily on the measured parent-to-daughter ratio of radiogenic isotopes.
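The parent-to-daughter calculation at the heart of geochronology is compact enough to sketch directly. The snippet below is a minimal illustration, assuming a closed system with no daughter isotope present at formation; the function name and the sample ratio are invented for the example, while the uranium-238 half-life is the commonly cited value.

```python
import math

def isotopic_age(daughter_parent_ratio, half_life_years):
    """Age in years from a radiogenic isotope system, using
    t = (1/lambda) * ln(1 + D/P), which assumes a closed system
    and no daughter isotope present when the rock formed."""
    decay_constant = math.log(2) / half_life_years  # lambda, per year
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# A sample with a measured 206Pb/238U ratio of 0.5
# (238U half-life is about 4.468 billion years):
print(f"{isotopic_age(0.5, 4.468e9):.2e} years")  # roughly 2.6e9 years
```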
Paleontology is the study of fossils and is concerned not only with their description and classification but also with an analysis of the evolution of the organisms involved. Simple fossil forms can be found in early Precambrian rocks as old as 3,500,000,000 years, and it is widely considered that life on Earth must have begun before the appearance of the oldest rocks. Paleontological research of the fossil record since the Cambrian Period has contributed much to the theory of evolution of life on Earth.
Several disciplines of the geologic sciences have practical benefits for society. The geologist is responsible for the discovery of minerals (such as lead, chromium, nickel, and tin), oil, gas, and coal, which are the main economic resources of the Earth; for the application of knowledge of subsurface structures and geologic conditions to the building industry; and for the prevention of natural hazards or at least providing early warning of their occurrence.
Astrogeology is important in that it contributes to understanding the development of the Earth within the solar system. The U.S. Apollo program of manned missions to the Moon, for example, provided scientists with firsthand information on lunar geology, including observations on such features as meteorite craters that are relatively rare on Earth. Unmanned space probes have yielded significant data on the surface features of many of the planets and their satellites. Since the 1970s even such distant planetary systems as those of Jupiter, Saturn, and Uranus have been explored by probes.
Details
Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science.
Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole. One aspect is to demonstrate the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates.
Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.
Geological material
The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.
Mineral
Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement.
Each mineral has distinct physical properties, and there are many tests to determine each of them. Minerals are often identified through these tests; a small illustrative sketch of how such tests can be combined follows the list. The specimens can be tested for:
Luster: Quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, dull.
Color: Minerals are grouped by their color. Color is often diagnostic, but impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help name the mineral.
Hardness: The resistance of a mineral to scratching.
Breakage pattern: A mineral can either show fracture or cleavage, the former being breakage of uneven surfaces, and the latter a breakage along closely spaced parallel planes.
Specific gravity: the ratio of a mineral's weight to the weight of an equal volume of water.
Effervescence: Involves dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste such as halite (which tastes like table salt).
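As a rough illustration of how these tests narrow down an identification, the toy sketch below matches observed test results against a small reference table. The property values are common textbook figures, and the table and function are invented for illustration, not a real identification key.

```python
# Reference properties for a few common minerals (textbook values).
MINERALS = {
    "halite":    {"hardness": 2.5, "streak": "white", "salty_taste": True},
    "calcite":   {"hardness": 3.0, "streak": "white", "effervesces": True},
    "quartz":    {"hardness": 7.0, "streak": "white"},
    "magnetite": {"hardness": 6.0, "streak": "black", "magnetic": True},
}

def match_minerals(**observed):
    """Return reference minerals consistent with the observed test
    results; properties not listed for a candidate are simply ignored."""
    hits = []
    for name, props in MINERALS.items():
        if all(props.get(test) == value
               for test, value in observed.items() if test in props):
            hits.append(name)
    return hits

print(match_minerals(streak="black", magnetic=True))  # ['magnetite']
```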
Rock
The rock cycle shows the relationship between igneous, sedimentary, and metamorphic rocks.
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them.
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. It can then be turned into a metamorphic rock by heat and pressure that change its mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify. Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks.
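Because the paragraph above describes the rock cycle as a fixed set of transitions, it can be written out explicitly. The mapping below is just an illustrative encoding of those transitions, not a standard data structure from any geology library.

```python
# (starting rock type, process) -> resulting rock type
ROCK_CYCLE = {
    ("igneous",     "weathering, erosion, lithification"): "sedimentary",
    ("igneous",     "heat and pressure"):                  "metamorphic",
    ("sedimentary", "heat and pressure"):                  "metamorphic",
    ("sedimentary", "melting, then solidification"):       "igneous",
    ("metamorphic", "melting, then solidification"):       "igneous",
    ("metamorphic", "weathering, erosion, lithification"): "sedimentary",
}

for (rock, process), product in ROCK_CYCLE.items():
    print(f"{rock} --[{process}]--> {product}")
```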
To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.
Unlithified material
Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock.[6] This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.
Magma
Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.
Whole-Earth structure
Plate tectonics
In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading[8][9] and the global distribution of mountain terrain and seismicity.
There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.
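One practical consequence of seafloor spreading is that plate speeds can be estimated directly: the distance of a magnetic anomaly of known age from the ridge axis, divided by that age, gives an average half-spreading rate. The numbers in this sketch are invented but typical of a mid-ocean ridge.

```python
# Half-spreading rate from a dated seafloor magnetic anomaly.
anomaly_age_years = 10_000_000   # 10 million years, from the reversal record
distance_from_ridge_km = 250     # measured perpendicular to the ridge axis

rate_km_per_year = distance_from_ridge_km / anomaly_age_years
print(f"half-spreading rate: {rate_km_per_year * 1e6:.0f} mm per year")  # 25
```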
The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries.
For example:
Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.
Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another.
Transform boundaries, such as the San Andreas Fault system, result in widespread powerful earthquakes.
Plate tectonics has also provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.
Earth structure
Figure: The Earth's layered structure, showing (1) inner core; (2) outer core; (3) lower mantle; (4) upper mantle; (5) lithosphere; and (6) crust (part of the lithosphere). Typical wave paths from earthquakes gave early seismologists insights into this layered structure.
Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.
Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a crust and lithosphere on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
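Arrival times carry this information because travel time accumulates layer by layer. The sketch below sums the vertical one-way travel time of a P wave through a simplified layered Earth; the thicknesses and average velocities are rounded, illustrative values, not a real reference model such as PREM (in which velocity varies continuously with depth).

```python
# (layer, thickness in km, rough average P-wave velocity in km/s)
layers = [
    ("crust",        35,   6.5),
    ("upper mantle", 625,  9.0),
    ("lower mantle", 2230, 12.5),
    ("outer core",   2250, 9.0),
    ("inner core",   1220, 11.0),
]

total_seconds = sum(thickness / velocity for _, thickness, velocity in layers)
print(f"one-way time to the centre of the Earth: about {total_seconds / 60:.0f} minutes")
```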
Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.
Additional Information
Geology is the study of the nonliving things that the Earth is made of,[1][2] in particular the rocks of the Earth's crust. People who study geology are called geologists.[3] Some geologists (mineralogists) study minerals and the useful substances the rocks contain, such as ores and fossil fuels. Geologists also study the history of the Earth.
Some of the important events in the Earth's history are floods, volcanic eruptions, earthquakes, orogeny (mountain building), and plate tectonics (movement of continents).
Subjects
Geology is divided into special subjects that study one part of geology. Some of these subjects and what they focus on are:
Geomorphology – the shape of Earth's surface (its morphology)
Historical geology – the events that shaped the Earth over the last 4.5 billion years
Hydrogeology – underground water
Palaeontology – fossils; evolutionary histories
Petrology – rocks, how they form, where they are from, and what that implies
Mineralogy – minerals
Sedimentology – sediments (clays, sands, gravels, soils, etc.)
Stratigraphy – layered sedimentary rocks and how they were deposited
Petroleum geology – petroleum deposits in sedimentary rocks
Structural geology – folds, faults, and mountain-building
Volcanology – volcanoes on land or under the ocean
Seismology – earthquakes and strong ground-motion
Engineering geology – geologic hazards (such as landslides and earthquakes) applied to civil engineering[12][13]
Geotechnical engineering (also called geotechnics) – the engineering behavior of earth materials
Types of rock
Rocks can be very different from each other. Some are very hard and some are soft. Some rocks are very common, while others are rare. However, all the different rocks belong to three categories or types: igneous, sedimentary, and metamorphic.
Igneous rock is rock that has been made by volcanic action. Igneous rock is made when the lava (melted rock on the surface of the Earth) or magma (melted rock below the surface of the Earth) cools and becomes hard.
Sedimentary rock is rock that has been made from sediment. Sediment is solid pieces of stuff that are moved by wind, water, or glaciers, and dropped somewhere. Sediment can be made from clay, sand, gravel and the bodies and shells of animals. The sediment gets dropped in a layer, usually in water at the bottom of a river or sea. As the sediment piles up, the lower layers get squashed together. Slowly they set hard into rock.
Metamorphic rock is rock that has been changed. Sometimes an igneous or a sedimentary rock is heated or squashed under the ground, so that it changes. Metamorphic rock is often harder than the rock that it was before it changed. Marble and slate are among the metamorphic rocks that people use to make things.
Faults
All three kinds of rock can be changed by being heated and squeezed by forces in the Earth. When this happens, faults (cracks) may appear in the rock. Geologists can learn a lot about the history of the rock by studying the patterns of the fault lines. Earthquakes are caused when a fault breaks suddenly.
Soil
Soil is the stuff on the ground made of lots of particles (or tiny pieces). The particles of soil come from rocks that have broken down, and from rotting leaves and animal bodies. Soil covers a lot of the surface of the Earth. Plants of all sorts grow in soil.
To find out more about types of rocks, see the rock (geology) article. To find out more about soil, see the soil article.
Principles of Stratigraphy
Geologists use some simple ideas which help them to understand the rocks they are studying. The following ideas were worked out in the early days of stratigraphy by people like Nicolaus Steno, James Hutton and William Smith; a short sketch showing how these rules combine to put rocks in order follows the list:
* Understanding the past: Geologist James Hutton said "The present is the key to the past". He meant that the sort of changes that are happening to the Earth's surface now are the same sorts of things that happened in the past. Geologists can understand things that happened millions of years ago by looking at the changes which are happening today.
* Horizontal strata: The layers in a sedimentary rock must have been horizontal (flat) when they were deposited (laid down).
* The age of the strata: Layers at the bottom must be older than layers at the top, unless all the rocks have been turned over.
* In sedimentary rocks that are made of sand or gravel, the sand or gravel must have come from an older rock.
* The age of faults: If there is a crack or fault in a rock, then the fault is younger than the rock. Rocks are in strata (lots of layers). A geologist can see if the faults go through all the layers, or only some. This helps to tell the age of the rocks.
* The age of a rock which cuts through other rocks: If an igneous rock cuts across sedimentary layers, it must be younger than the sedimentary rock.
* The relative age of fossils: A fossil in one rock type must be about the same age as the same type of fossil in the same type of rock in a different place. Likewise, a fossil in a rock layer below must be earlier than one in a higher layer.
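Taken together, these rules produce a set of "older than" relations, and arranging rock units so that every relation is respected is a topological sort. The unit names and relations below are invented purely for illustration.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each unit maps to the set of units that must be older than it.
relations = {
    "sandstone": {"shale"},               # superposition: shale lies below
    "dike":      {"shale", "sandstone"},  # cross-cutting: the dike intrudes both
    "fault":     {"dike"},                # the fault breaks the dike
}

print(list(TopologicalSorter(relations).static_order()))
# ['shale', 'sandstone', 'dike', 'fault']  (oldest first)
```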