Gist
Blood tests are common medical tests. You may have a blood test as part of a routine physical examination or because you have certain symptoms.
There are many different blood tests. Some tests focus on your blood cells and platelets. Some evaluate substances in your blood such as electrolytes, proteins and hormones. Others measure certain minerals in your blood.
Regardless of why you’re having a blood test, it’s important to remember that blood tests help healthcare providers diagnose health issues. But blood test results aren’t diagnoses. An abnormal blood test result may not mean you have a serious medical condition.
Summary
A blood test is a laboratory analysis performed on a blood sample that is usually extracted from a vein in the arm using a hypodermic needle, or via fingerprick. Multiple tests for specific blood components, such as a glucose test or a cholesterol test, are often grouped together into one test panel called a blood panel or blood work. Blood tests are often used in health care to determine physiological and biochemical states, such as disease, mineral content, pharmaceutical drug effectiveness, and organ function. Typical clinical blood panels include a basic metabolic panel or a complete blood count. Blood tests are also used in drug tests to detect drug abuse.
Extraction
A venipuncture is useful because it is a minimally invasive way to obtain cells and extracellular fluid (plasma) from the body for analysis. Blood flows throughout the body, acting as a medium that provides oxygen and nutrients to tissues and carries waste products back to the excretory systems for disposal. Consequently, the state of the bloodstream affects, or is affected by, many medical conditions. For these reasons, blood tests are the most commonly performed medical tests.
If only a few drops of blood are needed, a fingerstick is performed instead of a venipuncture.
Indwelling arterial, central venous, and peripheral venous lines can also be used to draw blood.
Phlebotomists, laboratory practitioners, and nurses are typically responsible for drawing blood from patients. However, in special circumstances or emergency situations, paramedics and physicians may also draw blood. In addition, respiratory therapists are trained to extract arterial blood for the examination of arterial blood gases.
Types of tests:
Biochemical analysis
A basic metabolic panel measures sodium, potassium, chloride, bicarbonate, blood urea nitrogen (BUN), magnesium, creatinine, glucose, and sometimes calcium. Tests that focus on cholesterol levels can determine LDL and HDL cholesterol levels, as well as triglyceride levels.
Some tests, such as those that measure glucose or a lipid profile, require fasting (no food consumption) for eight to twelve hours before the blood sample is drawn.
For the majority of tests, blood is usually obtained from the patient's vein. Other specialized tests, such as the arterial blood gas test, require blood extracted from an artery. Blood gas analysis of arterial blood is primarily used to monitor carbon dioxide and oxygen levels related to pulmonary function, but is also used to measure blood pH and bicarbonate levels for certain metabolic conditions.
While the regular glucose test is taken at a certain point in time, the glucose tolerance test involves repeated testing to determine the rate at which glucose is processed by the body.
Blood tests are also used to identify autoimmune diseases and Immunoglobulin E-mediated food allergies (see also Radioallergosorbent test).
Details
Blood testing is the laboratory analysis of a sample of your blood. It is usually ordered to track the progress of a specific treatment, to manage health conditions such as high cholesterol or diabetes, or as part of a routine checkup. Blood testing is widespread and is usually done by your doctor or at your local diagnostic centre. Knowing the names and meanings of the different blood tests can help you better understand their uses and benefits.
Why do blood tests matter?
A blood test is carried out for a wide variety of reasons:
* To find out how well vital organs such as your heart, liver, kidneys, or thyroid are working
* To help diagnose diseases such as diabetes, cancer, coronary heart disease, or HIV/AIDS
* To understand whether the prescribed medicine is working
* To diagnose clotting or bleeding disorders
* To identify health issues at an early stage
* To monitor chronic diseases and health conditions
The blood test reports help your doctor get a basic idea of your overall health and prescribe specialised tests to get an accurate diagnosis.
Types of blood tests
The list of blood tests can be very long. Here are some of the common ones that you need to know about.
Complete blood count (CBC)
This routine blood test checks ten different components of the blood, including red blood cells, white blood cells, platelets, haemoglobin, and hematocrit. Any abnormalities in the normal levels of these components can indicate
* nutritional deficiencies,
* anaemia,
* blood cancer,
* infections,
* problems with the immune system, or
* clotting problems.
You may have to undergo a few follow-up tests to get a more precise diagnosis of your issues.
Basic metabolic panel (BMP)
This test checks for the following eight components in the blood:
* Glucose
* Calcium
* Sodium
* Potassium
* Bicarbonate
* Chloride
* Blood urea nitrogen
* Creatinine
You may be asked to avoid eating anything for at least 8 hours before the sample is taken, depending on the parameters you need to measure and the doctor's instructions. Abnormal results in this test may be a result of
* diabetes,
* kidney disease, or
* hormone imbalance.
Comprehensive metabolic panel (CMP)
In this test, all the parameters checked in the BMP are measured along with a few others, like
* total protein,
* albumin,
* alkaline phosphatase,
* alanine aminotransferase,
* aspartate aminotransferase, and
* bilirubin.
Abnormalities in these tests can indicate many health issues depending on whether the levels of components are higher or lower than the normal range.
Lipid panel
A lipid panel is used to check the levels of two types of cholesterol in the body:
* High-density lipoprotein (HDL): Also known as good cholesterol, it carries excess cholesterol back to the liver, where it is broken down and removed from the body as waste.
* Low-density lipoprotein (LDL): Also called bad cholesterol, it can form plaque that clogs blood vessels and increases the risk of heart disease.
Thyroid panel
The thyroid panel blood test, or the thyroid function test, is used to identify how well your thyroid functions when reacting to or producing hormones like
* Thyroxine,
* Triiodothyronine, and
* Thyroid-stimulating hormone.
Abnormal levels of hormones in the body can indicate conditions such as
* thyroid growth disorders,
* low protein levels, and
* abnormal levels of estrogen or testosterone.
Cardiac biomarkers
An enzyme is a protein that helps the body carry out chemical processes such as breaking down food or clotting blood. Some of the standard blood tests for cardiac enzymes and related proteins include
* Creatine kinase
* Troponin
* Creatine kinase-MB
Abnormal levels of enzymes indicate a wide range of issues that may need further testing.
Sexually transmitted infection (STI) tests
Most STIs are diagnosed with the help of a blood test or a blood and urine test combined. Some of the common STIs that can be identified using a blood test are
* gonorrhea
* herpes
* HIV
* syphilis
* chlamydia
Blood tests do have limitations in some cases. For example, an HIV test can typically detect the virus only about a month after infection.
Coagulation panel
A coagulation test is used to measure how long it takes for your blood to clot as well as how effectively it clots. Clotting is important for wound healing; however, if a clot forms in an artery or vein, it can block the blood flow to vital organs and cause health issues.
The results of these tests help doctors diagnose
* haemophilia (the condition in which you bleed excessively),
* liver conditions,
* Vitamin K deficiency,
* leukaemia, and
* thrombosis.
Electrolyte panel
This blood test helps measure the levels of different minerals in your body. Any imbalance in these levels may indicate problems with vital organs like the kidneys, lungs, or heart. Along with all the parameters in the BMP and CMP, this test also checks the magnesium and anion gap levels.
Allergy testing
An allergy blood test can identify increased levels of immunoglobulin E (IgE). It can help detect allergies to substances such as pollen, pet dander, various food items, and other triggers.
Autoimmune diseases
An autoimmune disease is a result of your immune system accidentally attacking your body instead of protecting it from parasites, cancer, and viruses. Autoimmune tests include the following:
* C-reactive protein tests
* peripheral blood smears
* erythrocyte sedimentation rate
* complement blood tests
* antinuclear antibody tests
After preliminary testing, blood tests can also be used to identify specialised issues such as cancer, heart disease, and endocrine system disorders.
Conclusion
A blood test can give you an overall idea of your health. It can be used to identify diseases or illnesses at an early stage. It also helps your healthcare provider understand how well your body is responding to treatment. Most people get routine blood tests at least once a year to stay on top of their health. You can talk to your doctor about some routine tests to ensure you are in optimal health or opt for medical testing packages available at Metropolis Labs.
Additional Information
Blood tests are very common. They help doctors check for certain diseases and conditions. They also help check the function of your organs and show how well treatments are working.
Complete blood count (CBC)
The complete blood count (CBC) is one of the most common blood tests. It is often done as part of a routine checkup. This test measures many different parts of your blood, including red blood cells, white blood cells, and platelets.
* Red blood cell levels that are higher or lower than normal could be a sign of dehydration, anemia, or bleeding. Red blood cells carry oxygen from your lungs to the rest of your body.
* White blood cell levels that are higher or lower than normal could be a sign of infection, blood cancer, or an immune system disorder. White blood cells are part of your immune system, which fights infections and diseases.
* Platelet levels that are higher or lower than normal may be a sign of a clotting disorder or a bleeding disorder. Platelets are blood cell fragments that help your blood clot. They stick together to seal cuts or breaks on blood vessel walls and stop bleeding.
* Hemoglobin levels that are lower than normal may be a sign of anemia, sickle cell disease, or thalassemia. Hemoglobin is an iron-rich protein in red blood cells that carries oxygen.
* Hematocrit levels that are too high might mean you’re dehydrated. Low hematocrit levels may be a sign of anemia. Hematocrit is a measure of how much space red blood cells take up in your blood.
* Mean corpuscular volume (MCV) levels that are lower than normal may be a sign of anemia or thalassemia. MCV is a measure of the average size of your red blood cells.
Blood chemistry tests/basic metabolic panel
The basic metabolic panel (BMP) is a group of tests that measures different naturally occurring chemicals in the blood. These tests usually are done on the fluid (plasma) part of blood. The tests can give providers information about your organs, such as the heart, kidneys, and liver.
The BMP includes blood glucose, calcium, and electrolyte tests, as well as blood tests that measure kidney function. Some of these tests require you to fast (not eat any food) before the test, and others don't. Your provider will tell you how to prepare for the test(s) you're having.
Blood enzyme tests
Blood enzyme tests may be used to check for heart attack. Enzymes are chemicals that help control chemical reactions in your body. There are many types of blood enzyme tests. The ones for heart attack include troponin and creatine kinase (CK) tests.
Blood levels of troponin go up when a person has muscle damage, including damage to the heart muscle. In addition, an enzyme called CK-MB is released into the blood when the heart muscle is damaged. High levels of CK-MB in the blood can mean that you've had a heart attack.
Lipoprotein panel
A lipoprotein panel, also called a lipid panel or lipid profile, measures the levels of LDL and HDL cholesterol and triglycerides in your blood. Cholesterol and triglyceride levels that are higher or lower than normal may be signs of higher risk of coronary heart disease.
A lipoprotein panel gives information about your:
* Total cholesterol
* LDL ("bad") cholesterol, which is the main source of cholesterol buildup and blockages in the arteries
* HDL ("good") cholesterol, which helps decrease cholesterol blockages in the arteries
* Triglycerides, which are a type of fat in your blood
Most people will need to fast for 9 to 12 hours before a lipoprotein panel.
Blood clotting tests
Blood clotting tests are sometimes called a coagulation panel. These tests check proteins in your blood that affect the blood clotting process. Levels that are higher or lower than normal might suggest that you're at risk of bleeding or developing clots in your blood vessels.
Blood clotting tests also are used to monitor people who are taking medicines to lower the risk of blood clots. Warfarin and heparin are two examples of such medicines.
Bone marrow tests
Bone marrow tests check whether your bone marrow is healthy and making normal amounts of blood cells. The two bone marrow tests are aspiration and biopsy.
* Aspiration collects a small amount of bone marrow fluid through a needle.
* Biopsy tests are often done at the same time as the aspiration. A biopsy collects a small amount of bone marrow tissue through a larger needle.
These tests can help find the cause of low or high blood cell counts. They also play an important role in checking how well treatments for certain types of cancers, such as leukemia or lymphoma, are working.
Before this procedure, be sure to tell your provider about current medicines you are taking, known allergies to medicines, if you are pregnant, or if you have a bleeding disorder.
Bone marrow tests can be done in a hospital or doctor’s office or clinic. You may be awake for your test and may be given medicine to relax you during the test. You may also be under anesthesia for this test, if recommended by your care team. You will lie on your side or stomach or back, depending on where your provider obtains the samples from. Your provider will clean and numb the top ridge of the hipbone or rib bone, where the needle will be inserted. You may feel a brief, sharp pain when the needle is inserted and when the bone marrow is aspirated. The bone marrow samples will be studied in a laboratory.
After your test, you will have a small bandage on the site where the needle was inserted. Most people go home the same day. You will need a ride home if you received medicines to relax you during the test. You may have mild discomfort but likely won’t have any pain after the test. Your doctor may have you take an over-the-counter pain medicine. Call your provider if you are in serious pain or if you develop symptoms including:
* Fever
* Redness
* Swelling
* Discharge at the needle injection site.
Gist
Echo is
i) a repetition of sound produced by the reflection of sound waves from a wall, mountain, or other obstructing surface.
ii) a sound heard again near its source after being reflected.
Summary
An echo is a repetition or imitation of sound. When sound waves hit a hard surface they might reflect, making the sound bounce and repeat. If you agree with someone, you might echo his or her statement.
Poet Don Marquis said, “Writing a book of poetry is like dropping a rose petal down the Grand Canyon and waiting for the echo.” The word echo came from the Greek word for "sound." In Greek mythology, Echo was a nymph who could only repeat the last words of others. You were frightened when you thought someone was following you, until you realized you were only hearing the echo of your own footsteps.
Details
In audio signal processing and acoustics, an echo is a reflection of sound that arrives at the listener with a delay after the direct sound. The delay is directly proportional to the distance of the reflecting surface from the source and the listener. Typical examples are the echo produced by the bottom of a well, by a building, or by the walls of an enclosed or empty room.
Etymology
The word echo derives from the Greek ēchō, itself from ēchos, 'sound'. Echo in Greek mythology was a mountain nymph whose ability to speak was cursed, leaving her able only to repeat the last words spoken to her.
Nature
Echolocation organs of a toothed whale, which produce echoes and receive sounds. Arrows illustrate the outgoing and incoming path of sound.
Some animals use echo for location sensing and navigation, such as cetaceans (dolphins and whales) and bats in a process known as echolocation. Echoes are also the basis of sonar technology.
Acoustic phenomenon
Acoustic waves are reflected by walls or other hard surfaces, such as mountains and privacy fences. Reflection occurs because of a discontinuity in the propagation medium. An echo can be heard when the reflection returns with sufficient magnitude and delay to be perceived distinctly. When sound, or the echo itself, is reflected multiple times from multiple surfaces, the echo is characterized as a reverberation.
The human ear cannot distinguish an echo from the original direct sound if the delay is less than 1/10 of a second. The velocity of sound in dry air is approximately 343 m/s at a temperature of 20 °C. Therefore, the reflecting object must be more than 17.2 m from the sound source for the echo to be perceived by a person located at the source. When a sound produces an echo in two seconds, the reflecting object is 343 m away. In nature, canyon walls or rock cliffs facing water are the most common natural settings for hearing echoes. The strength of an echo is frequently measured in dB sound pressure level (SPL) relative to the directly transmitted wave. Echoes may be desirable (as in sonar) or undesirable (as in telephone systems).
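The delay-to-distance relationship above follows from the fact that the sound must travel to the reflecting surface and back, so the one-way distance is half the round trip. A minimal sketch:

```python
# Echo geometry: sound travels to the reflector and back,
# so distance = speed * delay / 2.

SPEED_OF_SOUND = 343.0  # m/s in dry air at ~20 degrees C

def reflector_distance(delay_s, speed=SPEED_OF_SOUND):
    """Distance to the reflecting surface for a given echo delay."""
    return speed * delay_s / 2

# Minimum distance for a humanly perceptible echo (delay of 0.1 s):
print(reflector_distance(0.1))  # about 17.2 m

# A two-second echo places the reflector 343 m away:
print(reflector_distance(2.0))  # 343.0
```

This reproduces both figures in the text: a 0.1 s delay corresponds to roughly 17.2 m, and a 2 s delay to 343 m.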
Use of echo
In sonar, ultrasonic waves are used because they are more energetic than audible sounds: they can travel long distances without deviating, can be confined to a narrow beam, and are not easily absorbed by the medium. Hence, sound ranging and echo depth sounding use ultrasonic waves. Ultrasonic waves are sent out from the ship and are received at the receiver after reflection from an obstacle (an enemy ship, iceberg, or sunken ship). Using the formula d = (v*t)/2, the distance to the obstacle is found. Echo depth sounding is the process of finding the depth of the sea using this method. In the medical field, ultrasonic waves are used in ultrasonography and echocardiography.
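Echo depth sounding applies the same d = (v*t)/2 formula with the speed of sound in water. The speed value below is an assumption (a commonly used round figure for seawater; the text does not give one):

```python
# Echo depth sounding: a pulse travels down to the seabed and back,
# so the depth is half the round-trip distance.

SPEED_IN_SEAWATER = 1500.0  # m/s, an assumed typical value; the real speed
                            # varies with temperature, salinity, and depth

def depth_from_echo(round_trip_s, speed=SPEED_IN_SEAWATER):
    """Sea depth implied by the round-trip time of an ultrasonic pulse."""
    return speed * round_trip_s / 2

# A pulse returning after 0.4 s implies a depth of 300 m:
print(depth_from_echo(0.4))  # 300.0
```

The same function serves for sound ranging against an obstacle: the round-trip time of the reflected pulse gives the one-way distance directly.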
Echo in music
In music performance and recording, electric echo effects have been used since the 1950s. The Echoplex is a tape delay effect, first made in 1959 that recreates the sound of an acoustic echo. Designed by Mike Battle, the Echoplex set a standard for the effect in the 1960s and was used by most of the notable guitar players of the era; original Echoplexes are highly sought after. While Echoplexes were used heavily by guitar players (and the occasional bass player, such as Chuck Rainey, or trumpeter, such as Don Ellis), many recording studios also used the Echoplex. Beginning in the 1970s, Market built the solid-state Echoplex for Maestro. In the 2000s, most echo effects units use electronic or digital circuitry to recreate the echo effect.
Gist
Oil is any of numerous unctuous combustible substances that are liquid or can be liquefied easily on warming, are soluble in ether but not in water, and leave a greasy stain on paper or cloth.
Summary
Oil is any greasy substance that is liquid at room temperature and insoluble in water. There are many types, such as essential oil, orris oil, mineral oil, whale oil, pine oil, linseed oil, perilla oil, fish oil, tall oil, and citronella oil. There are also cooking oils, such as olive, vegetable, canola, and argan oil.
Fixed oils and fats have the same chemical composition: they consist chiefly of glycerides, resulting from a reaction between an alcohol called glycerol and certain members of a group of compounds known as fatty acids. Along with proteins and carbohydrates, the glyceride oils and fats constitute the three main classes of food. Besides their nutritive importance, these oils have a variety of industrial uses. Linseed, tung, and other drying oils (i.e., those that are highly unsaturated) and large quantities of soybean, sunflower, and safflower oils are used in paints, varnishes, and alkyd resins. Such oils are particularly well suited for this application because, when exposed to air, they absorb oxygen and polymerize readily, forming thin layers as a skin or protective film. Considerable quantities of specialty oils and sulfonated oils are used in leather dressing and textile manufacture. Some other glyceride oils have properties of medicinal value. Castor oil, for example, has a strong purgative action; fish-liver oils are sources of vitamins A and D; and others such as lard, olive oil, and almond oil serve as vehicles in pharmaceutical preparations. Chaulmoogra oil, which contains unique fatty acids with a cyclic (cyclopentenyl) structure, has been used in the treatment of Hansen’s disease (leprosy).
Details
An oil is any nonpolar chemical substance that is composed primarily of hydrocarbons and is hydrophobic (does not mix with water) and lipophilic (mixes with other oils). Oils are usually flammable and surface active. Most oils are unsaturated lipids that are liquid at room temperature.
The general definition of oil includes classes of chemical compounds that may be otherwise unrelated in structure, properties, and uses. Oils may be animal, vegetable, or petrochemical in origin, and may be volatile or non-volatile. They are used for food (e.g., olive oil), fuel (e.g., heating oil), medical purposes (e.g., mineral oil), lubrication (e.g. motor oil), and the manufacture of many types of paints, plastics, and other materials. Specially prepared oils are used in some religious ceremonies and rituals as purifying agents.
Etymology
First attested in English in 1176, the word oil comes from Old French oile, from Latin oleum, which in turn comes from the Greek elaion, 'olive oil, oil', and that from elaia, 'olive tree', 'olive fruit'. The earliest attested forms of the word are Mycenaean Greek, written in the Linear B syllabic script.
Types:
Organic oils
Organic oils are produced in remarkable diversity by plants, animals, and other organisms through natural metabolic processes. Lipid is the scientific term for the fatty acids, steroids and similar chemicals often found in the oils produced by living things, while oil refers to an overall mixture of chemicals. Organic oils may also contain chemicals other than lipids, including proteins, waxes (class of compounds with oil-like properties that are solid at common temperatures) and alkaloids.
Lipids can be classified by the way that they are made by an organism, their chemical structure and their limited solubility in water compared to oils. They have a high carbon and hydrogen content and are considerably lacking in oxygen compared to other organic compounds and minerals; they tend to be relatively nonpolar molecules, but may include both polar and nonpolar regions as in the case of phospholipids and steroids.
Mineral oils
Crude oil, or petroleum, and its refined components, collectively termed petrochemicals, are crucial resources in the modern economy. Crude oil originates from ancient fossilized organic materials, such as zooplankton and algae, which geochemical processes convert into oil. The name "mineral oil" is a misnomer, in that minerals are not the source of the oil—ancient plants and animals are. Mineral oil is organic. However, it is classified as "mineral oil" instead of as "organic oil" because its organic origin is remote (and was unknown at the time of its discovery), and because it is obtained in the vicinity of rocks, underground traps, and sands. Mineral oil also refers to several specific distillates of crude oil.
Applications:
Cooking
Several edible vegetable and animal oils, and also fats, are used for various purposes in cooking and food preparation. In particular, many foods are fried in oil much hotter than boiling water. Oils are also used for flavoring and for modifying the texture of foods (e.g. stir fry). Cooking oils are derived either from animal fat, as butter, lard and other types, or plant oils from olive, maize, sunflower and many other species.
Cosmetics
Oils are applied to hair to give it a lustrous look, to prevent tangles and roughness and to stabilize the hair to promote growth.
Religion
Oil has been used throughout history as a religious medium. It is often considered a spiritually purifying agent and is used for anointing purposes. As a particular example, holy anointing oil has been an important ritual liquid for Judaism and Christianity.
Health
Oils have been consumed since ancient times. They are rich in fats and can have medicinal properties. Olive oil is a good example: it is rich in fat, which is why it was also used for lamp lighting in ancient Greece and Rome, and people used it to bulk out food so they would have more energy to burn through the day. Olive oil was also used to clean the body in this period, since it traps moisture in the skin while pulling grime to the surface. Applied to the skin and then scraped off with a wooden stick, it removed the excess grime and left a layer on which new grime could form but be easily washed off in water, because oil is hydrophobic. It served as an ancient, unsophisticated form of soap. Fish oils contain omega-3 fatty acids, which help with inflammation and reduce fat in the bloodstream.
Painting
Color pigments are easily suspended in oil, making it suitable as a supporting medium for paints. The oldest known extant oil paintings date from 650 AD.
Heat transfer
Oils are used as coolants in oil cooling, for instance in electric transformers. Heat transfer oils are used both as coolants, for heating (e.g. in oil heaters) and in other applications of heat transfer.
Lubrication
Because they are non-polar, oils do not easily adhere to other substances. This makes them useful as lubricants for various engineering purposes. Mineral oils are more commonly used as machine lubricants than biological oils are. Whale oil was preferred for lubricating clocks because it does not evaporate and leave dust behind, although its use was banned in the US in 1980.
It is a long-running myth that spermaceti from whales has still been used in NASA projects such as the Hubble Space Telescope and the Voyager probe because of its extremely low freezing temperature. Spermaceti is not actually an oil, but a mixture mostly of wax esters, and there is no evidence that NASA has used whale oil.
Fuel
Some oils burn in liquid or aerosol form, generating light and heat, which can be used directly or converted into other forms of energy such as electricity or mechanical work. In order to obtain many fuel oils, crude oil is pumped from the ground and is shipped via oil tanker or a pipeline to an oil refinery. There, it is converted from crude oil to diesel fuel (petrodiesel), ethane (and other short-chain alkanes), fuel oils (the heaviest of commercial fuels, used in ships and furnaces), gasoline (petrol), jet fuel, kerosene, benzene (historically), and liquefied petroleum gas. A 42-US-gallon (35 imp gal; 160 L) barrel of crude oil produces approximately 10 US gallons (8.3 imp gal; 38 L) of diesel, 4 US gallons (3.3 imp gal; 15 L) of jet fuel, 19 US gallons (16 imp gal; 72 L) of gasoline, 7 US gallons (5.8 imp gal; 26 L) of other products, 3 US gallons (2.5 imp gal; 11 L) split between heavy fuel oil and liquefied petroleum gases, and 2 US gallons (1.7 imp gal; 7.6 L) of heating oil. The total production from a barrel of crude into various products results in an increase to 45 US gallons (37 imp gal; 170 L).
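The per-barrel yields quoted above can be checked with simple arithmetic: the product volumes sum to 45 gallons, three more than the 42-gallon input, which is the refinery "processing gain" (refined products are less dense than crude):

```python
# Refinery yield from one 42-US-gallon barrel of crude, using the
# approximate figures quoted in the text (all volumes in US gallons).

BARREL_GALLONS = 42

yields = {
    "diesel": 10,
    "jet fuel": 4,
    "gasoline": 19,
    "other products": 7,
    "heavy fuel oil / LPG": 3,
    "heating oil": 2,
}

total = sum(yields.values())
print(total)                    # 45
print(total - BARREL_GALLONS)   # 3 gallons of processing gain
```
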
In the 18th and 19th centuries, whale oil was commonly used for lamps; it was later replaced by natural gas and then electricity.
Chemical feedstock
Crude oil can be refined into a wide variety of component hydrocarbons. Petrochemicals are the refined components of crude oil and the chemical products made from them. They are used as detergents, fertilizers, medicines, paints, plastics, synthetic fibers, and synthetic rubber.
Organic oils are another important chemical feedstock, especially in green chemistry.
Additional Information
In general, oil is a liquid made up of organic molecules. However, in the context of the world's energy sector, oil, or more specifically crude oil, is the liquid fossil fuel that is extracted from the ground. Roughly 1/3 of the world's primary energy comes from this primary fuel. Chemically, oil is composed mainly of carbon and hydrogen with other trace elements. Since oil is made mostly of carbon and hydrogen atoms, it is known as a hydrocarbon (although from a chemical standpoint, it's often not a true hydrocarbon). The specific chemical makeup of crude oil can vary drastically depending on where it was drilled and under which conditions it was formed.
Oil formed millions of years ago when living organic matter died and was buried before it could be decomposed in the presence of air. This locked the carbon underground where heat and pressure led to chemical and physical changes. These changes, over long periods of time, transformed the once-photosynthetic energy from the Sun into the energy stored in the oil itself. Because oil is the main liquid component of petroleum, it is referred to as a petrochemical.
History
Oil has been used extensively through history, even when not being used to fuel vehicles or generate electricity. Historically, oil was used as a waterproofing agent and in some medicines, but was found only in natural seeps where the oil came above ground. On August 27, 1859 oil was pumped out of the ground for the first time by Edwin Drake in Pennsylvania and thousands of wells have been drilled since. Initially, most oil was turned into kerosene and used as fuel for lamps, but over time it grew to be used for fuelling cars and generating electricity.
Extraction
Conventional oil is held beneath the ground in traps or reservoirs, held in the tiny pore spaces of porous and permeable rock. Unconventional oil, primarily shale oil is held tightly in non-permeable shale deposits and thus more difficult to extract, requiring hydraulic fracturing to access. Generally, extraction requires a well that is drilled into a reservoir containing crude oil. The well can be vertical, directional, or horizontal depending on how much access to the deposit is needed. Directional and horizontal drilling allows more of the well to be in the deposit itself, increasing the flow of the oil. After this, the oil is extracted and refined. It can be distilled or undergo hydrocarbon cracking to create products and fuels that will be useful.
Use
Oil is used for many different things, and is used extensively for transportation. Some ways that oil can be used either before or after refining are:
* Transportation Fuels
* Fertilizer
* Heating
* Plastics
* Solvents
* Electrical Generation
Some of these uses require more refining of crude oil to become useful, but they all utilize oil in some way. According to the EIA, the majority of oil usage in the United States is from transportation (through the use of gasoline and diesel) accounting for 2/3 of all the oil used.
Oil is particularly useful as a fuel because of its high energy density. As previously mentioned, the original energy source of oil is the Sun, since the energy stored within dead organic matter is what becomes crude oil over time. When burned in the presence of oxygen, oil undergoes a hydrocarbon combustion reaction, producing carbon dioxide and water vapour. The energy released during combustion depends on the energy density of the specific substance being burned. Crude oil has a relatively high energy density, with 1 kilogram of crude oil containing roughly 42 MJ of chemical energy.
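The arithmetic behind energy density is simple: energy released is mass times specific energy. A minimal sketch in Python, assuming a commonly cited specific energy of about 42 MJ per kilogram for crude oil (the function name and constant are ours, for illustration only):

```python
# Assumption: crude oil specific energy ~42 MJ/kg (a commonly cited figure).
CRUDE_OIL_MJ_PER_KG = 42.0

def combustion_energy_mj(mass_kg, specific_energy_mj_per_kg=CRUDE_OIL_MJ_PER_KG):
    """Energy released (MJ) by completely burning `mass_kg` of fuel."""
    return mass_kg * specific_energy_mj_per_kg

# Burning 10 kg of crude oil releases roughly 420 MJ,
# or about 117 kWh (1 kWh = 3.6 MJ).
energy_mj = combustion_energy_mj(10)
energy_kwh = energy_mj / 3.6
```

The same function works for comparing fuels: swapping in a different specific energy immediately rescales the result.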
Environmental Impacts
Although oil is currently an extremely important fuel, the carbon dioxide produced by burning crude oil and its refined products contributes to climate change. In addition to carbon dioxide and other emissions from burning oil products, the production, transport, refining, and drilling processes each have their own environmental impacts. Some of the chemicals produced contribute to smog, while others are greenhouse gases that warm the Earth. Among the more harmful pollutants are NOx and carbon monoxide. Atmospheric emissions are not the only issue: land disturbed during extraction and the possibility of an oil spill can damage ecologically significant areas.
Gist
Heparin is obtained from liver, lung, mast cells, and other cells of vertebrates. Heparin is a well-known and commonly used anticoagulant which has antithrombotic properties. Heparin inhibits reactions that lead to the clotting of blood and the formation of fibrin clots both in vitro and in vivo.
Summary
* Heparin injectable solution only comes as a generic drug. It doesn’t have a brand-name version.
* Heparin comes in two forms. One is an injectable solution, which you inject under your skin. The other is a solution that’s injected intravenously (into one of your veins). Only your doctor can give you the intravenous form.
* Heparin injectable solution is a blood thinner that’s used to treat and prevent blood clots.
Important warnings
* Low platelet levels warning. This drug can decrease your platelet levels. This is known as heparin-induced thrombocytopenia (HIT), which can eventually lead to the formation of blood clots in your veins. These clots can form even several weeks after you stop taking heparin. Your doctor will check you for low platelet levels.
* Bleeding risk warning. This drug may cause serious bleeding. This happens because this drug reduces your body’s ability to make your blood clot. Heparin may cause you to bruise more easily. It also may take your body longer to stop bleeding. This can cause death in rare cases. Let your doctor know if you have frequent nosebleeds, unusual bleeding from your gums, periods that are heavier than normal, red or brown urine, or dark or tarry stools. Also let your doctor know if you vomit blood, if your vomit looks like coffee grounds, or if you have headaches, dizziness, or weakness.
What is heparin?
Heparin is a prescription drug. It comes as a self-injectable solution that you inject under your skin. It also comes as a solution that a healthcare provider injects intravenously (into one of your veins). You can only receive the intravenous form in the hospital.
For the injectable solution, you’ll receive your first injection at a hospital. A healthcare provider will show you how to give yourself the injection. You will give yourself the remaining doses at home.
Heparin injectable solution is only available as a generic drug.
Why it’s used
Heparin is a blood thinner that’s used to treat and prevent blood clots. These can include venous thrombosis, pulmonary embolism, and peripheral arterial embolism.
How it works
Heparin belongs to a class of drugs called anticoagulants. A class of drugs is a group of medications that work in a similar way. These drugs are often used to treat similar conditions.
Heparin works by disrupting the formation of blood clots in your veins. It can prevent blood clots from forming, or stop clots that have already formed from getting larger.
Heparin side effects
Heparin injectable solution doesn’t cause drowsiness, but it can cause other side effects.
More common side effects
The more common side effects of this drug include:
* bruising more easily
* bleeding that takes longer to stop
* irritation, pain, redness, or sores at the injection site
* allergic reactions, such as hives, chills, and fever
* increased liver enzymes on liver function test results
If these effects are mild, they may go away within a few days or a couple of weeks. If they’re more severe or don’t go away, talk to your doctor or pharmacist.
Serious side effects
Serious side effects and their symptoms can include the following:
Severe bleeding. Symptoms can include:
* bruising more easily
* unexpected bleeding or bleeding that lasts a long time, such as:
* unusual bleeding from your gums
* frequent nosebleeds
* periods that are heavier than normal
* pink or brown urine
* dark, tarry stool (may be a sign of bleeding in your stomach)
* severe bleeding or bleeding that you can’t stop
* coughing up blood or blood clots
* vomit that contains blood or looks like coffee grounds
* headaches
* weakness
* dizziness
Serious allergic reactions. Symptoms can include:
* skin tissue death at the injection site
* chills
* fever
* rash and hives
* itching
* burning
* shortness of breath
* swelling of your face, lips, throat, or tongue
Heparin-induced thrombocytopenia. This is low platelet levels caused by heparin use. It can cause new or worsening clots in your blood vessels. These may lead to a stroke or heart attack. Symptoms of new or worsening blood clots can include:
* reddening and swelling of one leg or arm
* coughing up blood
Disclaimer: Our goal is to provide you with the most relevant and current information. However, because drugs affect each person differently, we cannot guarantee that this information includes all possible side effects. This information is not a substitute for medical advice. Always discuss possible side effects with a healthcare provider who knows your medical history.
Heparin may interact with other medications
Heparin injectable solution can interact with other medications, vitamins, or herbs you may be taking. An interaction is when a substance changes the way a drug works. This can be harmful or prevent the drug from working well.
To help avoid interactions, your doctor should manage all of your medications carefully. Be sure to tell your doctor about all medications, vitamins, or herbs you’re taking. To find out how this drug might interact with something else you’re taking, talk to your doctor or pharmacist.
Examples of drugs that can cause interactions with heparin are listed below.
Interactions that can increase your risk of side effects
Taking heparin with certain drugs can increase your risk of bleeding and make you bruise more easily. Examples of these drugs include:
* aspirin
* nonsteroidal anti-inflammatory drugs such as celecoxib, ibuprofen, and naproxen
* antiplatelet drugs such as clopidogrel and dipyridamole
* hydroxychloroquine
* herbal supplements such as ginkgo biloba, fish oil, and garlic
Interactions that can make heparin less effective
When used with heparin, certain drugs can make heparin less effective. Examples of these drugs include:
* digoxin
* tetracycline antibiotics such as doxycycline and minocycline
* nicotine
* nitrates, such as isosorbide mononitrate and nitroglycerin
* antihistamines such as diphenhydramine
Disclaimer: Our goal is to provide you with the most relevant and current information. However, because drugs interact differently in each person, we cannot guarantee that this information includes all possible interactions. This information is not a substitute for medical advice. Always speak with your healthcare provider about possible interactions with all prescription drugs, vitamins, herbs and supplements, and over-the-counter drugs that you are taking.
Details
Heparin, also known as unfractionated heparin (UFH), is a medication and naturally occurring glycosaminoglycan. Because heparins act by enhancing the activity of antithrombin, they are classified as anticoagulants. Heparin is used in the treatment of heart attacks and unstable angina, and is given intravenously or by injection under the skin. Its anticoagulant properties are also put to use in blood specimen test tubes and kidney dialysis machines.
Common side effects include bleeding, pain at the injection site, and low blood platelets. Serious side effects include heparin-induced thrombocytopenia. Greater care is needed in those with poor kidney function.
Heparin is contraindicated in suspected cases of vaccine-induced prothrombotic immune thrombocytopenia (VIPIT) secondary to SARS-CoV-2 vaccination, because heparin may aggravate the anti-PF4/heparin autoimmune reaction and further increase the risk of bleeding; alternative anticoagulants, such as argatroban or danaparoid, are preferred instead.
Heparin appears to be relatively safe for use during pregnancy and breastfeeding. Heparin is produced by basophils and mast cells in all mammals.
The discovery of heparin was announced in 1916. It is on the World Health Organization's List of Essential Medicines. A fractionated version of heparin, known as low molecular weight heparin, is also available.
History
Heparin was discovered by Jay McLean and William Henry Howell in 1916, although it did not enter clinical trials until 1935. It was originally isolated from dog liver cells, hence its name (hēpar is Greek for 'liver'; hepar + -in).
McLean was a second-year medical student at Johns Hopkins University, and was working under the guidance of Howell investigating pro-coagulant preparations, when he isolated a fat-soluble phosphatide anticoagulant in canine liver tissue. In 1918, Howell coined the term 'heparin' for this type of fat-soluble anticoagulant. In the early 1920s, Howell isolated a water-soluble polysaccharide anticoagulant, which he also termed 'heparin', although it was different from the previously discovered phosphatide preparations. McLean's work as a surgeon probably changed the focus of the Howell group to look for anticoagulants, which eventually led to the polysaccharide discovery.
In the 1930s, several researchers were investigating heparin. Erik Jorpes at Karolinska Institutet published his research on the structure of heparin in 1935, which made it possible for the Swedish company Vitrum AB to launch the first heparin product for intravenous use in 1936. Between 1933 and 1936, Connaught Medical Research Laboratories, then a part of the University of Toronto, perfected a technique for producing safe, nontoxic heparin that could be administered to patients, in a saline solution. The first human trials of heparin began in May 1935, and, by 1937, it was clear that Connaught's heparin was safe, easily available, and effective as a blood anticoagulant. Prior to 1933, heparin was available in small amounts, was extremely expensive and toxic, and, as a consequence, of no medical value.
Heparin production underwent a major shift in the 1990s. Until then, heparin was mainly obtained from cattle tissue, a by-product of the meat industry, especially in North America. With the rapid spread of BSE, more and more manufacturers abandoned this source of supply. As a result, global heparin production became increasingly concentrated in China, where the substance was now procured from the expanding hog breeding and slaughtering industry. The dependence of medical care on the meat industry assumed threatening proportions in the wake of the COVID-19 pandemic. In 2020, several studies demonstrated the efficacy of heparin in mitigating severe disease progression, as its anticoagulant effect counteracted the formation of immunothrombosis. However, the availability of heparin on the world market decreased, because a concurrent epidemic of African swine fever had destroyed significant portions of the Chinese hog population. The situation was further exacerbated when mass slaughterhouses around the world became COVID-19 hotspots themselves and were forced to close temporarily. In less affluent countries, the resulting heparin shortage worsened health care beyond the treatment of COVID-19, for example through the cancellation of cardiac surgeries.
Medical use
Heparin acts as an anticoagulant, preventing the formation of clots and extension of existing clots within the blood. While heparin itself does not break down clots that have already formed (unlike tissue plasminogen activator), it allows the body's natural clot lysis mechanisms to work normally to break down clots that have formed. Heparin is generally used for anticoagulation for the following conditions:
* Acute coronary syndrome, e.g., NSTEMI
* Atrial fibrillation
* Deep-vein thrombosis and pulmonary embolism (both prevention and treatment)
* Other thrombotic states and conditions
* Cardiopulmonary bypass for heart surgery
* ECMO circuit for extracorporeal life support
* Hemofiltration
* Indwelling central or peripheral venous catheters
Heparin and its low-molecular-weight derivatives (e.g., enoxaparin, dalteparin, tinzaparin) are effective in preventing deep vein thromboses and pulmonary emboli in people at risk, but no evidence indicates any one is more effective than the other in preventing mortality.
In angiography, 2 to 5 units/mL of unfractionated heparin saline flush is used to prevent the clotting of blood in guidewires, sheaths, and catheters, thus preventing thrombus from dislodging from these devices into the circulatory system.
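The concentration range above translates into total units per flush by simple multiplication. A sketch of that arithmetic (illustrative only, not clinical guidance; the function name is ours):

```python
def flush_units(concentration_units_per_ml, volume_ml):
    """Total heparin units contained in a saline flush of the given volume."""
    return concentration_units_per_ml * volume_ml

# A 10 mL flush at the 2-5 units/mL range cited for angiography
# contains 20-50 units of heparin in total.
low = flush_units(2, 10)   # 20 units
high = flush_units(5, 10)  # 50 units
```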
Unfractionated heparin is used in hemodialysis. Compared with low-molecular-weight heparin, unfractionated heparin does not have a prolonged anticoagulant action after dialysis and costs less. However, its short duration of action means it must be given by continuous infusion to maintain its effect, and it carries a higher risk of heparin-induced thrombocytopenia.
Adverse effects
A serious side-effect of heparin is heparin-induced thrombocytopenia (HIT), caused by an immune reaction that targets platelets for destruction, resulting in thrombocytopenia. The condition usually reverses on discontinuation, and in general it can be avoided with the use of synthetic heparins. Not all patients with heparin antibodies develop thrombocytopenia. A benign form of thrombocytopenia is also associated with early heparin use; it resolves without stopping heparin. Approximately one-third of patients diagnosed with heparin-induced thrombocytopenia will ultimately develop thrombotic complications.
Two non-hemorrhagic side-effects of heparin treatment are known. The first is elevation of serum aminotransferase levels, which has been reported in as many as 80% of patients receiving heparin. This abnormality is not associated with liver dysfunction, and it disappears after the drug is discontinued. The other complication is hyperkalemia, which occurs in 5 to 10% of patients receiving heparin, and is the result of heparin-induced aldosterone suppression. The hyperkalemia can appear within a few days after the onset of heparin therapy. More rarely, the side-effects alopecia and osteoporosis can occur with chronic use.
As with many drugs, overdoses of heparin can be fatal. In September 2006, heparin received worldwide publicity when three prematurely born infants died after they were mistakenly given overdoses of heparin at an Indianapolis hospital.
Contraindications
Heparin is contraindicated in those with risk of bleeding (especially in people with uncontrolled blood pressure, liver disease, and stroke), severe liver disease, or severe hypertension.
Antidote to heparin
Protamine sulfate has been given to counteract the anticoagulant effect of heparin (1 mg per 100 units of heparin that had been given over the past 6 hours). It may be used in those who overdose on heparin or to reverse heparin's effect when it is no longer needed.
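The 1 mg per 100 units rule above is a direct proportion, which can be sketched as a one-line calculation (illustrative only, not dosing guidance; the function name is ours):

```python
def protamine_dose_mg(heparin_units_last_6h):
    """Protamine sulfate dose (mg) under the 1 mg per 100 units rule,
    applied to heparin given over the preceding 6 hours.
    Illustration of the arithmetic only, not clinical guidance."""
    return heparin_units_last_6h / 100.0

# 5,000 units of heparin in the past 6 hours -> 50 mg protamine
dose = protamine_dose_mg(5000)
```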
Physiological function
Heparin's normal role in the body is unclear. Heparin is usually stored within the secretory granules of mast cells and released only into the vasculature at sites of tissue injury. It has been proposed that, rather than anticoagulation, the main purpose of heparin is defense at such sites against invading bacteria and other foreign materials. In addition, it is observed across a number of widely different species, including some invertebrates that do not have a similar blood coagulation system. It is a highly sulfated glycosaminoglycan. It has the highest negative charge density of any known biological molecule.
Additional Information
Heparin is an anticoagulant drug that is used to prevent blood clots from forming during and after surgery and to treat various heart, lung, and circulatory disorders in which there is an increased risk of blood clot formation. Discovered in 1916 in the laboratory of American physiologist William Henry Howell, heparin is a naturally occurring mixture of mucopolysaccharides that is present in the human body in tissues of the liver and lungs. Most commercial heparin is obtained from cow lungs or pig intestines. Heparin was originally used to prevent the clotting of blood taken for laboratory tests. Its use as a therapy for patients who already have a blood clot in a vein (venous thrombosis) began in the 1940s; low-dose heparin treatment to prevent blood clots from forming in patients who are at high risk for pulmonary embolisms and other clotting disorders was introduced in the early 1970s.
The biological activity of heparin depends on the presence of antithrombin III, a substance in blood plasma that binds and deactivates serum clotting factors. Heparin is poorly absorbed by the intestine, so it must be given intravenously or subcutaneously. Because of its anticlotting effect, the drug creates a significant risk of excessive bleeding, which may be reversed with protamine, a protein that neutralizes heparin’s anticoagulant effect. Other adverse effects of heparin include thrombocytopenia (reduced number of circulating platelets) and hypersensitivity reactions.
Gist
Electricity is a type of energy that consists of the movement of electrons between two points when there is a potential difference between them, making it possible to generate what is known as an electric current. Let's see a practical example to understand it better. What happens when we turn on the light switch?
Summary
Electricity is the set of physical phenomena associated with the presence and motion of matter possessing an electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.
The presence of either a positive or negative electric charge produces an electric field. The motion of electric charges is an electric current and produces a magnetic field. In most applications, Coulomb's law determines the force acting on an electric charge. Electric potential is the work done to move an electric charge from one point to another within an electric field, typically measured in volts.
Electricity plays a central role in many modern technologies, serving in electric power where electric current is used to energise equipment, and in electronics dealing with electrical circuits involving active components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.
The study of electrical phenomena dates back to antiquity, with theoretical understanding progressing slowly until the 17th and 18th centuries. The development of the theory of electromagnetism in the 19th century marked significant progress, leading to electricity's industrial and residential application by electrical engineers by the century's end. This rapid expansion in electrical technology at the time was the driving force for the Second Industrial Revolution, with electricity's versatility driving transformations in industry and society. Electricity is integral to applications spanning transport, heating, lighting, communications, and computation, making it the foundation of modern industrial society.
Details
Electricity is a phenomenon associated with stationary or moving electric charges. Electric charge is a fundamental property of matter and is borne by elementary particles. In electricity the particle involved is the electron, which carries a charge designated, by convention, as negative. Thus, the various manifestations of electricity are the result of the accumulation or motion of numbers of electrons.
Electrostatics
Electrostatics is the study of electromagnetic phenomena that occur when there are no moving charges—i.e., after a static equilibrium has been established. Charges reach their equilibrium positions rapidly because the electric force is extremely strong. The mathematical methods of electrostatics make it possible to calculate the distributions of the electric field and of the electric potential from a known configuration of charges, conductors, and insulators. Conversely, given a set of conductors with known potentials, it is possible to calculate electric fields in regions between the conductors and to determine the charge distribution on the surface of the conductors. The electric energy of a set of charges at rest can be viewed from the standpoint of the work required to assemble the charges; alternatively, the energy also can be considered to reside in the electric field produced by this assembly of charges. Finally, energy can be stored in a capacitor; the energy required to charge such a device is stored in it as electrostatic energy of the electric field.
Coulomb’s law
Static electricity is a familiar electric phenomenon in which charged particles are transferred from one body to another. For example, if two objects are rubbed together, especially if the objects are insulators and the surrounding air is dry, the objects acquire equal and opposite charges and an attractive force develops between them. The object that loses electrons becomes positively charged, and the other becomes negatively charged. The force is simply the attraction between charges of opposite sign. The properties of this force were described above; they are incorporated in the mathematical relationship known as Coulomb’s law.
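Coulomb's law gives the magnitude of the force between two point charges as F = k·q1·q2/r², with k = 1/(4πε₀) ≈ 8.99 × 10⁹ N·m²/C². A minimal sketch in Python (function and constant names are ours):

```python
import math

EPSILON_0 = 8.8541878128e-12        # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPSILON_0)   # Coulomb constant, ~8.99e9 N*m^2/C^2

def coulomb_force(q1_c, q2_c, r_m):
    """Force between two point charges (newtons). If charge signs are
    kept, a positive result means repulsion and a negative result
    means attraction."""
    return K * q1_c * q2_c / r_m**2

# +1 uC and -1 uC charges, 10 cm apart: attractive, magnitude ~0.9 N.
f = coulomb_force(1e-6, -1e-6, 0.1)
```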
Capacitance
A useful device for storing electrical energy consists of two conductors in close proximity and insulated from each other. A simple example of such a storage device is the parallel-plate capacitor. If positive charges with total charge +Q are deposited on one of the conductors and an equal amount of negative charge −Q is deposited on the second conductor, the capacitor is said to have a charge Q.
Principle of the capacitor
To understand how a charged capacitor stores energy, consider the following charging process. With both plates of the capacitor initially uncharged, a small amount of negative charge is removed from the lower plate and placed on the upper plate. Thus, little work is required to make the lower plate slightly positive and the upper plate slightly negative. As the process is repeated, however, it becomes increasingly difficult to transport the same amount of negative charge, since the charge is being moved toward a plate that is already negatively charged and away from a plate that is positively charged. The negative charge on the upper plate repels the negative charge moving toward it, and the positive charge on the lower plate exerts an attractive force on the negative charge being moved away. Therefore, work has to be done to charge the capacitor.
Where and how is this energy stored? The negative charges on the upper plate are attracted toward the positive charges on the lower plate and could do work if they could leave the plate. Because they cannot leave the plate, however, the energy is stored. A mechanical analogy is the potential energy of a stretched spring. Another way to understand the energy stored in a capacitor is to compare an uncharged capacitor with a charged capacitor. In the uncharged capacitor, there is no electric field between the plates; in the charged capacitor, because of the positive and negative charges on the inside surfaces of the plates, there is an electric field between the plates with the field lines pointing from the positively charged plate to the negatively charged one. The energy stored is the energy that was required to establish the field.
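The stored energy described above can be written as U = Q²/(2C), or equivalently U = ½CV² in terms of the plate voltage. A short sketch of both forms (names are ours):

```python
def energy_from_charge(q_coulombs, c_farads):
    """Energy (J) stored in a capacitor holding charge Q: U = Q^2 / (2C)."""
    return q_coulombs**2 / (2 * c_farads)

def energy_from_voltage(c_farads, v_volts):
    """Equivalent form using the plate voltage: U = (1/2) C V^2."""
    return 0.5 * c_farads * v_volts**2

# A 100 uF capacitor charged to 12 V stores 7.2 mJ; the same answer
# follows from its charge Q = C*V = 1.2 mC.
u1 = energy_from_voltage(100e-6, 12.0)
u2 = energy_from_charge(100e-6 * 12.0, 100e-6)
```

The agreement of the two forms follows from Q = CV; substituting it into either expression yields the other.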
Dielectrics, polarization, and electric dipole moment
The amount of charge stored in a capacitor is the product of the voltage and the capacity. What limits the amount of charge that can be stored on a capacitor? The voltage can be increased, but electric breakdown will occur if the electric field inside the capacitor becomes too large. The capacity can be increased by expanding the electrode areas and by reducing the gap between the electrodes. In general, capacitors that can withstand high voltages have a relatively small capacity. If only low voltages are needed, however, compact capacitors with rather large capacities can be manufactured. One method for increasing capacity is to insert between the conductors an insulating material that reduces the voltage because of its effect on the electric field. Such materials are called dielectrics (substances with no free charges). When the molecules of a dielectric are placed in the electric field, their negatively charged electrons separate slightly from their positively charged cores. With this separation, referred to as polarization, the molecules acquire an electric dipole moment. A cluster of charges with an electric dipole moment is often called an electric dipole.
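For the parallel-plate geometry discussed above, capacitance is C = εᵣε₀A/d, so inserting a dielectric with relative permittivity εᵣ multiplies the capacitance by εᵣ. A minimal sketch (names are ours):

```python
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """C = eps_r * eps_0 * A / d for an ideal parallel-plate capacitor
    (edge effects neglected)."""
    return relative_permittivity * EPSILON_0 * area_m2 / gap_m

# 1 cm^2 plates, 0.1 mm apart, in vacuum: ~8.85 pF.
c_vacuum = parallel_plate_capacitance(1e-4, 1e-4)
# A dielectric with eps_r = 5 multiplies the capacitance by 5.
c_dielectric = parallel_plate_capacitance(1e-4, 1e-4, 5.0)
```

This also shows the trade-off mentioned above: shrinking the gap `gap_m` raises C, but a smaller gap means a larger field per volt and earlier breakdown.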
Is there an electric force between a charged object and uncharged matter, such as a piece of wood? Surprisingly, the answer is yes, and the force is attractive. The reason is that under the influence of the electric field of a charged object, the negatively charged electrons and positively charged nuclei within the atoms and molecules are subjected to forces in opposite directions. As a result, the negative and positive charges separate slightly. Such atoms and molecules are said to be polarized and to have an electric dipole moment. The molecules in the wood acquire an electric dipole moment in the direction of the external electric field. The polarized molecules are attracted toward the charged object because the field increases in the direction of the charged object.
Applications of capacitors
Capacitors have many important applications. They are used, for example, in digital circuits so that information stored in large computer memories is not lost during a momentary electric power failure; the electric energy stored in such capacitors maintains the information during the temporary loss of power. Capacitors play an even more important role as filters to divert spurious electric signals and thereby prevent damage to sensitive components and circuits caused by electric surges. How capacitors provide such protection is discussed below in the section Transient response.
Direct electric current:
Basic phenomena and principles
Many electric phenomena occur under what is termed steady-state conditions. This means that such electric quantities as current, voltage, and charge distributions are not affected by the passage of time. For instance, because the current through a filament inside a car headlight does not change with time, the brightness of the headlight remains constant. An example of a nonsteady-state situation is the flow of charge between two conductors that are connected by a thin conducting wire and that initially have an equal but opposite charge. As current flows from the positively charged conductor to the negatively charged one, the charges on both conductors decrease with time, as does the potential difference between the conductors. The current therefore also decreases with time and eventually ceases when the conductors are discharged.
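If the two conductors are modeled as a capacitor C discharging through the wire's resistance R, both the charge and the current decay exponentially with time constant τ = RC. A minimal sketch under that assumption (component values are arbitrary, chosen for illustration):

```python
import math

def discharge_current(v0, r_ohms, c_farads, t_seconds):
    """Current at time t while a capacitor C, initially at voltage v0,
    discharges through resistance R: i(t) = (V0/R) * exp(-t / (R*C))."""
    tau = r_ohms * c_farads
    return (v0 / r_ohms) * math.exp(-t_seconds / tau)

# 10 V across 1 kohm and 100 uF: tau = 0.1 s, initial current 10 mA.
i0 = discharge_current(10, 1e3, 100e-6, 0.0)
# After one time constant the current has fallen to 1/e of its start.
i_tau = discharge_current(10, 1e3, 100e-6, 0.1)
```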
In an electric circuit under steady-state conditions, the flow of charge does not change with time and the charge distribution stays the same. Since charge flows from one location to another, there must be some mechanism to keep the charge distribution constant. In turn, the values of the electric potentials remain unaltered with time. Any device capable of keeping the potentials of electrodes unchanged as charge flows from one electrode to another is called a source of electromotive force, or simply an emf.
By some external means, an electric field is established inside the wire in a direction along its length. The electrons that are free to move will gain some speed. Since they have a negative charge, they move in the direction opposite that of the electric field. The current i is defined to have a positive value in the direction of flow of positive charges. If the moving charges that constitute the current i in a wire are electrons, the current is a positive number when it is in a direction opposite to the motion of the negatively charged electrons. (If the direction of motion of the electrons were also chosen to be the direction of a current, the current would have a negative value.) The current is the amount of charge crossing a plane transverse to the wire per unit time—i.e., in a period of one second.
The unit of current is the ampere (A); one ampere equals one coulomb per second. A useful quantity related to the flow of charge is current density, the flow of current per unit area. Symbolized by J, it has a magnitude of i/A and is measured in amperes per square metre.
Wires of different materials have different current densities for a given value of the electric field E; for many materials, the current density is directly proportional to the electric field.
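The definitions above, i = Q/t, J = i/A, and (for materials where current density is proportional to the field) J = σE, can be sketched directly (function names are ours):

```python
def current_amps(charge_coulombs, time_seconds):
    """i = Q / t: one ampere is one coulomb per second."""
    return charge_coulombs / time_seconds

def current_density(i_amps, area_m2):
    """J = i / A, measured in amperes per square metre."""
    return i_amps / area_m2

def current_density_from_field(conductivity_s_per_m, e_field_v_per_m):
    """Local form of Ohm's law for ohmic materials: J = sigma * E."""
    return conductivity_s_per_m * e_field_v_per_m

# 3 coulombs crossing a plane in 2 s is a 1.5 A current; through a wire
# of cross-section 1 mm^2 that is a density of 1.5e6 A/m^2.
i = current_amps(3, 2)
j = current_density(i, 1e-6)
```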
Conductors, insulators, and semiconductors
Materials are classified as conductors, insulators, or semiconductors according to their electric conductivity. The classifications can be understood in atomic terms. Electrons in an atom can have only certain well-defined energies, and, depending on their energies, the electrons are said to occupy particular energy levels. In a typical atom with many electrons, the lower energy levels are filled, each with the number of electrons allowed by a quantum mechanical rule known as the Pauli exclusion principle. Depending on the element, the highest energy level to have electrons may or may not be completely full. If two atoms of some element are brought close enough together so that they interact, the two-atom system has two closely spaced levels for each level of the single atom. If 10 atoms interact, the 10-atom system will have a cluster of 10 levels corresponding to each single level of an individual atom. In a solid, the number of atoms and hence the number of levels is extremely large; most of the higher energy levels overlap in a continuous fashion except for certain energies in which there are no levels at all. Energy regions with levels are called energy bands, and regions that have no levels are referred to as band gaps.
The highest energy band occupied by electrons is the valence band. In a conductor, the valence band is partially filled, and since there are numerous empty levels, the electrons are free to move under the influence of an electric field; thus, in a metal the valence band is also the conduction band. In an insulator, electrons completely fill the valence band; and the gap between it and the next band, which is the conduction band, is large. The electrons cannot move under the influence of an electric field unless they are given enough energy to cross the large energy gap to the conduction band. In a semiconductor, the gap to the conduction band is smaller than in an insulator. At room temperature, the valence band is almost completely filled. A few electrons are missing from the valence band because they have acquired enough thermal energy to cross the band gap to the conduction band; as a result, they can move under the influence of an external electric field. The “holes” left behind in the valence band are mobile charge carriers but behave like positive charge carriers.
For many materials, including metals, resistance to the flow of charge tends to increase with temperature. For example, an increase of 5° C (9° F) increases the resistivity of copper by 2 percent. In contrast, the resistivity of insulators and especially of semiconductors such as silicon and germanium decreases rapidly with temperature; the increased thermal energy causes some of the electrons to populate levels in the conduction band where, influenced by an external electric field, they are free to move. The energy difference between the valence levels and the conduction band has a strong influence on the conductivity of these materials, with a smaller gap resulting in higher conduction at lower temperatures.
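The linear temperature dependence described for metals can be sketched numerically. The temperature coefficient used below (about 0.004 per °C for copper) is a standard handbook figure, not taken from the text; it reproduces the quoted 2 percent rise in resistivity for a 5° C increase.

```python
# Resistivity of a metal rises roughly linearly with temperature:
#   rho(T) = rho0 * (1 + alpha * (T - T0))
# alpha ~ 0.004 per deg C for copper (handbook value, assumed here).

def resistivity(rho0, alpha, t0, t):
    """Linear approximation of resistivity at temperature t."""
    return rho0 * (1 + alpha * (t - t0))

rho0 = 1.68e-8   # ohm-metre, copper near 20 deg C (handbook value)
alpha = 0.004    # per deg C (assumed)

rho_warm = resistivity(rho0, alpha, 20.0, 25.0)
increase = (rho_warm - rho0) / rho0
print(f"5 deg C rise -> {increase:.1%} increase")  # 2.0% increase
```

The same linear form, with a negative effective coefficient, would not capture semiconductors, whose resistivity falls roughly exponentially as carriers are thermally promoted across the band gap.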
The values of electric resistivities show an extremely large variation in the capability of different materials to conduct electricity. The principal reason for the large variation is the wide range in the availability and mobility of charge carriers within the materials. Copper, for example, has many extremely mobile carriers; each copper atom has approximately one free electron, which is highly mobile because of its small mass. An electrolyte, such as a saltwater solution, is not as good a conductor as copper. The sodium and chlorine ions in the solution provide the charge carriers. The large mass of each sodium and chlorine ion increases as other attracted ions cluster around it. As a result, the sodium and chlorine ions are far more difficult to move than the free electrons in copper. Pure water also is a conductor, although it is a poor one because only a very small fraction of the water molecules are dissociated into ions. The oxygen, nitrogen, and argon gases that make up the atmosphere are somewhat conductive because a few charge carriers form when the gases are ionized by radiation from radioactive elements on Earth as well as from extraterrestrial cosmic rays (i.e., high-speed atomic nuclei and electrons). Electrophoresis is an interesting application based on the mobility of particles suspended in an electrolytic solution. Different particles (proteins, for example) move in the same electric field at different speeds; the difference in speed can be used to separate the contents of the suspension.
A current flowing through a wire heats it. This familiar phenomenon occurs in the heating coils of an electric range or in the hot tungsten filament of an electric light bulb. This ohmic heating is the basis for the fuses used to protect electric circuits and prevent fires; if the current exceeds a certain value, a fuse, which is made of an alloy with a low melting point, melts and interrupts the flow of current.
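Ohmic heating follows P = i²R, and a fuse is simply a conductor chosen to melt once the current exceeds its rating. A minimal sketch (the component values and the 10-ampere rating are illustrative, not from the text):

```python
def joule_power(current, resistance):
    """Power dissipated as heat in a resistor, P = I^2 * R (watts)."""
    return current ** 2 * resistance

def fuse_blows(current, rating):
    """A fuse interrupts the circuit once current exceeds its rating."""
    return current > rating

# Illustrative numbers: a 10-ohm heating element behind a 10 A fuse.
print(joule_power(5.0, 10.0))   # 250.0 W dissipated at 5 A
print(fuse_blows(15.0, 10.0))   # True: the fuse melts and opens the circuit
print(fuse_blows(5.0, 10.0))    # False: normal operation
```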
In certain materials, however, the power dissipation that manifests itself as heat suddenly disappears if the conductor is cooled to a very low temperature. The disappearance of all resistance is a phenomenon known as superconductivity. As mentioned earlier, electrons acquire some average drift velocity v under the influence of an electric field in a wire. Normally the electrons, subjected to a force because of an electric field, accelerate and progressively acquire greater speed. Their velocity is, however, limited in a wire because they lose some of their acquired energy to the wire in collisions with other electrons and in collisions with atoms in the wire. The lost energy is either transferred to other electrons, which later radiate it away, or converted into tiny mechanical vibrations of the wire referred to as phonons. Both processes heat the material. The term phonon emphasizes the relationship of these vibrations to another mechanical vibration—namely, sound. In a superconductor, a complex quantum mechanical effect prevents these small losses of energy to the medium. The effect involves interactions between electrons and also those between electrons and the rest of the material. It can be visualized by considering the coupling of the electrons in pairs with opposite momenta; the motion of the paired electrons is such that no energy is given up to the medium in inelastic collisions or phonon excitations. One can imagine that an electron about to “collide” with and lose energy to the medium could end up instead colliding with its partner so that they exchange momentum without imparting any to the medium.
A superconducting material widely used in the construction of electromagnets is an alloy of niobium and titanium. This material must be cooled to a few degrees above absolute zero temperature, −263.65° C (9.5 K), in order to exhibit the superconducting property. Such cooling requires the use of liquefied helium, which is rather costly. During the late 1980s, materials that exhibit superconducting properties at much higher temperatures were discovered. These temperatures are higher than the −196° C boiling point of liquid nitrogen, making it possible to use liquid nitrogen instead of liquid helium. Since liquid nitrogen is plentiful and cheap, such materials may provide great benefits in a wide variety of applications, ranging from electric power transmission to high-speed computing.
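The kelvin and Celsius figures quoted here are related by T(K) = T(°C) + 273.15, which a short sketch can confirm:

```python
def c_to_k(t_celsius):
    """Convert a Celsius temperature to kelvin."""
    return t_celsius + 273.15

def k_to_c(t_kelvin):
    """Convert a kelvin temperature to Celsius."""
    return t_kelvin - 273.15

print(k_to_c(9.5))     # -263.65 deg C: the niobium-titanium transition
print(c_to_k(-196.0))  # 77.15 K: roughly the boiling point of liquid nitrogen
```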
Electromotive force
A 12-volt automobile battery can deliver current to a circuit such as that of a car radio for a considerable length of time, during which the potential difference between the terminals of the battery remains close to 12 volts. The battery must have a means of continuously replenishing the excess positive and negative charges that are located on the respective terminals and that are responsible for the 12-volt potential difference between the terminals. The charges must be transported from one terminal to the other in a direction opposite to the electric force on the charges between the terminals. Any device that accomplishes this transport of charge constitutes a source of electromotive force. A car battery, for example, uses chemical reactions to generate electromotive force. The Van de Graaff generator is a mechanical device that produces an electromotive force. Invented by the American physicist Robert J. Van de Graaff in the 1930s, this type of particle accelerator has been widely used to study subatomic particles. Because it is conceptually simpler than a chemical source of electromotive force, the Van de Graaff generator will be discussed first.
An insulating conveyor belt carries positive charge from the base of the Van de Graaff machine to the inside of a large conducting dome. The charge is removed from the belt by the proximity of sharp metal electrodes called charge remover points. The charge then moves rapidly to the outside of the conducting dome. The positively charged dome creates an electric field, which points away from the dome and provides a repelling action on additional positive charges transported on the belt toward the dome. Thus, work is done to keep the conveyor belt turning. If a current is allowed to flow from the dome to ground and if an equal current is provided by the transport of charge on the insulating belt, equilibrium is established and the potential of the dome remains at a constant positive value. In this example, the current from the dome to ground consists of a stream of positive ions inside the accelerating tube, moving in the direction of the electric field. The motion of the charge on the belt is in a direction opposite to the force that the electric field of the dome exerts on the charge. This motion of charge in a direction opposite the electric field is a feature common to all sources of electromotive force.
In the case of a chemically generated electromotive force, chemical reactions release energy. If these reactions take place with chemicals in close proximity to each other (e.g., if they mix), the energy released heats the mixture. To produce a voltaic cell, these reactions must occur in separate locations. A copper wire and a zinc wire poked into a lemon make up a simple voltaic cell. The potential difference between the copper and the zinc wires can be measured easily and is found to be 1.1 volts; the copper wire acts as the positive terminal. Such a “lemon battery” is a rather poor voltaic cell capable of supplying only small amounts of electric power. Another kind of 1.1-volt battery constructed with essentially the same materials can provide much more electricity. In this case, a copper wire is placed in a solution of copper sulfate and a zinc wire in a solution of zinc sulfate; the two solutions are connected electrically by a potassium chloride salt bridge. (A salt bridge is a conductor with ions as charge carriers.) In both kinds of batteries, the energy comes from the difference in the degree of binding between the electrons in copper and those in zinc. Energy is gained when copper ions from the copper sulfate solution are deposited on the copper electrode as neutral copper atoms, thus removing free electrons from the copper wire. At the same time, zinc atoms from the zinc wire go into solution as positively charged zinc ions, leaving the zinc wire with excess free electrons. The result is a positively charged copper wire and a negatively charged zinc wire. The two reactions are separated physically, with the salt bridge completing the internal circuit.
Alternating electric currents
Basic phenomena and principles
Many applications of electricity and magnetism involve voltages that vary in time. Electric power transmitted over large distances from generating plants to users involves voltages that vary sinusoidally in time, at a frequency of 60 hertz (Hz) in the United States and Canada and 50 hertz in Europe. (One hertz equals one cycle per second.) This means that in the United States, for example, the current alternates its direction in the electric conducting wires so that each second it flows 60 times in one direction and 60 times in the opposite direction. Alternating currents (AC) are also used in radio and television transmissions. In an AM (amplitude-modulation) radio broadcast, electromagnetic waves with a frequency of around one million hertz are generated by currents of the same frequency flowing back and forth in the antenna of the station. The information transported by these waves is encoded in the rapid variation of the wave amplitude. When voices and music are broadcast, these variations correspond to the mechanical oscillations of the sound and have frequencies from 50 to 5,000 hertz. In an FM (frequency-modulation) system, which is used by both television and FM radio stations, audio information is contained in the rapid fluctuation of the frequency in a narrow range around the frequency of the carrier wave.
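The amplitude-modulation scheme described above can be sketched directly: the audio signal scales the amplitude of the carrier, and a receiver recovers the audio by tracking the resulting envelope. The frequencies below are scaled far down from real broadcast values so the sketch is easy to sample; the modulation depth is likewise an illustrative assumption.

```python
import math

def am_sample(t, f_carrier=100.0, f_audio=5.0, depth=0.5):
    """One sample of an AM signal: an audio tone (f_audio) scales the
    amplitude of the carrier (f_carrier); depth is the modulation index.
    Frequencies are toy values, not broadcast-band ones."""
    audio = math.sin(2 * math.pi * f_audio * t)
    return (1 + depth * audio) * math.sin(2 * math.pi * f_carrier * t)

# Sample one full audio period; the envelope swings between
# (1 - depth) and (1 + depth) times the carrier amplitude.
samples = [am_sample(n / 5000.0) for n in range(1000)]
print(max(samples))  # approaches 1 + depth = 1.5
```

An FM sketch would instead perturb the argument of the carrier sine, leaving its amplitude constant.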
Circuits that can generate such oscillating currents are called oscillators; they include, in addition to transistors, such basic electrical components as resistors, capacitors, and inductors. As was mentioned above, resistors dissipate heat while carrying a current. Capacitors store energy in the form of an electric field in the volume between oppositely charged electrodes. Inductors are essentially coils of conducting wire; they store magnetic energy in the form of a magnetic field generated by the current in the coil. All three components provide some impedance to the flow of alternating currents. In the case of capacitors and inductors, the impedance depends on the frequency of the current. With resistors, impedance is independent of frequency and is simply the resistance. This is easily seen from Ohm’s law, when it is written as i = V/R. For a given voltage difference V between the ends of a resistor, the current varies inversely with the value of R. The greater the value R, the greater is the impedance to the flow of electric current. Before proceeding to circuits with resistors, capacitors, inductors, and sinusoidally varying electromotive forces, the behaviour of a circuit with a resistor and a capacitor will be discussed to clarify transient behaviour and the impedance properties of the capacitor.
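The frequency dependence just described can be made concrete. For a sinusoidal current of frequency f, the impedance magnitude is R for a resistor, 1/(2πfC) for a capacitor, and 2πfL for an inductor. The component values below are illustrative assumptions:

```python
import math

def z_resistor(r, f):
    """Impedance magnitude of a resistor: independent of frequency."""
    return r

def z_capacitor(c, f):
    """Impedance magnitude of a capacitor, 1/(2*pi*f*C): falls as f rises."""
    return 1.0 / (2 * math.pi * f * c)

def z_inductor(l, f):
    """Impedance magnitude of an inductor, 2*pi*f*L: grows with f."""
    return 2 * math.pi * f * l

# Illustrative values: 1 kilohm, 1 microfarad, 10 millihenrys,
# compared at 60 Hz and at 100 times that frequency.
for f in (60.0, 6000.0):
    print(f, z_resistor(1000.0, f), z_capacitor(1e-6, f), z_inductor(10e-3, f))
```

Only the resistor's line repeats the same impedance at both frequencies; the capacitor's impedance drops a hundredfold while the inductor's rises a hundredfold.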
Bioelectric effects
Bioelectricity refers to the generation or action of electric currents or voltages in biological processes. Bioelectric phenomena include fast signaling in nerves and the triggering of physical processes in muscles or glands. There is some similarity among the nerves, muscles, and glands of all organisms, possibly because fairly efficient electrochemical systems evolved early. Scientific studies tend to focus on the following: nerve or muscle tissue; such organs as the heart, brain, eye, ear, stomach, and certain glands; electric organs in some fish; and potentials associated with damaged tissue.
Electric activity in living tissue is a cellular phenomenon, dependent on the cell membrane. The membrane acts like a capacitor, storing energy as electrically charged ions on opposite sides of the membrane. The stored energy is available for rapid utilization and stabilizes the membrane system so that it is not activated by small disturbances.
Cells capable of electric activity show a resting potential in which their interiors are negative by about 0.1 volt or less compared with the outside of the cell. When the cell is activated, the resting potential may reverse suddenly in sign; as a result, the outside of the cell becomes negative and the inside positive. This condition lasts for a short time, after which the cell returns to its original resting state. This sequence, called depolarization and repolarization, is accompanied by a flow of substantial current through the active cell membrane, so that a “dipole-current source” exists for a short period. Small currents flow from this source through the aqueous medium containing the cell and are detectable at considerable distances from it. These currents, originating in active membranes, are functionally significant very close to their site of origin but must be considered incidental at any distance from it. In electric fish, however, adaptations have occurred, and this otherwise incidental electric current is actually utilized. In some species the external current is apparently used for sensing purposes, while in others it is used to stun or kill prey. In both cases, voltages from many cells add up in series, thus assuring that the specialized functions can be performed. Bioelectric potentials detected at some distance from the cells generating them may be as small as the 20 or 30 microvolts associated with certain components of the human electroencephalogram or the roughly one millivolt of the human electrocardiogram. On the other hand, electric eels can deliver electric shocks with voltages as large as 1,000 volts.
In addition to the potentials originating in nerve or muscle cells, relatively steady or slowly varying potentials (often designated dc) are known. These dc potentials occur in the following cases: in areas where cells have been damaged and where ionized potassium is leaking (as much as 50 millivolts); when one part of the brain is compared with another part (up to one millivolt); when different areas of the skin are compared (up to 10 millivolts); within pockets in active glands, e.g., follicles in the thyroid (as high as 60 millivolts); and in special structures in the inner ear (about 80 millivolts).
A small electric shock caused by static electricity during cold, dry weather is a familiar experience. While the sudden muscular reaction it engenders is sometimes unpleasant, it is usually harmless. Even though static potentials of several thousand volts are involved, a current exists for only a brief time and the total charge is very small. A steady current of two milliamperes through the body is barely noticeable. Severe electrical shock can occur above 10 milliamperes, however. Lethal current levels range from 100 to 200 milliamperes. Larger currents, which produce burns and unconsciousness, are not fatal if the victim is given prompt medical care. (Above 200 milliamperes, the heart is clamped during the shock and does not undergo ventricular fibrillation.) Prevention clearly includes avoiding contact with live electric wiring; risk of injury increases considerably if the skin is wet, as the electric resistance of wet skin may be hundreds of times smaller than that of dry skin.
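The current levels above follow from Ohm's law, i = V/R, applied to the body, where skin resistance dominates. The voltage and resistance figures below are rough illustrative assumptions, not values from the text:

```python
def body_current_ma(volts, skin_ohms):
    """Current through the body in milliamperes, from i = V/R."""
    return 1000.0 * volts / skin_ohms

# Rough illustrative resistances: dry skin ~100,000 ohms, wet ~1,000 ohms,
# across an assumed 120-volt household contact.
dry = body_current_ma(120.0, 100_000.0)
wet = body_current_ma(120.0, 1_000.0)
print(dry)  # 1.2 mA: below the 2 mA "barely noticeable" level
print(wet)  # 120.0 mA: within the 100-200 mA lethal range
```

The hundredfold drop in assumed skin resistance is what moves the same voltage from harmless to lethal territory.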
Additional Information
Humans have an intimate relationship with electricity, to the point that it's virtually impossible to separate your life from it. Sure, you can flee from the world of crisscrossing power lines and live your life completely off the grid, but even at the loneliest corners of the world, electricity exists. If it's not lighting up the storm clouds overhead or crackling in a static spark at your fingertips, then it's moving through the human nervous system, animating the brain's will in every flourish, breath and unthinking heartbeat.
When the same mysterious force energizes a loved one's touch, a stroke of lightning and a George Foreman Grill, a curious duality ensues: We take electricity for granted one second and gawk at its power the next. More than two and a half centuries have passed since Benjamin Franklin and others proved lightning was a form of electricity, but it's still hard not to flinch when a particularly violent flash lights up the horizon. On the other hand, no one ever waxes poetic over a cell phone charger.
Electricity powers our world and our bodies. Harnessing its energy is both the domain of imagined sorcery and humdrum, everyday life -- from Emperor Palpatine toasting Luke Skywalker, to the simple act of ejecting the "Star Wars" disc from your PC. Despite our familiarity with its effects, many people fail to understand exactly what electricity is -- a ubiquitous form of energy resulting from the motion of charged particles, like electrons. When put to the question, even acclaimed inventor Thomas Edison merely defined it as "a mode of motion" and "a system of vibrations."
Gist
A magnet is a piece of material that strongly attracts certain metals, such as iron. The region of attraction a magnet produces around itself is called a "magnetic field."
You might cover the front of your refrigerator with magnets, which stick to its metal surface. Other kinds of magnets are even more powerful, strong enough to pick up entire cars, for example. Most magnets are made of iron or an iron alloy, and magnets are at the heart of many common items like cassette tapes, credit cards, toys, and compasses.
Definition of a Magnet: (physics) a device that attracts iron and produces a magnetic field.
Summary
A magnet is a special kind of object, usually made of metal, that produces a magnetic field. When a magnet goes near certain metals or another magnet, and the poles (sides) facing each other are opposite, it will pull, or attract, the other object closer. If the two poles are the same, the magnet and the other object will push away, or repel, each other. This attraction and repulsion is called magnetism. Magnets can make some other metals into magnets when they are rubbed together.
Types of magnet
Soft magnets (meaning impermanent magnets) are often used in electromagnets. These increase (often hundreds or thousands of times) the magnetic field of a wire that carries an electrical current and is wrapped around the magnet. The field also increases with the current.
Permanent magnets have ferromagnetism. They occur naturally in some rocks, particularly lodestone, but are now commonly manufactured. A magnet's magnetism decreases when it is heated and increases when it is cooled; to destroy its magnetism completely, a magnet has to be heated to around 1,000 degrees Celsius (1,830 °F). Like poles (S-pole and S-pole/N-pole and N-pole) will repel each other while unlike poles (N-pole and S-pole) will attract each other.
Magnets are only attracted to special metals. Iron, cobalt and nickel are magnetic. Metals that have iron in them attract magnets well. Steel is one. Metals like brass, copper, zinc and aluminum are not attracted to magnets. Non-magnetic materials such as wood and glass are not attracted to magnets as they do not have magnetic materials in them.
Rare earth magnets
Lanthanoid elements can make very strong magnets. The spin of their electrons can be aligned, resulting in very strong magnetic fields. So these elements are used for high-strength magnets when their high price is not a concern. The most common types of rare-earth magnets are samarium–cobalt and neodymium–iron–boron (NIB) magnets.
Natural magnets
Natural magnets are not man-made; they are a kind of rock called lodestone, or magnetite.
The compass
A compass uses the Earth's magnetic field and points toward the North Magnetic Pole. The north pole of one magnet is attracted to the south pole of another. Since the north side of a compass needle points toward the Earth's north, the Earth's "North Magnetic Pole" must actually be a magnetic south pole, and the "South Magnetic Pole" must actually be a magnetic north pole.
Discovery
Ancient people discovered magnetism from lodestones (or magnetite) which are naturally magnetized pieces of iron ore. Lodestones, suspended so they could turn, were the first magnetic compasses.
The earliest known surviving descriptions of magnets and their properties are from Anatolia, India, and China about 2500 years ago. The properties of lodestones and their affinity for iron were written of by Pliny the Elder in his encyclopedia Naturalis Historia.
Details
A magnet is any material capable of attracting iron and producing a magnetic field outside itself. By the end of the 19th century all the known elements and many compounds had been tested for magnetism, and all were found to have some magnetic property. The most common was the property of diamagnetism, the name given to materials exhibiting a weak repulsion by both poles of a magnet. Some materials, such as chromium, showed paramagnetism, being capable of weak induced magnetization when brought near a magnet. This magnetization disappears when the magnet is removed. Only three elements, iron, nickel, and cobalt, showed the property of ferromagnetism (i.e., the capability of remaining permanently magnetized).
Magnetization process
The quantities now used in characterizing magnetization were defined and named by William Thomson (Lord Kelvin) in 1850. The symbol B denotes the magnitude of magnetic flux density inside a magnetized body, and the symbol H denotes the magnitude of magnetizing force, or magnetic field, producing it. The two are represented by the equation B = μH, in which the Greek letter mu, μ, symbolizes the permeability of the material and is a measure of the intensity of magnetization that can be produced in it by a given magnetic field. The modern units of the International System of Units (SI) for B are teslas (T) or webers per square metre (Wb/m^2) and for H are amperes per metre (A/m). The units were formerly called, respectively, gauss and oersted. The units of μ are henrys per metre.
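The relation B = μH can be sketched numerically. The value μ0 = 4π × 10⁻⁷ H/m for free space is standard; the relative permeability chosen below for a ferromagnetic material is only an illustrative order of magnitude, and the simple linear relation ignores saturation and hysteresis.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, henrys per metre

def flux_density(h, mu_r=1.0):
    """Magnetic flux density B = mu * H, with mu = mu_r * MU0 (teslas)."""
    return mu_r * MU0 * h

h = 1000.0  # magnetizing field, amperes per metre
print(flux_density(h))             # ~1.26e-3 T in free space
print(flux_density(h, mu_r=1000))  # ~1.26 T for mu_r = 1000 (illustrative;
                                   # ignores saturation and hysteresis)
```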
All ferromagnetic materials exhibit the phenomenon of hysteresis, a lag in response to changing forces based on energy losses resulting from internal friction. If B is measured for various values of H and the results are plotted in graphic form, the result is a loop of the type shown in the accompanying figure, called a hysteresis loop. The name describes the situation in which the path followed by the values of B while H is increasing differs from that followed as H is decreasing. With the aid of this diagram, the characteristics needed to describe the performance of a material to be used as a magnet can be defined. Bs is the saturation flux density and is a measure of how strongly the material can be magnetized. Br is the remanent flux density and is the residual, permanent magnetization left after the magnetizing field is removed; this latter is obviously a measure of quality for a permanent magnet. It is usually measured in webers per square metre. In order to demagnetize the specimen from its remanent state, it is necessary to apply a reversed magnetizing field, opposing the magnetization in the specimen. The magnitude of field necessary to reduce the magnetization to zero is Hc, the coercive force, measured in amperes per metre. For a permanent magnet to retain its magnetization without loss over a long period of time, Hc should be as large as possible. The combination of large Br and large Hc will generally be found in a material with a large saturation flux density that requires a large field to magnetize it. Thus, permanent-magnet materials are often characterized by quoting the maximum value of the product of B and H, (BH)max, which the material can achieve. This product (BH)max is a measure of the minimum volume of permanent-magnet material required to produce a required flux density in a given gap and is sometimes referred to as the energy product.
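The energy product (BH)max can be estimated from sampled points on the second-quadrant (demagnetization) branch of the hysteresis loop, running from the remanence Br at H = 0 down to B = 0 at the coercive force Hc. The data points below are entirely hypothetical, chosen only to show the calculation:

```python
# Hypothetical demagnetization-curve samples: (H in A/m, B in T).
# H is negative (a reversed field); B falls from Br to zero at -Hc.
demag_curve = [
    (0.0, 1.2),        # remanence Br
    (-20_000.0, 1.1),
    (-40_000.0, 0.9),
    (-60_000.0, 0.6),
    (-80_000.0, 0.0),  # coercive force Hc
]

def energy_product_max(curve):
    """Largest |B * H| along the sampled demagnetization curve, in J/m^3."""
    return max(abs(b * h) for h, b in curve)

print(energy_product_max(demag_curve))  # 0.9 T x 40,000 A/m, about 36,000 J/m^3
```

A finer sampling of the curve would locate the true maximum more precisely; the principle, picking the operating point where the B–H product peaks, is the same.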
It was suggested in 1907 that a ferromagnetic material is composed of a large number of small volumes called domains, each of which is magnetized to saturation. In 1931 the existence of such domains was first demonstrated by direct experiment. The ferromagnetic body as a whole appears unmagnetized when the directions of the individual domain magnetizations are distributed at random. Each domain is separated from its neighbours by a domain wall. In the wall region, the direction of magnetization turns from that of one domain to that of its neighbour. The process of magnetization, starting from a perfect unmagnetized state, comprises three stages: (1) Low magnetizing field. Reversible movements of the domain walls occur such that domains oriented in the general direction of the magnetizing field grow at the expense of those unfavourably oriented; the walls return to their original position on removal of the magnetizing field, and there is no remanent magnetization. (2) Medium magnetizing field. Larger movements of domain walls occur, many of which are irreversible, and the volume of favourably oriented domains is much increased. On removal of the field, all the walls do not return to their original positions, and there is a remanent magnetization. (3) High magnetizing field. Large movements of domain walls occur such that many are swept out of the specimen completely. The directions of magnetization in the remaining domains gradually rotate, as the field is increased, until the magnetization is everywhere parallel to the field and the material is magnetized to saturation. On removal of the field, domain walls reappear and the domain magnetizations may rotate away from the original field direction. The remanent magnetization has its maximum value.
The values of Br, Hc, and (BH)max will depend on the ease with which domain walls can move through the material and domain magnetization can rotate. Discontinuities or imperfections in the material provide obstacles to domain wall movement. Thus, once the magnetizing field has driven the wall past an obstacle, the wall will not be able to return to its original position unless a reversed field is applied to drive it back again. The effect of these obstacles is, therefore, to increase the remanence. Conversely, in a pure, homogeneous material, in which there are few imperfections, it will be easy to magnetize the material to saturation with relatively low fields, and the remanent magnetization will be small.
Demagnetization and magnetic anisotropy
As far as domain rotation is concerned, there are two important factors to be considered, demagnetization and magnetic anisotropy (the exhibition of different magnetic properties when measured along axes in different directions). The first of these concerns the shape of a magnetized specimen. Any magnet generates a magnetic field in the space surrounding it. The direction of the lines of force of this field, defined by the direction of the force exerted by the field on a (hypothetical) single magnetic north pole, is opposite to the direction of the field used to magnetize it originally. Thus, every magnet exists in a self-generated field that has a direction such as to tend to demagnetize the specimen. This phenomenon is described by the demagnetizing factor. If the magnetic lines of force can be confined to the magnet and not allowed to escape into the surrounding medium, the demagnetizing effect will be absent. Thus a toroidal (ring-shaped) magnet, magnetized around its perimeter so that all the lines of force are closed loops within the material, will not try to demagnetize itself. For bar magnets, demagnetization can be minimized by keeping them in pairs, laid parallel with north and south poles adjacent and with a soft-iron keeper laid across each end.
The relevance of demagnetization to domain rotation arises from the fact that the demagnetizing field may be looked upon as a store of magnetic energy. Like all natural systems, the magnet, in the absence of constraints, will try to maintain its magnetization in a direction such as to minimize stored energy; i.e., to make the demagnetizing field as small as possible. To rotate the magnetization away from this minimum-energy position requires work to be done to provide the increase in energy stored in the increased demagnetizing field. Thus, if an attempt is made to rotate the magnetization of a domain away from its natural minimum-energy position, the rotation can be said to be hindered in the sense that work must be done by an applied field to promote the rotation against the demagnetizing forces. This phenomenon is often called shape anisotropy because it arises from the domain’s geometry which may, in turn, be determined by the overall shape of the specimen.
Similar minimum-energy considerations are involved in the second mechanism hindering domain rotation, namely magnetocrystalline anisotropy. It was first observed in 1847 that in crystals of magnetic material there appeared to exist preferred directions for the magnetization. This phenomenon has to do with the symmetry of the atomic arrangements in the crystal. For example, in iron, which has a cubic crystalline form, it is easier to magnetize the crystal along the directions of the edges of the cube than in any other direction. Thus the six cube-edge directions are easy directions of magnetization, and the magnetization of the crystal is termed anisotropic.
Magnetic anisotropy can also be induced by strain in a material. The magnetization tends to align itself parallel or perpendicular to the direction of the built-in strain. Some magnetic alloys also exhibit the phenomenon of induced magnetic anisotropy. If an external magnetic field is applied to the material while it is annealed at a high temperature, an easy direction for magnetization is found to be induced in a direction coinciding with that of the applied field.
The above description explains why steel makes a better permanent magnet than does soft iron. The carbon in steel causes the precipitation of tiny crystallites of iron carbide in the iron that form what is called a second phase. The phase boundaries between the precipitate particles and the host iron form obstacles to domain wall movement, and thus the coercive force and remanence are raised compared with pure iron.
The best permanent magnet, however, would be one in which the domain walls were all locked permanently in position and the magnetizations of all the domains were aligned parallel to each other. This situation can be visualized as the result of assembling the magnet from a large number of particles having a high value of saturation magnetization, each of which is a single domain, each having a uniaxial anisotropy in the desired direction, and each aligned with its magnetization parallel to all the others.
Powder magnets
The problem of producing magnets composed of compacted powders is essentially that of controlling particle sizes so that they are small enough to comprise a single domain and yet not so small as to lose their ferromagnetic properties altogether. The advantage of such magnets is that they can readily be molded and machined into desired shapes. The disadvantage of powder magnets is that when single-domain particles are packed together they are subject to strong magnetic interactions that reduce the coercive force and, to a lesser extent, the remanent magnetization. The nature of the interaction is essentially a reduction of a given particle’s demagnetizing field caused by the presence of its neighbours, and the interaction limits the maximum values of Hc and (BH)max that can be achieved. More success has attended the development of magnetic alloys.
High anisotropy alloys
The materials described above depend on shape for their large uniaxial anisotropy. Much work has also been done on materials having a large uniaxial magnetocrystalline anisotropy. Of these, the most successful have been cobalt–platinum (CoPt) and manganese–bismuth (MnBi) alloys.
Alnico alloys
High coercive force will be obtained where domain wall motion can be inhibited. This condition can occur in an alloy in which two phases coexist, especially if one phase is a finely divided precipitate in a matrix of the other. Alloys containing the three elements iron, nickel, and aluminum show just such behaviour; and permanent magnet materials based on this system, with various additives, such as cobalt, copper, or titanium, are generally referred to as Alnico alloys.
Rare-earth–cobalt alloys
Isolated atoms of many elements have finite magnetic moments (i.e., the atoms are themselves tiny magnets). When the atoms are brought together in the solid form of the element, however, most interact in such a way that their magnetism cancels out and the solid is not ferromagnetic. Only in iron, nickel, and cobalt, of the common elements, does the cancelling-out process leave an effective net magnetic moment per atom in the vicinity of room temperature and above. Of the rarer elements, only gadolinium is ferromagnetic near room temperature; unfortunately, it loses its ferromagnetism at temperatures above 16° C (60° F), so it is not of practical importance. Several of the rare-earth elements show ferromagnetic behaviour at extremely low temperatures, and many of them have large atomic moments. They are not, however, of great practical value.
Barium ferrites
Barium ferrite, essentially BaO:6Fe2O3, is a variation of the basic magnetic iron-oxide magnetite but has a hexagonal crystalline form. This configuration gives it a very high uniaxial magnetic anisotropy capable of producing high values of Hc. The powdered material can be magnetically aligned and then compacted and sintered. The temperature and duration of the sintering process determine the size of the crystallites and provide a means of tailoring the properties of the magnet. For very small crystallites the coercive force is high and the remanence is in the region of half the saturation flux density. Larger crystallites give higher Br but lower Hc. This material has been widely used for focusing magnets in television tubes.
A further development of commercial importance is to bond the powdered ferrite by a synthetic resin or rubber to give either individual moldings or extruded strips, or sheets, that are semiflexible and can be cut with knives. This material has been used as a combination gasket (to make airtight) and magnetic closure for refrigerator doors.
Permeable materials
A wide range of magnetic devices utilizing magnetic fields, such as motors, generators, transformers, and electromagnets, require magnetic materials with properties quite contrary to those required for good permanent magnets. Such materials must be capable of being magnetized to a high value of flux density in relatively small magnetic fields and then must lose this magnetization completely on removal of the field.
Because iron has the highest value of magnetic moment per atom of the three ferromagnetic metals, it remains the best material for applications where a high-saturation flux density is required. Extensive investigations have been undertaken to determine how to produce iron as free from imperfections as possible, in order to attain the easiest possible domain wall motion. The presence of such elements as carbon, sulfur, oxygen, and nitrogen, even in small amounts, is particularly harmful; and thus sheet materials used in electrical equipment have a total impurity content of less than 0.4 percent.
Important advantages are obtained by alloying iron with a small amount (about 4 percent) of silicon. The added silicon reduces the magnetocrystalline anisotropy of the iron and hence its coercive force and hysteresis loss. Although there is a reduction in the saturation flux density, this loss is outweighed by the other advantages, which include increased electrical resistivity. The latter is important in applications where the magnetic flux alternates because this induces eddy currents in the magnetic material. The lower the resistivity and the higher the frequency of the alternations, the higher are these currents. They produce a loss of energy by causing heating of the material and will be minimized, at a given frequency, by raising the resistivity of the material.
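The scaling of eddy-current loss with resistivity can be made concrete with the classical thin-lamination formula, P = (π f B d)² / 6ρ per unit volume, for sinusoidal flux. A minimal sketch, using illustrative (not measured) resistivity values for pure iron and silicon iron:

```python
import math

def eddy_loss_per_volume(f_hz, b_peak_t, thickness_m, resistivity_ohm_m):
    """Classical eddy-current loss per unit volume (W/m^3) in a thin
    lamination with sinusoidal flux: P = (pi * f * B_peak * d)**2 / (6 * rho)."""
    return (math.pi * f_hz * b_peak_t * thickness_m) ** 2 / (6.0 * resistivity_ohm_m)

# Illustrative resistivities (ohm-metres); real values vary with purity.
rho_iron, rho_si_iron = 1.0e-7, 4.5e-7

p_iron = eddy_loss_per_volume(50.0, 1.5, 0.35e-3, rho_iron)
p_si_iron = eddy_loss_per_volume(50.0, 1.5, 0.35e-3, rho_si_iron)
print(p_iron / p_si_iron)  # loss ratio equals the resistivity ratio: 4.5
```

At fixed frequency and flux density, the loss ratio between the two materials is simply the ratio of their resistivities, which is why raising resistivity is so effective.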
By a suitable manufacturing process, silicon-iron sheet material can be produced with a high degree of preferred orientation of the crystallites. The material then has a preferred direction of magnetization, and in this direction high permeability and low loss are attained. Commercially produced material has about 3.2 percent silicon and is known as cold-reduced, grain-oriented silicon steel.
Alloys of nickel and iron in various proportions are given the general name Permalloy. As the proportion of nickel decreases from 100 percent, the saturation magnetization increases, reaching a maximum at about 50 percent nickel; it falls to zero at 27 percent nickel and then rises again toward the value for pure iron. The magnetocrystalline anisotropy also falls from the value for pure nickel to a very low value in the region of 80 percent nickel, rising only slowly thereafter. The highest permeability occurs at 78.5 percent nickel, a composition called Permalloy A. The maximum relative permeability, which can reach a value in the region of 1,000,000 in carefully prepared Permalloy A, makes the alloy useful and superior to iron and silicon iron at low flux densities.
In addition to barium ferrite, which has a hexagonal crystal form, most of the ferrites of the general formula MeO·Fe2O3, in which Me is a metal, are useful magnetically. They have a different crystalline form called spinel after the mineral spinel (MgAl2O4), which crystallizes in the cubic system. All the spinel ferrites are soft magnetic materials; that is, they exhibit low coercive force and narrow hysteresis loops. Furthermore, they all have a high electrical resistivity and high relative permeabilities, thus making them suitable for use in high-frequency electronic equipment. Their saturation magnetization, however, is low compared with the alloys, and this property limits their use in high-field, high-power transformers. They are hard, brittle, ceramic-like materials and are difficult to machine. Nevertheless, they are widely used, most importantly in computer memories.
Additional Information
A magnet is a material or object that produces a magnetic field. This magnetic field is invisible but is responsible for the most notable property of a magnet: a force that pulls on ferromagnetic materials, such as iron, steel, nickel, and cobalt, and attracts or repels other magnets.
A permanent magnet is an object made from a material that is magnetized and creates its own persistent magnetic field. An everyday example is a refrigerator magnet used to hold notes on a refrigerator door. Materials that can be magnetized, which are also the ones that are strongly attracted to a magnet, are called ferromagnetic (or ferrimagnetic). These include the elements iron, nickel and cobalt and their alloys, some alloys of rare-earth metals, and some naturally occurring minerals such as lodestone. Although ferromagnetic (and ferrimagnetic) materials are the only ones attracted to a magnet strongly enough to be commonly considered magnetic, all other substances respond weakly to a magnetic field, by one of several other types of magnetism.
Ferromagnetic materials can be divided into magnetically "soft" materials like annealed iron, which can be magnetized but do not tend to stay magnetized, and magnetically "hard" materials, which do. Permanent magnets are made from "hard" ferromagnetic materials such as alnico and ferrite that are subjected to special processing in a strong magnetic field during manufacture to align their internal microcrystalline structure, making them very hard to demagnetize. To demagnetize a saturated magnet, a certain magnetic field must be applied, and this threshold depends on coercivity of the respective material. "Hard" materials have high coercivity, whereas "soft" materials have low coercivity. The overall strength of a magnet is measured by its magnetic moment or, alternatively, the total magnetic flux it produces. The local strength of magnetism in a material is measured by its magnetization.
An electromagnet is made from a coil of wire that acts as a magnet when an electric current passes through it but stops being a magnet when the current stops. Often, the coil is wrapped around a core of "soft" ferromagnetic material such as mild steel, which greatly enhances the magnetic field produced by the coil.
Discovery and development
Ancient people learned about magnetism from lodestones (or magnetite) which are naturally magnetized pieces of iron ore. The word magnet was adopted in Middle English from Latin magnetum "lodestone", ultimately from Greek (magnētis [lithos]) meaning "[stone] from Magnesia", a place in Anatolia where lodestones were found (today Manisa in modern-day Turkey). Lodestones, suspended so they could turn, were the first magnetic compasses. The earliest known surviving descriptions of magnets and their properties are from Anatolia, India, and China around 2500 years ago. The properties of lodestones and their affinity for iron were written of by Pliny the Elder in his encyclopedia Naturalis Historia.
In 11th century China, it was discovered that quenching red hot iron in the Earth's magnetic field would leave the iron permanently magnetized. This led to the development of the navigational compass, as described in Dream Pool Essays in 1088. By the 12th to 13th centuries AD, magnetic compasses were used in navigation in China, Europe, the Arabian Peninsula and elsewhere.
A straight iron magnet tends to demagnetize itself by its own magnetic field. To overcome this, the horseshoe magnet was invented by Daniel Bernoulli in 1743. A horseshoe magnet avoids demagnetization by returning the magnetic field lines to the opposite pole.
In 1820, Hans Christian Ørsted discovered that a compass needle is deflected by a nearby electric current. In the same year André-Marie Ampère showed that iron can be magnetized by inserting it in an electrically fed solenoid. This led William Sturgeon to develop an iron-cored electromagnet in 1824. Joseph Henry further developed the electromagnet into a commercial product in 1830–1831, giving people access to strong magnetic fields for the first time. In 1831 he built an ore separator with an electromagnet capable of lifting 750 pounds (340 kg).
Gist
The law of conservation of energy states that energy can neither be created nor destroyed, only converted from one form of energy to another. This means that a system always has the same amount of energy unless energy is added from the outside. This can seem confusing in the case of non-conservative forces, where energy is converted from mechanical energy into thermal energy, but the overall energy does remain the same. The only way to use energy is to transform it from one form to another.
Summary
Conservation of energy is the principle of physics according to which the energy of interacting bodies or particles in a closed system remains constant. The first kind of energy to be recognized was kinetic energy, or energy of motion. In certain particle collisions, called elastic, the sum of the kinetic energy of the particles before collision is equal to the sum of the kinetic energy of the particles after collision. The notion of energy was progressively widened to include other forms. The kinetic energy lost by a body slowing down as it travels upward against the force of gravity was regarded as being converted into potential energy, or stored energy, which in turn is converted back into kinetic energy as the body speeds up during its return to Earth. For example, when a pendulum swings upward, kinetic energy is converted to potential energy. When the pendulum stops briefly at the top of its swing, the kinetic energy is zero, and all the energy of the system is in potential energy. When the pendulum swings back down, the potential energy is converted back into kinetic energy. At all times, the sum of potential and kinetic energy is constant. Friction, however, slows down the most carefully constructed mechanisms, thereby dissipating their energy gradually. During the 1840s it was conclusively shown that the notion of energy could be extended to include the heat that friction generates. The truly conserved quantity is the sum of kinetic, potential, and thermal energy. For example, when a block slides down a slope, potential energy is converted into kinetic energy. When friction slows the block to a stop, the kinetic energy is converted into thermal energy. Energy is not created or destroyed but merely changes forms, going from potential to kinetic to thermal energy. This version of the conservation-of-energy principle, expressed in its most general form, is the first law of thermodynamics. 
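The pendulum bookkeeping above can be checked numerically. A minimal sketch, assuming the small-angle approximation (so the potential energy is ½mgLθ²) and illustrative values for the mass and length:

```python
import math

g, L, m = 9.81, 1.0, 2.0    # gravity (m/s^2), pendulum length (m), bob mass (kg)
theta0 = 0.3                # release angle (rad), small enough for the small-angle regime
omega = math.sqrt(g / L)    # angular frequency of the swing

totals = []
for t in [0.0, 0.4, 0.8, 1.2]:
    theta = theta0 * math.cos(omega * t)            # angular position
    v = -L * theta0 * omega * math.sin(omega * t)   # tangential speed of the bob
    ke = 0.5 * m * v**2                             # kinetic energy
    pe = 0.5 * m * g * L * theta**2                 # small-angle potential energy
    totals.append(ke + pe)
    print(f"t={t:.1f} s  KE+PE={ke + pe:.4f} J")
```

At every instant the kinetic and potential terms trade off against each other while their sum stays fixed at ½mgLθ0².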
The conception of energy continued to expand to include energy of an electric current, energy stored in an electric or a magnetic field, and energy in fuels and other chemicals. For example, a car moves when the chemical energy in its gasoline is converted into kinetic energy of motion.
With the advent of relativity physics (1905), mass was first recognized as equivalent to energy. The total energy of a system of high-speed particles includes not only their rest mass but also the very significant increase in their mass as a consequence of their high speed. After the discovery of relativity, the energy-conservation principle has alternatively been named the conservation of mass-energy or the conservation of total energy.
When the principle seemed to fail, as it did when applied to the type of radioactivity called beta decay (spontaneous electron ejection from atomic nuclei), physicists accepted the existence of a new subatomic particle, the neutrino, that was supposed to carry off the missing energy rather than reject the conservation principle. Later, the neutrino was experimentally detected.
Energy conservation, however, is more than a general rule that persists in its validity. It can be shown to follow mathematically from the uniformity of time. If one moment of time were peculiarly different from any other moment, identical physical phenomena occurring at different moments would require different amounts of energy, so that energy would not be conserved.
Details
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. In the case of a closed system the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
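The dynamite accounting described above can be sketched as a simple energy ledger; every figure below is a made-up illustrative number, not a measured value:

```python
# Hypothetical energy ledger for an explosion; all values in joules
# are illustrative, chosen only to show the bookkeeping.
chemical_energy_before = 5.0e6

released = {
    "kinetic energy of fragments": 1.8e6,
    "heat": 3.0e6,
    "sound": 0.1e6,
    "potential energy of lifted debris": 0.1e6,
}

# The decrease in chemical energy equals the sum of all released forms.
chemical_energy_after = chemical_energy_before - sum(released.values())
print(chemical_energy_after)  # 0.0 -- every joule released is accounted for
```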
Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by E = mc², the equation representing mass–energy equivalence, and science now takes the view that mass–energy as a whole is conserved. Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation. Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
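Mass–energy equivalence can be illustrated with the rest energy of an electron, using CODATA values for the constants:

```python
c = 2.99792458e8            # speed of light, m/s (exact by definition)
m_e = 9.1093837015e-31      # electron rest mass, kg (CODATA value)
MeV = 1.602176634e-13       # joules per MeV

E_rest = m_e * c**2         # rest energy from E = m c^2, in joules
print(f"{E_rest / MeV:.3f} MeV")  # 0.511 MeV, the electron rest energy
```

The familiar 0.511 MeV electron rest energy drops straight out of the formula.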
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale.
Mass–energy equivalence
Matter is composed of atoms and what makes up atoms. Matter has intrinsic or rest mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of rest energy. This means that rest mass can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the total mass or total energy. All forms of energy contribute to the total mass and total energy.
For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total mass nor the total energy of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass.
Thus, conservation of energy (total, including material or rest energy) and conservation of mass (total, not just rest) are one (equivalent) law. In the 18th century, these had appeared as two seemingly-distinct laws.
Special relativity
With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated).
The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the famous equation E = mc².
Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation.
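The invariant mass defined by the energy–momentum relation can be computed directly from summed four-vector components. A small sketch in natural units (c = 1):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system of particles from their (E, px, py, pz)
    four-vectors, in units where c = 1: m^2 = (sum E)^2 - |sum p|^2."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# A single massive particle: E = 5, |p| = 3 gives rest mass 4.
print(invariant_mass([(5.0, 3.0, 0.0, 0.0)]))   # 4.0

# Two massless photons moving in opposite directions: each has zero rest
# mass, yet the two-photon system has invariant mass 2.
photons = [(1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]
print(invariant_mass(photons))   # 2.0
```

The two-photon case shows why the invariant mass of a system is computed by summing energies and momenta before taking the norm, rather than summing the particles' individual rest masses.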
General relativity
General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form.
John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor.
For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth even famously stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero.
Gist
Oscillation is when something "vibrates", or repeats the same pattern. Many things in nature move back and forth or up and down when pushed or struck. In time, natural oscillators slow down and stop because of friction.
Examples
* The pendulum of a "grandfather clock", for example, is a very slow oscillator.
* The strings of pianos and string instruments "oscillate" when struck by a hammer.
* An ocean surface wave is the result of water moving up and down.
* Circuits powered by electricity can "oscillate". Such circuits can be used to make sounds or radio waves.
* Some chemicals, when mixed in the right order, react to form new substances, which then change back into the original ones, repeating the pattern over and over. These are called chemical oscillators.
Details
The oscillation of an object is its repeated movement back and forth between two positions. It is often called periodic motion since it appears to return to itself constantly. Examples include the up-and-down motion of a weight on a spring and the side-to-side swing of a pendulum, each of which traces out a sine wave in time.
Oscillation is the repetitive or periodic variation, typically in time, of some measure about a central value (often a point of equilibrium) or between two or more different states. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations can be used in physics to approximate complex interactions, such as those between atoms.
Oscillations occur not only in mechanical systems but also in dynamic systems in virtually every area of science: for example, the beating of the human heart (for circulation), business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, vibration of strings in guitars and other string instruments, periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The term vibration is used specifically to describe a mechanical oscillation.
Oscillation, especially rapid oscillation, may be an undesirable phenomenon in process control and control theory (e.g. in sliding mode control), where the aim is convergence to stable state. In these cases it is called chattering or flapping, as in valve chatter, and route flapping.
Simple harmonic oscillation
The simplest mechanical oscillating system is a weight attached to a linear spring subject to only weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period.
The systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator and the regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
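The energy exchange in the spring-mass system can be verified numerically. A minimal sketch, assuming an ideal undamped spring obeying Hooke's law, with illustrative values for the mass and spring constant:

```python
import math

m, k = 0.5, 200.0            # mass (kg), spring constant (N/m)
x0 = 0.05                    # initial displacement (m), released from rest
omega = math.sqrt(k / m)     # angular frequency of the simple harmonic motion

totals = []
for t in [0.00, 0.05, 0.10, 0.15]:
    x = x0 * math.cos(omega * t)            # displacement
    v = -x0 * omega * math.sin(omega * t)   # velocity
    ke = 0.5 * m * v**2                     # kinetic energy of the mass
    pe = 0.5 * k * x**2                     # potential energy stored in the spring
    totals.append(ke + pe)
    print(f"t={t:.2f} s  x={x:+.4f} m  KE+PE={ke + pe:.4f} J")
```

The total stays pinned at ½kx0² while the kinetic and potential shares oscillate, exactly the exchange described above.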
Damped oscillations
All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment. This is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by oscillation decay of the harmonic oscillator.
Damped oscillators are created when a resistive force is introduced, which is dependent on the first derivative of the position, or in this case velocity. The differential equation created by Newton's second law adds in this resistive force with an arbitrary constant b. This example assumes a linear dependence on velocity.
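The decay described here can be sketched for the underdamped case of m·x″ + b·x′ + k·x = 0, whose amplitude envelope falls off exponentially; the numerical values below are illustrative:

```python
import math

m, k, b = 1.0, 25.0, 2.0     # mass (kg), spring constant (N/m), damping coefficient (kg/s)
gamma = b / (2.0 * m)        # decay rate of the amplitude envelope
omega_d = math.sqrt(k / m - gamma**2)   # damped angular frequency (underdamped case)

def x(t, amplitude=1.0):
    """One underdamped solution of m*x'' + b*x' + k*x = 0:
    an oscillation whose envelope decays as exp(-b*t / (2*m))."""
    return amplitude * math.exp(-gamma * t) * math.cos(omega_d * t)

# The envelope shrinks by a factor exp(-gamma) every second:
for t in [0.0, 1.0, 2.0, 3.0]:
    print(f"t={t:.0f} s  envelope={math.exp(-gamma * t):.3f}  x={x(t):+.3f}")
```

Note that damping both shrinks the amplitude over time and lowers the oscillation frequency slightly below the undamped value √(k/m).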
The matter in our universe (level 0) is formed by the universe of atoms (level -1).
This could lead us to also say that the matter in the atomic universe (level -1) is formed by the universe of sub-atoms (level -2).
And the matter in the sub-atomic universe (level -2) is formed by the universe of sub-sub-atoms (level -3).
etc.
In the opposite direction, our huge universe (level 0) could be seen by a much bigger universe (level 1) as a small piece of matter in it... the same view applies between (level 1) and (level 2)... etc.
Long ago, humans believed Earth to be the center of existence.
Then, humans became aware of the huge universe (level 0).
Lately, humans became aware of the atomic universe (level -1).
Will humans exist long enough to be aware of the upper universe (level 1) and/or the next lower one (level -2)?
After all, it is just a thought.
Gist
What Is Chronic Kidney Disease? Chronic kidney disease (CKD) means your kidneys are damaged and can't filter blood the way they should. The main risk factors for developing kidney disease are diabetes, high blood pressure, heart disease, and a family history of kidney failure.
Summary
Chronic kidney disease (CKD)—or chronic renal failure (CRF), as it was historically termed—is a term that encompasses all degrees of decreased kidney function, from damaged–at risk through mild, moderate, and severe chronic kidney failure. CKD is a worldwide public health problem. In the United States, there is a rising incidence and prevalence of kidney failure, with poor outcomes and high cost.
CKD is more prevalent in the elderly population. Almost half of the patients with CKD are older than 70 years. However, while younger patients with CKD typically experience progressive loss of kidney function, 30% of patients over 65 years of age with CKD have stable disease.
CKD is associated with an increased risk of cardiovascular disease and end-stage kidney disease (ESKD). Kidney disease is the 10th leading cause of death in the United States.
The Kidney Disease Outcomes Quality Initiative (KDOQI) of the National Kidney Foundation (NKF) established a definition and classification of CKD in 2002. The KDOQI and the international guideline group Kidney Disease Improving Global Outcomes (KDIGO) subsequently updated these guidelines. These guidelines have allowed better communication among physicians and have facilitated intervention at the different stages of the disease.
The guidelines define CKD as either kidney damage or a decreased glomerular filtration rate (GFR) of less than 60 mL/min/1.73 m^2 for at least 3 months. Whatever the underlying etiology, once the loss of nephrons and reduction of functional renal mass reaches a certain point, the remaining nephrons begin a process of irreversible sclerosis that leads to a progressive decline in the GFR.
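The GFR threshold in this definition underlies the KDIGO GFR categories. A sketch of that staging logic (the cutoffs and labels follow the commonly cited KDIGO bands; this is illustrative, not clinical software):

```python
def ckd_gfr_stage(egfr):
    """Return the KDIGO GFR category for an eGFR in mL/min/1.73 m^2.

    Note: a CKD diagnosis also requires kidney damage or a GFR below
    60 persisting for at least 3 months, per the guideline definition.
    """
    if egfr >= 90: return "G1 (normal or high)"
    if egfr >= 60: return "G2 (mildly decreased)"
    if egfr >= 45: return "G3a (mildly to moderately decreased)"
    if egfr >= 30: return "G3b (moderately to severely decreased)"
    if egfr >= 15: return "G4 (severely decreased)"
    return "G5 (kidney failure)"

print(ckd_gfr_stage(52))   # G3a (mildly to moderately decreased)
print(ckd_gfr_stage(12))   # G5 (kidney failure)
```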
Hyperparathyroidism is one of the pathologic manifestations of CKD.
Details
Chronic kidney disease (CKD) is a type of kidney disease in which a gradual loss of kidney function occurs over a period of months to years. Initially, there are generally no symptoms, but later symptoms may include leg swelling, feeling tired, vomiting, loss of appetite, and confusion. Complications can relate to hormonal dysfunction of the kidneys and include (in chronological order) high blood pressure (often related to activation of the renin–angiotensin system), bone disease, and anemia. Additionally, CKD patients have markedly increased cardiovascular complications, with increased risks of death and hospitalization.
Causes of chronic kidney disease include diabetes, high blood pressure, glomerulonephritis, and polycystic kidney disease. Risk factors include a family history of chronic kidney disease. Diagnosis is by blood tests to measure the estimated glomerular filtration rate (eGFR), and a urine test to measure albumin. Ultrasound or kidney biopsy may be performed to determine the underlying cause. Several severity-based staging systems are in use.
Screening at-risk people is recommended. Initial treatments may include medications to lower blood pressure, blood sugar, and cholesterol. Angiotensin converting enzyme inhibitors (ACEIs) or angiotensin II receptor antagonists (ARBs) are generally first-line agents for blood pressure control, as they slow progression of the kidney disease and the risk of heart disease. Loop diuretics may be used to control edema and, if needed, to further lower blood pressure. NSAIDs should be avoided. Other recommended measures include staying active, and certain dietary changes such as a low-salt diet and the right amount of protein. Treatments for anemia and bone disease may also be required. Severe disease requires hemodialysis, peritoneal dialysis, or a kidney transplant for survival.
Chronic kidney disease affected 753 million people globally in 2016 (417 million females and 336 million males). In 2015, it caused 1.2 million deaths, up from 409,000 in 1990. The causes that contribute to the greatest number of deaths are high blood pressure at 550,000, followed by diabetes at 418,000, and glomerulonephritis at 238,000.
Signs and symptoms
CKD is initially without symptoms, and is usually detected on routine screening blood work by either an increase in serum creatinine, or protein in the urine. As the kidney function decreases, more unpleasant symptoms may emerge:
* Blood pressure is increased due to fluid overload and production of vasoactive hormones created by the kidney via the renin–angiotensin system, increasing the risk of developing hypertension and heart failure. People with CKD are more likely than the general population to develop atherosclerosis with consequent cardiovascular disease, an effect that may be at least partly mediated by uremic toxins. People with both CKD and cardiovascular disease have significantly worse prognoses than those with only cardiovascular disease.
* Urea accumulates, leading to azotemia and ultimately uremia (symptoms ranging from lethargy to pericarditis and encephalopathy). Due to its high systemic concentration, urea is excreted in eccrine sweat at high concentrations and crystallizes on skin as the sweat evaporates ("uremic frost").
* Potassium accumulates in the blood (hyperkalemia with a range of symptoms including malaise and potentially fatal cardiac arrhythmias). Hyperkalemia usually does not develop until the glomerular filtration rate falls to less than 20–25 mL/min/1.73 m^2, when the kidneys have decreased ability to excrete potassium. Hyperkalemia in CKD can be exacerbated by acidemia (which leads to extracellular shift of potassium) and from lack of insulin.
* Fluid overload symptoms may range from mild edema to life-threatening pulmonary edema.
* Hyperphosphatemia results from poor phosphate elimination in the kidney, and contributes to increased cardiovascular risk by causing vascular calcification. Circulating concentrations of fibroblast growth factor-23 (FGF-23) increase progressively as the kidney capacity for phosphate excretion declines, which may contribute to left ventricular hypertrophy and increased mortality in people with CKD.
* Hypocalcemia results from 1,25 dihydroxyvitamin D3 deficiency (caused by high FGF-23 and reduced kidney mass) and resistance to the action of parathyroid hormone. Osteocytes are responsible for the increased production of FGF-23, which is a potent inhibitor of the enzyme 1-alpha-hydroxylase (responsible for the conversion of 25-hydroxycholecalciferol into 1,25 dihydroxyvitamin D3). Later, this progresses to secondary hyperparathyroidism, kidney osteodystrophy, and vascular calcification that further impairs cardiac function. An extreme consequence is the occurrence of the rare condition called calciphylaxis.
* Changes in mineral and bone metabolism that may cause 1) abnormalities of calcium, phosphorus (phosphate), parathyroid hormone, or vitamin D metabolism; 2) abnormalities in bone turnover, mineralization, volume, linear growth, or strength (kidney osteodystrophy); and 3) vascular or other soft-tissue calcification. CKD-mineral and bone disorders have been associated with poor outcomes.
* Metabolic acidosis may result from decreased capacity to generate enough ammonia from the cells of the proximal tubule. Acidemia affects the function of enzymes and increases excitability of cardiac and neuronal membranes by the promotion of hyperkalemia.
* Anemia is common and is especially prevalent in those requiring haemodialysis. It is multifactorial in cause, but includes increased inflammation, reduction in erythropoietin, and hyperuricemia leading to bone-marrow suppression. Hypoproliferative anemia occurs due to inadequate production of erythropoietin by the kidneys.
* In later stages, cachexia may develop, leading to unintentional weight loss, muscle wasting, weakness, and anorexia.
* Cognitive decline in patients experiencing CKD is an emerging symptom revealed in the research literature. Current research suggests that patients with CKD face a 35–40% higher likelihood of cognitive decline and/or dementia. This relation depends on the severity of CKD in each patient, although emerging literature indicates that patients at all stages of CKD have a higher risk of developing these cognitive issues.
* Sexual dysfunction is very common in both men and women with CKD. A majority of men have a reduced sex drive, difficulty obtaining an erection, and difficulty reaching orgasm, and the problems worsen with age. Most women have trouble with sexual arousal, and painful menstruation and problems with performing and enjoying sex are common.
Additional Information
Chronic kidney disease (CKD), commonly known as chronic kidney failure, is a condition in which the kidneys gradually lose function. Wastes and surplus fluids are filtered from your blood and expelled as urine by your kidneys. When chronic kidney disease progresses, your body might accumulate harmful quantities of fluid, electrolytes, and wastes. You may have few signs or symptoms in the early stages of chronic kidney disease. Chronic kidney disease may not be noticed until your kidney function has deteriorated severely.
Chronic kidney disease refers to a group of disorders that harm your kidneys and reduce their ability to keep you healthy by performing the tasks described above. If your kidney condition worsens, wastes in your blood might build up to dangerously high levels, making you unwell. High blood pressure, anemia (low blood count), weak bones, poor nutritional status, and nerve damage are all possible problems. Every 30 minutes, your kidneys, which are each about the size of a computer mouse, filter all of your blood. They put forth a lot of effort to get rid of wastes, toxins, and excess fluid. They also aid in the control of blood pressure, the stimulation of red blood cell synthesis, the maintenance of healthy bones, and the regulation of vital blood molecules.
Symptoms
The majority of people do not experience significant symptoms until their kidney disease has progressed. You may, however, notice that you:
* Feel tired and have less energy
* Have trouble concentrating
* Have a poor appetite
* Have trouble sleeping
* Have muscle cramps at night
* Have swollen feet and ankles
* Have puffiness around your eyes
* Have dry, itchy skin
* Need to urinate more often
Chronic kidney disease can strike at any age. However, some people are more susceptible to kidney disease than others. You may be at higher risk for kidney disease if you have any of the following:
* Diabetes
* High blood pressure
* Family history of kidney failure
Gist
Voltage is the pressure from an electrical circuit's power source that pushes charged electrons (current) through a conducting loop, enabling them to do work such as illuminating a light. In brief, voltage = pressure, and it is measured in volts (V).
Summary
Voltage, also known as electric pressure, electric tension, or (electric) potential difference, is the difference in electric potential between two points. In a static electric field, it corresponds to the work needed per unit of charge to move a test charge between the two points. In the International System of Units (SI), the derived unit for voltage is the volt (V).
The voltage between points can be caused by the build-up of electric charge (e.g., a capacitor), and by an electromotive force (e.g., electromagnetic induction in generators, inductors, and transformers). On a macroscopic scale, a potential difference can be caused by electrochemical processes (e.g., cells and batteries), the pressure-induced piezoelectric effect, and the thermoelectric effect. Since it is the difference in electric potential, it is a physical scalar quantity.
A voltmeter can be used to measure the voltage between two points in a system. Often a common reference potential such as the ground of the system is used as one of the points. A voltage can represent either a source of energy or the loss, dissipation, or storage of energy.
Definition
The SI unit of work per unit charge is the joule per coulomb, where 1 volt = 1 joule (of work) per 1 coulomb (of charge). The old SI definition for volt used power and current; starting in 1990, the quantum Hall and Josephson effect were used, and in 2019 physical constants were given defined values for the definition of all SI units.
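The joule-per-coulomb relationship can be made concrete with a trivial calculation. A minimal sketch (the specific values are illustrative only):

```python
def work_done(charge_coulombs: float, voltage_volts: float) -> float:
    """Work in joules done moving a charge through a potential difference.

    1 volt = 1 joule per coulomb, so W = q * V.
    """
    return charge_coulombs * voltage_volts

print(work_done(2.0, 9.0))  # 18.0 -> moving 2 C across 9 V takes 18 J
```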
Voltage difference is denoted symbolically by ∆V, simplified V, especially in English-speaking countries. Internationally, the symbol U is standardized. It is used, for instance, in the context of Ohm's or Kirchhoff's circuit laws. The electrochemical potential is the voltage that can be directly measured with a voltmeter. The Galvani potential that exists in structures with junctions of dissimilar materials is also work per charge but cannot be measured with a voltmeter in the external circuit.
Voltage is defined so that negatively charged objects are pulled towards higher voltages, while positively charged objects are pulled towards lower voltages. Therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage.
Historically, voltage has been referred to using terms like "tension" and "pressure". Even today, the term "tension" is still used, for example within the phrase "high tension" (HT) which is commonly used in thermionic valve (vacuum tube) based electronics.
Details
Voltage, also called electromotive force, is a quantitative expression of the potential difference in charge between two points in an electrical field.
The greater the voltage, the greater the flow of electrical current (that is, the quantity of charge carriers that pass a fixed point per unit of time) through a conducting or semiconducting medium for a given resistance to the flow. Voltage is symbolized by an uppercase italic letter V or E. The standard unit is the volt, symbolized by a non-italic uppercase letter V. One volt will drive one coulomb of charge carriers, such as electrons, through a resistance of one ohm in one second.
Voltage can be direct or alternating. A direct voltage maintains the same polarity at all times. In an alternating voltage, the polarity reverses direction periodically. The number of complete cycles per second is the frequency, which is measured in hertz (one cycle per second), kilohertz, megahertz, gigahertz, or terahertz. An example of direct voltage is the potential difference between the terminals of an electrochemical cell. Alternating voltage exists between the terminals of a common utility outlet.
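The sinusoidal form of an alternating voltage can be sketched numerically. A minimal illustration (the 325 V peak roughly corresponds to a 230 V RMS utility supply and is used here only as an example):

```python
import math

def ac_voltage(t: float, v_peak: float, freq_hz: float) -> float:
    """Instantaneous value v(t) = V_peak * sin(2*pi*f*t) of a sinusoidal
    alternating voltage; the polarity reverses every half cycle."""
    return v_peak * math.sin(2 * math.pi * freq_hz * t)

period = 1 / 50  # a 50 Hz supply completes one full cycle in 20 ms
print(ac_voltage(period / 4, 325.0, 50))            # 325.0 (positive peak)
print(round(ac_voltage(period / 2, 325.0, 50), 6))  # 0.0 (zero crossing)
```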
A voltage produces an electrostatic field, even if no charge carriers move (that is, no current flows). As the voltage increases between two points separated by a specific distance, the electrostatic field becomes more intense. As the separation increases between two points having a given voltage with respect to each other, the electrostatic flux density diminishes in the region between them.
Voltage describes the “pressure” that pushes electricity. The amount of voltage is indicated by a unit known as the volt (V), and higher voltages cause more electricity to flow to an electronic device. However, electronic devices are designed to operate at specific voltages; excessive voltage can damage their circuitry.
By contrast, too low a voltage can cause issues, too, by preventing circuits from operating and making the devices built around them useless. An understanding of voltage and of how to rectify associated issues is necessary in order to handle electronic devices appropriately and identify the underlying issues when problems occur.
The difference between voltage and current
As introduced above, a simple description of voltage would be “the ability to cause electricity to flow.” If you’re like most people, you have trouble envisioning what voltage is since you can’t view it directly with your eyes. To understand voltage, you must first understand electricity.
Electricity flows as a current. You can imagine it as a flow of water, like in a river. The water in rivers flows from mountains upstream to the ocean downstream. In other words, water flows from places with a high water height to places with a low water height. Electricity acts similarly: the concept of water height is analogous to electric potential, and electricity flows from places with high electric potential to places with low electric potential.
Electricity resembles the flow of water.
The potential difference between two places can be expressed as a voltage. Voltage is the “pressure,” as it were, that makes electricity flow. In physics, voltage can be calculated using Ohm’s Law, which tells us that voltage equals resistance times current.
Resistance indicates the difficulty with which electricity flows. Imagine a water main. As the pipe grows smaller, resistance increases, and it becomes more difficult for the water to flow; at the same time, the strength of the flow increases. By contrast, as the pipe grows larger, water flows more readily, but the strength of the flow decreases. A similar situation applies to current. Resistance and current are proportional to voltage, meaning that as either increases, so too will voltage.
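The Ohm's law relationship described above reduces to a one-line calculation. A minimal sketch with illustrative values:

```python
def voltage(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law: V = I * R."""
    return current_amps * resistance_ohms

# A 2 A current through a 10-ohm resistor requires 20 V across it.
print(voltage(2.0, 10.0))  # 20.0
```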
Method for measuring voltage
Multimeters (multi-testers) are used to measure voltage. In addition to voltage, multimeters can perform continuity checks and measure parameters such as current, resistance, temperature, and capacitance. Multimeters come in both analog and digital variants; digital models display values directly, making them the easiest to use and the least prone to reading errors.
To measure voltage with a multimeter, you connect positive and negative test leads and select a voltage measurement range. You then place the leads in contact with both ends of the circuit you wish to measure. When using an analog tester, you start with the largest voltage measurement range.
If the instrument does not respond, you then try progressively lower measurement ranges until you reach a range that can measure the circuit’s voltage. When using a digital tester, many models simplify the measurement process by adjusting the measurement range automatically.
Additional Information
The volt is the unit of electrical potential, potential difference, and electromotive force in the metre–kilogram–second system (SI); it is equal to the difference in potential between two points in a conductor carrying a current of one ampere when the power dissipated between the points is one watt. An equivalent is the potential difference across a resistance of one ohm when one ampere is flowing through it. The volt is named in honour of the 18th–19th-century Italian physicist Alessandro Volta. These units are defined in accordance with Ohm's law, that resistance equals the ratio of potential to current, and the respective units of ohm, volt, and ampere are used universally for expressing electrical quantities.
Gist
What is a metal? Metals are opaque, lustrous elements that are good conductors of heat and electricity. Most metals are malleable and ductile and are, in general, denser than the other elemental substances.
Summary
A metal is any of a class of substances characterized by high electrical and thermal conductivity as well as by malleability, ductility, and high reflectivity of light.
Approximately three-quarters of all known chemical elements are metals. The most abundant varieties in the Earth’s crust are aluminum, iron, calcium, sodium, potassium, and magnesium. The vast majority of metals are found in ores (mineral-bearing substances), but a few such as copper, gold, platinum, and silver frequently occur in the free state because they do not readily react with other elements.
Metals are usually crystalline solids. In most cases, they have a relatively simple crystal structure distinguished by a close packing of atoms and a high degree of symmetry. Typically, the atoms of metals contain less than half the full complement of electrons in their outermost shell. Because of this characteristic, metals tend not to form compounds with each other. They do, however, combine more readily with nonmetals (e.g., oxygen and sulfur), which generally have more than half the maximum number of valence electrons. Metals differ widely in their chemical reactivity. The most reactive include lithium, potassium, and radium, whereas those of low reactivity are gold, silver, palladium, and platinum.
The high electrical and thermal conductivities of the simple metals (i.e., the non-transition metals of the periodic table) are best explained by reference to the free-electron theory. According to this concept, the individual atoms in such metals have lost their valence electrons to the entire solid, and these free electrons that give rise to conductivity move as a group throughout the solid. In the case of the more complex metals (i.e., the transition elements), conductivities are better explained by the band theory, which takes into account not only the presence of free electrons but also their interaction with so-called d electrons.
The mechanical properties of metals, such as hardness, ability to resist repeated stressing (fatigue strength), ductility, and malleability, are often attributed to defects or imperfections in their crystal structure. The absence of a layer of atoms in its densely packed structure, for example, enables a metal to deform plastically, and prevents it from being brittle.
Details
A metal (from Ancient Greek (métallon) 'mine, quarry, metal') is a material that, when freshly prepared, polished, or fractured, shows a lustrous appearance, and conducts electricity and heat relatively well. Metals are typically ductile (can be drawn into wires) and malleable (they can be hammered into thin sheets). These properties are the result of the metallic bond between the atoms or molecules of the metal.
A metal may be a chemical element such as iron; an alloy such as stainless steel; or a molecular compound such as polymeric sulfur nitride.
In physics, a metal is generally regarded as any substance capable of conducting electricity at a temperature of absolute zero. Many elements and compounds that are not normally classified as metals become metallic under high pressures. For example, the nonmetal iodine gradually becomes a metal at a pressure of between 40 and 170 thousand times atmospheric pressure. Equally, some materials regarded as metals can become nonmetals. Sodium, for example, becomes a nonmetal at a pressure of just under two million times atmospheric pressure, though at even higher pressures it is expected to become a metal again.
In chemistry, two elements that would otherwise qualify (in physics) as brittle metals—arsenic and antimony—are commonly instead recognised as metalloids due to their chemistry (predominantly non-metallic for arsenic, and balanced between metallicity and nonmetallicity for antimony). Around 95 of the 118 elements in the periodic table are metals (or are likely to be such). The number is inexact as the boundaries between metals, nonmetals, and metalloids fluctuate slightly due to a lack of universally accepted definitions of the categories involved.
In astrophysics the term "metal" is cast more widely to refer to all chemical elements in a star that are heavier than helium, and not just traditional metals. In this sense the first four "metals" collecting in stellar cores through nucleosynthesis are carbon, nitrogen, oxygen, and neon, all of which are strictly non-metals in chemistry. A star fuses lighter atoms, mostly hydrogen and helium, into heavier atoms over its lifetime. Used in that sense, the metallicity of an astronomical object is the proportion of its matter made up of the heavier chemical elements.
Metals, as chemical elements, comprise 25% of the Earth's crust and are present in many aspects of modern life. The strength and resilience of some metals has led to their frequent use in, for example, high-rise building and bridge construction, as well as most vehicles, many home appliances, tools, pipes, and railroad tracks. Precious metals were historically used as coinage, but in the modern era, coinage metals have extended to at least 23 of the chemical elements.
The history of refined metals is thought to begin with the use of copper about 11,000 years ago. Gold, silver, iron (as meteoric iron), lead, and brass were likewise in use before the first known appearance of bronze in the fifth millennium BCE. Subsequent developments include the production of early forms of steel; the discovery of sodium—the first light metal—in 1809; the rise of modern alloy steels; and, since the end of World War II, the development of more sophisticated alloys.
Additional Information
Metals are a class of elements characterized by a tendency to give up electrons. Metals are good thermal and electrical conductors. At room temperature and normal atmospheric pressure, metals tend to be solids, except for mercury, which is a liquid. Metals are usually ductile, malleable, shiny, and can form alloys with other metals. Metals are tremendously important to a high energy society: they transport electricity in the electrical grid and provide many services. Various manufacturing processes around the world use more than 3 gigatonnes of metal every year. Industry uses more than 30 different metals, with the most used being iron (the biggest component of steel), chromium and manganese (both added in small quantities to iron to make different types of steel), aluminum, and copper.
Generally, metallic elements are not found as isolated atoms, and instead group together into larger structures. These structures resemble large, extended sheets with a repeating pattern in how the atoms group together. It is this sheet-like structure that makes metals so useful, allowing them to be pressed into sheets that are used in building cars, containers, and even jewelry. In general, metallic atomic structures are highly organized and layers of these atoms stack to form a three-dimensional solid.
Unlike certain diatomic molecules - like molecular hydrogen or molecular oxygen - metals stay together because inside of the metal the electrons flow freely in a type of "electron sea" instead of sharing electrons between atoms. These electrons float freely around the nuclei inside of the metal. This free flowing "electron sea" explains why metals conduct electricity so well, as the movement of electrons produces an electric current, and there are so many electrons that are free to move inside of metals.
Energy Use for Metals
Metals are necessary to maintain society's modern standard of living. This demand contributes to energy demand because the extraction and processing of metals is extremely energy intensive. A significant fraction of the world's energy supply goes into the mining of metals and turning them into useful products in the modern world. Since so much energy is involved in metal processing, metals carry an incredible amount of embedded energy. Aluminum, in particular, requires a significant amount of energy to extract and process. When countries are modernizing their economies - in places like BRIC or N11 countries - large amounts of energy are invested to obtain metal for rapidly increasing infrastructure.
Metals account for roughly 20% of industrial energy use and 7% of all primary energy use in the world.
Metals are a Resource
Metals are an important resource for the world today with almost countless applications. The most widely used metal is also the least expensive: iron. Iron is the main component of steel. The electrical grid couldn't exist without copper and steel. In fact all areas of electricity production and distribution require metal, from turbines used to transform mechanical energy into electricity to the distribution stations that bring electricity to homes.
In addition to the electrical grid, metals are a principal component of building construction (often in the form of steel reinforcement for concrete). Most vehicles are made from metal, from automobiles to airplanes. Common household appliances and electronic devices also use a significant amount of different metals. As technology advances, the number of elements used in society (including metals) increases.
Gist
Anticonvulsants, or antiepileptics, are an ever-growing class of medications that act through multiple different mechanisms to control seizures. Antiepileptic toxicity commonly presents with a triad of symptoms: central nervous system (CNS) depression, ataxia, and nystagmus.
Details
Anticonvulsants (also known as antiepileptic drugs, antiseizure drugs, and anti-seizure medications [ASM]) are a diverse group of pharmacological agents used in the treatment of epileptic seizures. Anticonvulsants are also increasingly being used in the treatment of bipolar disorder and borderline personality disorder, since many seem to act as mood stabilizers, and for the treatment of neuropathic pain. Anticonvulsants suppress the excessive rapid firing of neurons during seizures. Anticonvulsants also prevent the spread of the seizure within the brain.
Conventional antiepileptic drugs may block sodium channels or enhance γ-aminobutyric acid (GABA) function. Several antiepileptic drugs have multiple or uncertain mechanisms of action. Next to the voltage-gated sodium channels and components of the GABA system, their targets include GABAA receptors, the GAT-1 GABA transporter, and GABA transaminase. Additional targets include voltage-gated calcium channels, SV2A, and α2δ. By blocking sodium or calcium channels, antiepileptic drugs reduce the release of excitatory glutamate, whose release is considered to be elevated in epilepsy, but also that of GABA. This is probably a side effect or even the actual mechanism of action for some antiepileptic drugs, since GABA can itself, directly or indirectly, act proconvulsively. Another potential target of antiepileptic drugs is the peroxisome proliferator-activated receptor alpha.
Some anticonvulsants have shown antiepileptogenic effects in animal models of epilepsy. That is, they either prevent the development of epilepsy or can halt or reverse the progression of epilepsy. However, no drug has been shown in human trials to prevent epileptogenesis (the development of epilepsy in an individual at risk, such as after a head injury).
Terminology
Anticonvulsants are more accurately called antiepileptic drugs (abbreviated "AEDs"), and are often referred to as antiseizure drugs because they provide symptomatic treatment only and have not been demonstrated to alter the course of epilepsy.
Approval
The usual method of achieving approval for a drug is to show it is effective when compared against placebo, or that it is more effective than an existing drug. In monotherapy (where only one drug is taken) it is considered unethical by most to conduct a trial with placebo on a new drug of uncertain efficacy. This is because untreated epilepsy leaves the patient at significant risk of death. Therefore, almost all new epilepsy drugs are initially approved only as adjunctive (add-on) therapies. Patients whose epilepsy is uncontrolled by their medication (i.e., it is refractory to treatment) are selected to see if supplementing the medication with the new drug leads to an improvement in seizure control. Any reduction in the frequency of seizures is compared against a placebo. The lack of superiority over existing treatment, combined with the absence of placebo-controlled trials, means that few modern drugs have earned FDA approval as initial monotherapy. In contrast, Europe only requires equivalence to existing treatments and has approved many more. Despite their lack of FDA approval, the American Academy of Neurology and the American Epilepsy Society still recommend a number of these new drugs as initial monotherapy.
Gist
Dendrites are the structures on a neuron that receive electrical messages. Their function is to transfer the received information to the soma of the neuron; that is, dendrites receive data or signals from other neurons.
Summary
Dendrites are a collection of highly branched, tapering processes extending from the cell body (soma) of a neuron which conduct impulses toward the cell body.
Unlike axons, which are single, long processes that transmit impulses away from the cell body to other neurons, dendrites are a series of processes in the vicinity of the cell body which receive information from other neurons via synapses.
Dendrites are projections of a neuron (nerve cell) that receive signals (information) from other neurons. The transfer of information from one neuron to another is achieved through chemical signals and electric impulses, that is, electrochemical signals. The information transfer is usually received at the dendrites through chemical signals; it then travels to the cell body (soma), continues along the neuronal axon as electric impulses, and is finally transferred onto the next neuron at the synapse, which is the place where the two neurons exchange information through chemical signals. At the synapse, the end of one neuron meets the beginning of the other: its dendrites.
Dendrites Function
The functions of dendrites are to receive signals from other neurons, to process these signals, and to transfer the information to the soma of the neuron.
Receive Information
The dendrites resemble the branches of a tree in the sense that they extend from the soma or body of the neuron and open up into gradually smaller projections. At the end of these projections are the synapses, which is where the information transfer occurs. More specifically, synapses are the site where two neurons exchange signals: the upstream or pre-synaptic neuron releases neurotransmitters (usually at the end of the neuron, also called axonal terminal) and the downstream or post-synaptic neuron detects them (usually in the dendrites).
Details
A dendrite (from Greek δένδρον déndron, "tree") or dendron is a branched protoplasmic extension of a nerve cell that propagates the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually via their axons) via synapses which are located at various points throughout the dendritic tree.
Dendrites play a critical role in integrating these synaptic inputs and in determining the extent to which action potentials are produced by the neuron.
Structure and function
Dendrites are one of two types of protoplasmic protrusions that extrude from the cell body of a neuron, the other type being an axon. Axons can be distinguished from dendrites by several features including shape, length, and function. Dendrites often taper off in shape and are shorter, while axons tend to maintain a constant radius and can be very long. Typically, axons transmit electrochemical signals and dendrites receive the electrochemical signals, although some types of neurons in certain species lack specialized axons and transmit signals via their dendrites. Dendrites provide an enlarged surface area to receive signals from axon terminals of other neurons. The dendrite of a large pyramidal cell receives signals from about 30,000 presynaptic neurons. Excitatory synapses terminate on dendritic spines, tiny protrusions from the dendrite with a high density of neurotransmitter receptors. Most inhibitory synapses directly contact the dendritic shaft.
Synaptic activity causes local changes in the electrical potential across the plasma membrane of the dendrite. This change in membrane potential will passively spread along the dendrite, but becomes weaker with distance without an action potential. To generate an action potential, many excitatory synapses have to be active at the same time, leading to strong depolarization of the dendrite and the cell body (soma). The action potential, which typically starts at the axon hillock, propagates down the length of the axon to the axon terminals where it triggers the release of neurotransmitters, but also backwards into the dendrite (retrograde propagation), providing an important signal for spike-timing-dependent plasticity (STDP).
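The passive weakening with distance described above is commonly modeled with steady-state cable theory, where the potential decays exponentially over a characteristic length constant. A hedged sketch (the 200 µm length constant is purely illustrative; real dendrites vary widely):

```python
import math

def passive_attenuation(distance_um: float, length_constant_um: float = 200.0) -> float:
    """Fraction of a synaptic potential remaining after passive spread.

    Steady-state cable theory: V(x) = V(0) * exp(-x / lambda).
    The default 200 um length constant is illustrative only.
    """
    return math.exp(-distance_um / length_constant_um)

# One length constant away, roughly 37% of the original amplitude remains.
print(round(passive_attenuation(200.0), 3))  # 0.368
```

This is why many simultaneously active excitatory synapses are needed to depolarize the soma enough to trigger an action potential.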
Most synapses are axodendritic, involving an axon signaling to a dendrite. There are also dendrodendritic synapses, signaling from one dendrite to another. An autapse is a synapse in which the axon of one neuron transmits signals to its own dendrite.
The general structure of the dendrite is used to classify neurons into multipolar, bipolar and unipolar types. Multipolar neurons are composed of one axon and many dendritic trees. Pyramidal cells are multipolar cortical neurons with pyramid-shaped cell bodies and large dendrites that extend towards the surface of the cortex (apical dendrite). Bipolar neurons have two main dendrites at opposing ends of the cell body. Many inhibitory neurons have this morphology. Unipolar neurons, typical for insects, have a stalk that extends from the cell body that separates into two branches with one containing the dendrites and the other with the terminal buttons. In vertebrates, sensory neurons detecting touch or temperature are unipolar. Dendritic branching can be extensive and in some cases is sufficient to receive as many as 100,000 inputs to a single neuron.
History
The term dendrites was first used in 1889 by Wilhelm His to describe the smaller "protoplasmic processes" attached to a nerve cell. The German anatomist Otto Friedrich Karl Deiters is generally credited with the discovery of the axon by distinguishing it from the dendrites.
Some of the first intracellular recordings in a nervous system were made in the late 1930s by Kenneth S. Cole and Howard J. Curtis. The Swiss Rudolf Albert von Kölliker and the German Robert Remak were the first to identify and characterize the axonal initial segment. Alan Hodgkin and Andrew Huxley also employed the squid giant axon (1939), and by 1952 they had obtained a full quantitative description of the ionic basis of the action potential, leading to the formulation of the Hodgkin–Huxley model. Hodgkin and Huxley were jointly awarded the Nobel Prize for this work in 1963. The formulas detailing axonal conductance were extended to vertebrates in the Frankenhaeuser–Huxley equations. Louis-Antoine Ranvier was the first to describe the gaps or nodes found on axons, and for this contribution these axonal features are now commonly referred to as the nodes of Ranvier. Santiago Ramón y Cajal, a Spanish anatomist, proposed that axons were the output components of neurons. He also proposed that neurons were discrete cells that communicated with each other via specialized junctions, or spaces, between cells, now known as synapses. Ramón y Cajal improved a silver staining process known as Golgi's method, which had been developed by his rival, Camillo Golgi.
Dendrite development
During the development of dendrites, several factors can influence differentiation. These include modulation of sensory input, environmental pollutants, body temperature, and drug use. For example, rats raised in dark environments were found to have a reduced number of spines in pyramidal cells located in the primary visual cortex and a marked change in distribution of dendrite branching in layer 4 stellate cells. Experiments done in vitro and in vivo have shown that the presence of afferents and input activity per se can modulate the patterns in which dendrites differentiate.
Little is known about the process by which dendrites orient themselves in vivo and are compelled to create the intricate branching pattern unique to each specific neuronal class. One theory on the mechanism of dendritic arbor development is the synaptotropic hypothesis, which proposes that input from a presynaptic to a postsynaptic cell (and maturation of excitatory synaptic inputs) can eventually change the course of synapse formation at dendritic and axonal arbors.
This synapse formation is required for the development of neuronal structure in the functioning brain. A balance between the metabolic costs of dendritic elaboration and the need to cover the receptive field presumably determines the size and shape of dendrites. A complex array of extracellular and intracellular cues modulates dendrite development, including transcription factors, receptor–ligand interactions, various signaling pathways, local translational machinery, cytoskeletal elements, Golgi outposts, and endosomes. These contribute to the organization of the dendrites on individual cell bodies and the placement of those dendrites in the neuronal circuitry. For example, β-actin zipcode binding protein 1 (ZBP1) has been shown to contribute to proper dendritic branching.
Other important transcription factors involved in the morphology of dendrites include CUT, Abrupt, Collier, Spineless, ACJ6/drifter, CREST, NEUROD1, CREB, and NEUROG2. Secreted proteins and cell-surface receptors include neurotrophins and tyrosine kinase receptors, BMP7, Wnt/dishevelled, EPHB 1–3, Semaphorin/plexin-neuropilin, slit-robo, netrin-frazzled, and reelin. Rac, CDC42, and RhoA serve as cytoskeletal regulators, and motor proteins include KIF5, dynein, and LIS1. Important secretory and endocytic pathways controlling dendritic development include DAR3/SAR1, DAR2/Sec23, and DAR6/Rab1. All these molecules interplay with each other in controlling dendritic morphogenesis, including the acquisition of type-specific dendritic arborization, the regulation of dendrite size, and the organization of dendrites emanating from different neurons.
Types of dendritic patterns
Dendritic arborization, also known as dendritic branching, is a multi-step biological process by which neurons form new dendritic trees and branches to create new synapses. Dendrites in many organisms assume different morphological patterns of branching. The morphology of dendrites such as branch density and grouping patterns are highly correlated to the function of the neuron. Malformation of dendrites is also tightly correlated to impaired nervous system function.
Branching morphologies may assume an adendritic structure (not having a branching structure, or not tree-like) or a tree-like radiation structure. Tree-like arborization patterns can be spindled (where two dendrites radiate from opposite poles of a cell body with few branches; see bipolar neurons), spherical (where dendrites radiate in part or all directions from a cell body; see cerebellar granule cells), laminar (where dendrites can radiate either planarly, offset from the cell body by one or more stems, or multi-planarly; see retinal horizontal cells, retinal ganglion cells, and retinal amacrine cells, respectively), cylindrical (where dendrites radiate in all directions in a cylindrical, disk-like fashion; see pallidal neurons), conical (where dendrites radiate like a cone away from the cell body; see pyramidal cells), or fanned (where dendrites radiate like a flat fan, as in Purkinje cells).
Electrical properties
The structure and branching of a neuron's dendrites, as well as the availability and variation of voltage-gated ion conductances, strongly influence how the neuron integrates input from other neurons. This integration is both temporal, involving the summation of stimuli that arrive in rapid succession, and spatial, entailing the aggregation of excitatory and inhibitory inputs from separate branches.
Dendrites were once thought merely to convey electrical stimulation passively. This passive transmission means that voltage changes measured at the cell body are the result of the activation of distal synapses propagating the electric signal towards the cell body without the aid of voltage-gated ion channels. Passive cable theory describes how voltage changes at a particular location on a dendrite transmit this electrical signal through a system of converging dendrite segments of different diameters, lengths, and electrical properties. Based on passive cable theory, one can track how changes in a neuron's dendritic morphology impact the membrane voltage at the cell body, and thus how variation in dendrite architecture affects the overall output characteristics of the neuron.
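The attenuation predicted by passive cable theory can be sketched numerically. In the steady state along an infinite passive cable, the voltage falls off exponentially as V(x) = V0·exp(−x/λ), where λ is the membrane length constant. The λ = 0.5 mm used below is an assumed, order-of-magnitude value for illustration only.

```python
import math

# Steady-state attenuation along a passive (infinite) cable:
#   V(x) = V0 * exp(-x / lam)
# where lam is the membrane length constant.
# lam = 0.5 mm is an assumed illustrative value.

def attenuated_voltage(v0_mv, distance_mm, lam_mm=0.5):
    """Voltage remaining at `distance_mm` from a synapse on a passive cable."""
    return v0_mv * math.exp(-distance_mm / lam_mm)

v0 = 10.0  # depolarization at the synapse (mV)
for d in (0.0, 0.5, 1.0):
    print(f"{d} mm from synapse: {attenuated_voltage(v0, d):.2f} mV")
```

The sketch shows why a distal synapse contributes far less depolarization at the soma than a proximal one: at one length constant from the synapse, only about 37% of the original signal remains.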
Action potentials initiated at the axon hillock propagate back into the dendritic arbor. These back-propagating action potentials depolarize the dendritic membrane and provide a crucial signal for synapse modulation and long-term potentiation. Back-propagation is not completely passive, but modulated by the presence of dendritic voltage-gated potassium channels. Furthermore, in certain types of neurons, a train of back-propagating action potentials can induce a calcium action potential (a dendritic spike) at dendritic initiation zones.
Plasticity
Dendrites themselves appear to be capable of plastic changes during the adult life of animals, including invertebrates. Neuronal dendrites have various compartments, known as functional units, that are able to compute incoming stimuli. These functional units are involved in processing input and are composed of subdomains of dendrites such as spines, branches, or groupings of branches. Plasticity that leads to changes in dendrite structure will therefore affect communication and processing in the cell. During development, dendrite morphology is shaped by intrinsic programs within the cell's genome and by extrinsic factors such as signals from other cells. In adult life, however, extrinsic signals become more influential and cause more significant changes in dendrite structure than intrinsic signals do during development. In females, the dendritic structure can change as a result of physiological conditions induced by hormones during pregnancy, lactation, and the estrous cycle. This is particularly visible in pyramidal cells of the CA1 region of the hippocampus, where the density of dendrites can vary by up to 30%.
Recent experimental observations suggest that adaptation is performed within neuronal dendritic trees, on timescales as short as several seconds. Certain machine learning architectures based on dendritic trees have been shown to simplify the learning algorithm without affecting performance.
Gist
Lactose is a sugar that is naturally found in milk and milk products, like cheese or ice cream. In lactose intolerance, digestive symptoms are caused by lactose malabsorption. Lactose malabsorption is a condition in which your small intestine cannot digest, or break down, all the lactose you eat or drink.
Summary
Lactose is a carbohydrate containing one molecule of glucose and one of galactose linked together. Composing about 2 to 8 percent of the milk of all mammals, lactose is sometimes called milk sugar. It is the only common sugar of animal origin. Lactose can be prepared from whey, a by-product of the cheese-making process. Fermentation of lactose by microorganisms such as Lactobacillus acidophilus is part of the industrial production of lactic acid. Human lactose intolerance is indicated by diarrhea and abdominal bloating and discomfort; lactose intolerance also may be a cause of diarrhea in newborns.
Details
Lactose, or milk sugar, is a disaccharide composed of galactose and glucose subunits and has the molecular formula C12H22O11. Lactose makes up around 2–8% of milk (by mass). The name comes from lac (gen. lactis), the Latin word for milk, plus the suffix -ose used to name sugars. The compound is a white, water-soluble, non-hygroscopic solid with a mildly sweet taste. It is used in the food industry.
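The molecular formula C12H22O11 fixes the molar mass of lactose, which can be checked with a short calculation from standard atomic weights:

```python
# Molar mass of lactose from its molecular formula C12H22O11,
# using standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 12, "H": 22, "O": 11}  # atoms per lactose molecule

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"Lactose (C12H22O11): {molar_mass:.2f} g/mol")  # about 342.30 g/mol
```

The result, roughly 342.3 g/mol, is the same for lactose, sucrose, and maltose, since all three disaccharides share the formula C12H22O11.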
Structure and reactions
Lactose is a disaccharide derived from the condensation of galactose and glucose, which form a β-1→4 glycosidic linkage. Its systematic name is β-D-galactopyranosyl-(1→4)-D-glucose. The glucose can be in either the α-pyranose form or the β-pyranose form, whereas the galactose can only have the β-pyranose form: hence α-lactose and β-lactose refer to the anomeric form of the glucopyranose ring alone. Detection reactions for lactose are the Woehlk test and Fearon's test. Both can easily be used in school experiments to visualise the differing lactose contents of dairy products such as whole milk, lactose-free milk, yogurt, buttermilk, coffee creamer, sour cream, and kefir.
Lactose is hydrolysed to glucose and galactose, isomerised in alkaline solution to lactulose, and catalytically hydrogenated to the corresponding polyhydric alcohol, lactitol. Lactulose is a commercial product, used for treatment of constipation.
Occurrence and isolation
Lactose composes about 2–8% of milk by weight. Several million tons are produced annually as a by-product of the dairy industry.
Whey or milk plasma is the liquid remaining after milk is curdled and strained, for example in the production of cheese. Whey is made up of 6.5% solids, of which 4.8% is lactose, which is purified by crystallisation. Industrially, lactose is produced from whey permeate, that is, whey filtered to remove all major proteins. The protein fraction is used in infant nutrition and sports nutrition, while the permeate can be evaporated to 60–65% solids and crystallized while cooling. Lactose can also be isolated by dilution of whey with ethanol.
Dairy products such as yogurt and cheese contain very little lactose, because the bacteria used to make these products break down lactose using the enzyme lactase.
Metabolism
Infant mammals nurse on their mothers to drink milk, which is rich in lactose. The intestinal villi secrete the enzyme lactase (β-D-galactosidase) to digest it. This enzyme cleaves the lactose molecule into its two subunits, the simple sugars glucose and galactose, which can be absorbed. Since lactose occurs mostly in milk, lactase production in most mammals is genetically programmed to decrease gradually with maturity.
Many people with ancestry in Europe, West Asia, South Asia, the Sahel belt in West Africa, East Africa and a few other parts of Central Africa maintain lactase production into adulthood. In many of these areas, milk from mammals such as cattle, goats, and sheep is used as a large source of food. Hence, it was in these regions that genes for lifelong lactase production first evolved. The genes of adult lactose tolerance have evolved independently in various ethnic groups. By descent, more than 70% of western Europeans can digest lactose as adults, compared with less than 30% of people from areas of Africa, eastern and south-eastern Asia and Oceania. In people who are lactose intolerant, lactose is not broken down and provides food for gas-producing gut flora, which can lead to diarrhea, bloating, flatulence, and other gastrointestinal symptoms.
Biological properties
The sweetness of lactose is 0.2 to 0.4, relative to 1.0 for sucrose. For comparison, the sweetness of glucose is 0.6 to 0.7, of fructose is 1.3, of galactose is 0.5 to 0.7, of maltose is 0.4 to 0.5, of sorbose is 0.4, and of xylose is 0.6 to 0.7.
When lactose is completely digested in the small intestine, its caloric value is 4 kcal/g, or the same as that of other carbohydrates. However, lactose is not always fully digested in the small intestine. Depending on ingested dose, combination with meals (either solid or liquid), and lactase activity in the intestines, the caloric value of lactose ranges from 2 to 4 kcal/g. Undigested lactose acts as dietary fiber. It also has positive effects on absorption of minerals, such as calcium and magnesium.
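As a rough worked example using the figures above (the 250 g serving size is an assumed value), the energy contributed by the lactose in a glass of milk spans a wide range:

```python
# Energy from lactose in a glass of milk, using figures from the text:
# milk is roughly 2-8% lactose by mass, and lactose yields 2-4 kcal/g
# depending on how completely it is digested.
# The 250 g serving size is an assumption for illustration.

serving_g = 250.0

for lactose_frac in (0.02, 0.05, 0.08):
    lactose_g = serving_g * lactose_frac
    low_kcal = lactose_g * 2.0   # poorly digested lactose
    high_kcal = lactose_g * 4.0  # fully digested lactose
    print(f"{lactose_frac:.0%} lactose: {lactose_g:.1f} g "
          f"-> {low_kcal:.0f}-{high_kcal:.0f} kcal")
```

Even at the high end, lactose contributes only a modest share of the serving's total energy, and incomplete digestion can halve that contribution.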
The glycemic index of lactose is 46 to 65. For comparison, the glycemic index of glucose is 100 to 138, of sucrose is 68 to 92, of maltose is 105, and of fructose is 19 to 27.
Lactose has relatively low cariogenicity among sugars. This is because it is not a substrate for dental plaque formation and it is not rapidly fermented by oral bacteria. The buffering capacity of milk also reduces the cariogenicity of lactose.
Applications
Lactose's mild flavor and easy handling properties have led to its use as a carrier and stabiliser of aromas and pharmaceutical products. Lactose is not added directly to many foods, because its solubility is lower than that of other sugars commonly used in food. Infant formula is a notable exception, where the addition of lactose is necessary to match the composition of human milk.
Lactose is not fermented by most yeast during brewing, which may be used to advantage. For example, lactose may be used to sweeten stout beer; the resulting beer is usually called a milk stout or a cream stout.
Yeast belonging to the genus Kluyveromyces have a unique industrial application, as they are capable of fermenting lactose for ethanol production. Surplus lactose from the whey by-product of dairy operations is a potential source of alternative energy.
Another significant lactose use is in the pharmaceutical industry. Lactose is added to tablet and capsule drug products as an ingredient because of its physical and functional properties. For similar reasons, it can be used to dilute illicit drugs.