Math Is Fun Forum


#1 Re: Dark Discussions at Cafe Infinity » crème de la crème » Today 18:04:36

2405) Alan Hodgkin

Gist

Work

The nervous system in people and animals consists of many different cells. In cells, signals are conveyed by small electrical currents and by chemical substances. By measuring changes in electrical charges in the very large nerve fibre of a squid, Alan Hodgkin and Andrew Huxley were able to show how nerve impulses are exchanged between cells. In 1952 they could demonstrate that a fundamental mechanism involves the passage of sodium and potassium ions in opposite directions in and out through the cell membrane, which gives rise to electrical charges.

Summary

Sir Alan Hodgkin (born February 5, 1914, Banbury, Oxfordshire, England—died December 20, 1998, Cambridge) was an English physiologist and biophysicist, who received (with Andrew Fielding Huxley and Sir John Eccles) the 1963 Nobel Prize for Physiology or Medicine for the discovery of the chemical processes responsible for the passage of impulses along individual nerve fibres.

Hodgkin was educated at Trinity College, Cambridge. After conducting radar research (1939–45) for the British Air Ministry, he joined the faculty at Cambridge, where he worked (1945–52) with Huxley on measuring the electrical and chemical behaviour of individual nerve fibres. By inserting microelectrodes into the giant nerve fibres of the squid Loligo forbesi, they were able to show that the electrical potential of a fibre during conduction of an impulse exceeds the potential of the fibre at rest, contrary to the accepted theory, which postulated a breakdown of the nerve membrane during impulse conduction.

They knew that the activity of a nerve fibre depends on the fact that a large concentration of potassium ions is maintained inside the fibre, while a large concentration of sodium ions is found in the surrounding solution. Their experimental results (1947) indicated that the nerve membrane allows only potassium to enter the fibre during the resting phase but allows sodium to penetrate when the fibre is excited.

Hodgkin served as a research professor for the Royal Society (1952–69), professor of biophysics at Cambridge (from 1970), chancellor of the University of Leicester (1971–84), and master of Trinity College (1978–85). He was knighted in 1972 and admitted into the Order of Merit in 1973. Publications by Hodgkin include Conduction of the Nervous Impulse (1964) and his autobiography, Chance and Design: Reminiscences of Science in Peace and War (1992).

Details

Sir Alan Lloyd Hodgkin (5 February 1914 – 20 December 1998) was a British physiologist and biophysicist who shared the 1963 Nobel Prize in Physiology or Medicine with Andrew Huxley and John Eccles.

Early life and education

Hodgkin was born in Banbury, Oxfordshire, on 5 February 1914. He was the oldest of three sons of Quakers George Hodgkin and Mary Wilson Hodgkin. His father was the son of Thomas Hodgkin and had read for the Natural Science Tripos at Cambridge where he had befriended electrophysiologist Keith Lucas.

Because of poor eyesight, he was unable to study medicine and eventually ended up working for a bank in Banbury. As members of the Society of Friends, George and Mary opposed the Military Service Act of 1916, which introduced conscription, and had to endure a great deal of abuse from their local community, including an attempt to throw George in one of the town canals. In 1916, George Hodgkin travelled to Armenia as part of an investigation of distress. Moved by the misery and suffering of Armenian refugees he attempted to go back there in 1918 on a route through the Persian Gulf (as the northern route was closed because of the October Revolution in Russia). He died of dysentery in Baghdad on 24 June 1918, just a few weeks after his youngest son, Keith, had been born.

From an early age, Hodgkin and his brothers were encouraged to explore the country around their home, which instilled in Alan an interest in natural history, particularly ornithology. At the age of 15, he helped Wilfred Backhouse Alexander with surveys of heronries and later, at Gresham's School, he overlapped and spent a lot of time with David Lack. In 1930, he won a bronze medal in the Public Schools Essay Competition organised by the Royal Society for the Protection of Birds.

School and university

Alan started his education at The Downs School, where his contemporaries included the future scientists Frederick Sanger and Alec Bangham ("neither outstandingly brilliant at school", according to Hodgkin), as well as the future artists Lawrence Gowing and Kenneth Rowntree. After the Downs School, he went on to Gresham's School, where he overlapped with the future composer Benjamin Britten as well as Maury Meiklejohn. He ended up receiving a scholarship at Trinity College, Cambridge in botany, zoology and chemistry.

Between school and college, he spent May 1932 at the Freshwater Biological Station at Wray Castle on the recommendation of his future Director of Studies at Trinity, Carl Pantin. After Wray Castle, he spent two months with a German family in Frankfurt as "in those days it was thought highly desirable that anyone intending to read science should have a reasonable knowledge of German." After his return to England in early August 1932, his mother Mary remarried, to Lionel Smith (1880–1972), the eldest son of A. L. Smith, whose daughter Dorothy was also married to Alan's uncle Robert Howard Hodgkin.

In the autumn of 1932, Hodgkin started as a freshman scholar at Trinity College where his friends included classicists John Raven and Michael Grant, fellow-scientists Richard Synge and John H. Humphrey, as well as Polly and David Hill, the children of Nobel laureate Archibald Hill. He took physiology with chemistry and zoology for the first two years, including lectures by Nobel laureate E.D. Adrian. For Part II of the tripos he decided to focus on physiology instead of zoology. Nevertheless, he participated in a zoological expedition to the Atlas Mountains in Morocco led by John Pringle in 1934. He finished Part II of the tripos in July 1935 and stayed at Trinity as a research fellow.

During his studies, Hodgkin, who described himself as "having been brought up as a supporter of the British Labour Party" was friends with communists and actively participated in the distribution of anti-war pamphlets. At Cambridge, he knew James Klugmann and John Cornford, but he emphasised in his autobiography that none of his friends "made any serious effort to convert me [to Communism], either then or later." From 1935 to 1937, Hodgkin was a member of the Cambridge Apostles.

Pre-war research

In July 1934, Hodgkin started conducting experiments on how electrical activity is transmitted in the sciatic nerve of frogs. He found that a nerve impulse arriving at a cold or compression block can decrease the electrical threshold beyond the block, suggesting that the impulse produces a spread of an electrotonic potential in the nerve beyond the block. In 1936, Hodgkin was invited by Herbert Gasser, then director of the Rockefeller Institute in New York City, to work in his laboratory during 1937–38. There he met Rafael Lorente de Nó and Kenneth Stewart Cole, with whom he ended up publishing a paper. During that year he also spent time at the Woods Hole Marine Biological Laboratory, where he was introduced to the squid giant axon, which ended up being the model system with which he conducted most of the research that eventually led to his Nobel Prize. In the spring of 1938, he visited Joseph Erlanger at Washington University in St. Louis, who told him he would take Hodgkin's local circuit theory of nerve impulse propagation seriously if he could show that altering the resistance of the fluid outside a nerve fibre made a difference to the velocity of nerve impulse conduction. Working with single nerve fibres from shore crabs and squids, he showed that the conduction rate was much faster in seawater than in oil, providing strong evidence for the local circuit theory.

After his return to Cambridge he started collaborating with Andrew Huxley who had entered Trinity as a freshman in 1935, three years after Hodgkin. With a £300 equipment grant from the Rockefeller Foundation, Hodgkin managed to set up a similar physiology setup to the one he had worked with at the Rockefeller Institute. He moved all his equipment to the Plymouth Marine Laboratory in July 1939. There, he and Huxley managed to insert a fine cannula into the giant axon of squids and record action potentials from inside the nerve fibre. They sent a short note of their success to Nature just before the outbreak of World War II.

Later career and administrative positions

From 1951 to 1969, Hodgkin was the Foulerton Professor of the Royal Society at Cambridge. In 1970 he became the John Humphrey Plummer Professor of Biophysics at Cambridge. Around this time he also ended his experiments on nerve at the Plymouth Marine Laboratory and switched his focus to visual research which he could do in Cambridge with the help of others while serving as president of the Royal Society. Together with Denis Baylor and Peter Detwiler he published a series of papers on turtle photoreceptors.

From 1970 to 1975 Hodgkin served as the 53rd president of the Royal Society (PRS). During his tenure as PRS, he was knighted in 1972 and admitted into the Order of Merit in 1973. From 1978 to 1985 he was the 34th Master of Trinity College, Cambridge.

He served on the Royal Society Council from 1958 to 1960 and on the Medical Research Council from 1959 to 1963. He was foreign secretary of the Physiological Society from 1961 to 1967. He also held additional administrative posts such as Chancellor, University of Leicester, from 1971 to 1984.


#2 Re: This is Cool » Miscellany » Today 17:20:40

2458) RADAR

Gist

RADAR stands for Radio Detection And Ranging. It is a system that uses radio waves to detect, locate, and track objects: it sends out signals and analyzes the returning echoes to determine distance, speed, and direction, and it remains effective even in poor visibility for things like aircraft, ships, weather, and vehicles. It is crucial in aviation, shipping, meteorology, and traffic control.

The principle of Radar (Radio Detection And Ranging) is to detect objects by sending out electromagnetic waves (radio waves) and analyzing the returning echoes, much like SONAR uses sound. A transmitter sends pulses, an antenna radiates them, and the signal reflects off targets, returning as weak echoes. By measuring the time delay and direction of these returning signals, the radar system calculates an object's distance, bearing (direction), and velocity (using Doppler shift), displaying the information visually. 

Summary

Radar is a system that uses radio waves to determine the distance (ranging), direction (azimuth and elevation angles), and radial velocity of objects relative to the site. It is a radiodetermination method used to detect and track aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations and terrain. The term RADAR was coined in 1940 by the United States Navy as an acronym for "radio detection and ranging". The term radar has since entered English and other languages as an anacronym, a common noun, losing all capitalization.

A radar system consists of a transmitter producing electromagnetic waves in the radio or microwave domain, a transmitting antenna, a receiving antenna (often the same antenna is used for transmitting and receiving) and a receiver and processor to determine properties of the objects. Radio waves (pulsed or continuous) from the transmitter reflect off the objects and return to the receiver, giving information about the objects' locations and speeds. This device was developed secretly for military use by several countries in the period before and during World War II. A key development was the cavity magnetron in the United Kingdom, which allowed the creation of relatively small systems with sub-meter resolution.

The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, anti-missile systems, marine radars to locate landmarks and other ships, aircraft anti-collision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, radar remote sensing, altimetry and flight control systems, guided missile target locating systems, self-driving cars, and ground-penetrating radar for geological observations. Modern high-tech radar systems use digital signal processing and machine learning and are capable of extracting useful information from very high noise levels.

Other systems which are similar to radar make use of other regions of the electromagnetic spectrum. One example is lidar, which uses predominantly infrared light from lasers rather than radio waves. With the emergence of driverless vehicles, radar is expected to assist the automated platform to monitor its environment, thus preventing unwanted incidents.

Details

A radar is an electromagnetic sensor used for detecting, locating, tracking, and recognizing objects of various kinds at considerable distances. It operates by transmitting electromagnetic energy toward objects, commonly referred to as targets, and observing the echoes returned from them. The targets may be aircraft, ships, spacecraft, automotive vehicles, and astronomical bodies, or even birds, insects, and rain. Besides determining the presence, location, and velocity of such objects, radar can sometimes obtain their size and shape as well. What distinguishes radar from optical and infrared sensing devices is its ability to detect faraway objects under adverse weather conditions and to determine their range, or distance, with precision.

Radar is an “active” sensing device in that it has its own source of illumination (a transmitter) for locating targets. It typically operates in the microwave region of the electromagnetic spectrum—measured in hertz (cycles per second), at frequencies extending from about 400 megahertz (MHz) to 40 gigahertz (GHz). It has, however, been used at lower frequencies for long-range applications (frequencies as low as several megahertz, which is the HF [high-frequency], or shortwave, band) and at optical and infrared frequencies (those of laser radar, or lidar). The circuit components and other hardware of radar systems vary with the frequency used, and systems range in size from those small enough to fit in the palm of the hand to those so enormous that they would fill several football fields.

Radar underwent rapid development during the 1930s and ’40s to meet the needs of the military. It is still widely employed by the armed forces, where many technological advances have originated. At the same time, radar has found an increasing number of important civilian applications, notably air traffic control, weather observation, remote sensing of the environment, aircraft and ship navigation, speed measurement for industrial applications and for law enforcement, space surveillance, and planetary observation.

Fundamentals of radar

Radar typically involves radiating a narrow beam of electromagnetic energy into space from an antenna. The narrow antenna beam scans a region where targets are expected. When a target is illuminated by the beam, it intercepts some of the radiated energy and reflects a portion back toward the radar system. Since most radar systems do not transmit and receive at the same time, a single antenna is often used on a time-shared basis for both transmitting and receiving.

A receiver attached to the output element of the antenna extracts the desired reflected signals and (ideally) rejects those that are of no interest. For example, a signal of interest might be the echo from an aircraft. Signals that are not of interest might be echoes from the ground or rain, which can mask and interfere with the detection of the desired echo from the aircraft. The radar measures the location of the target in range and angular direction. Range, or distance, is determined by measuring the total time it takes for the radar signal to make the round trip to the target and back. The angular direction of a target is found from the direction in which the antenna points at the time the echo signal is received. Through measurement of the location of a target at successive instants of time, the target’s recent track can be determined. Once this information has been established, the target’s future path can be predicted. In many surveillance radar applications, the target is not considered to be “detected” until its track has been established.

Pulse radar

The most common type of radar signal consists of a repetitive train of short-duration pulses. Consider, as an illustration, a sine-wave pulse that might be generated by the transmitter of a medium-range radar designed for aircraft detection; the sine wave represents the variation with time of the output voltage of the transmitter. The numbers used below are meant only to be illustrative and are not necessarily those of any particular radar. They are, however, similar to what might be expected for a ground-based radar system with a range of about 50 to 60 nautical miles (90 to 110 km), such as the kind used for air traffic control at airports. Take the pulse width to be 1 microsecond (10⁻⁶ second). In a schematic drawing such a pulse is often shown containing only a few cycles of the sine wave; in a radar system having the values indicated, however, there would be 1,000 cycles within the pulse. Take the time between successive pulses to be 1 millisecond (10⁻³ second), which corresponds to a pulse repetition frequency of 1 kilohertz (kHz). The power of the pulse, called the peak power, is taken here to be 1 megawatt. Since a pulse radar does not radiate continually, the average power is much less than the peak power; in this example, the average power is 1 kilowatt. The average power, rather than the peak power, is the measure of the capability of a radar system. Radars have average powers from a few milliwatts to as much as one or more megawatts, depending on the application.
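
To check that these illustrative numbers hang together, here is a minimal Python sketch (the variable names are mine; the values are the ones quoted above) computing the duty cycle and the average power:

    # Average power of a pulse radar = peak power x duty cycle,
    # where duty cycle = pulse width x pulse repetition frequency.
    pulse_width = 1e-6      # seconds (1 microsecond, as in the example)
    prf = 1e3               # pulses per second (1 kHz)
    peak_power = 1e6        # watts (1 megawatt)

    duty_cycle = pulse_width * prf            # fraction of time actually transmitting
    average_power = peak_power * duty_cycle   # watts

    print(f"duty cycle    = {duty_cycle:.1%}")            # 0.1%
    print(f"average power = {average_power / 1e3:.0f} kW")  # 1 kW, matching the text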

A weak echo signal from a target might be as low as 1 picowatt (10⁻¹² watt). In short, the power levels in a radar system can be very large (at the transmitter) and very small (at the receiver).

Another example of the extremes encountered in a radar system is the timing. An air-surveillance radar (one that is used to search for aircraft) might scan its antenna 360 degrees in azimuth in a few seconds, but the pulse width might be about one microsecond in duration. Some radar pulse widths are even of nanosecond (10⁻⁹ second) duration.

Radar waves travel through the atmosphere at roughly 300,000 km per second (the speed of light). The range to a target is determined by measuring the time that a radar signal takes to travel out to the target and back. The range to the target is equal to cT/2, where c = velocity of propagation of radar energy, and T = round-trip time as measured by the radar. From this expression, range accumulates at a rate of c/2, or 150,000 km for every second of round-trip time. For example, if the time that it takes the signal to travel out to the target and back is measured by the radar to be 0.0006 second (600 microseconds), then the range to the target is 90 km. The ability to measure the range to a target accurately at long distances and under adverse weather conditions is radar's most distinctive attribute. There are no other devices that can compete with radar in the measurement of range.
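
The same arithmetic, as a small Python sketch using only the values from the example above (the variable names are mine):

    # Radar range from round-trip echo time: R = c*T/2.
    c = 300_000.0     # propagation speed, km per second (speed of light)
    T = 0.0006        # measured round-trip time, seconds (600 microseconds)

    range_km = c * T / 2
    print(f"target range = {range_km:.0f} km")   # 90 km, as in the example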

The range accuracy of a simple pulse radar depends on the width of the pulse: the shorter the pulse, the better the accuracy. Short pulses, however, require wide bandwidths in the receiver and transmitter (since bandwidth is equal to the reciprocal of the pulse width). A radar with a pulse width of one microsecond can measure the range to an accuracy of a few tens of metres or better. Some special radars can measure to an accuracy of a few centimetres. The ultimate range accuracy of the best radars is limited by the known accuracy of the velocity at which electromagnetic waves travel.
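
To make the trade-off concrete, the sketch below tabulates the nominal range resolution (the standard c·τ/2 relation, which follows from the bandwidth being the reciprocal of the pulse width) and the corresponding bandwidth for a few pulse widths. The names are mine, and note that measurement accuracy can be a fraction of this nominal resolution when the signal-to-noise ratio is high, which is how a one-microsecond radar achieves the few-tens-of-metres accuracy quoted above.

    # Pulse width tau sets the nominal range resolution (c*tau/2) and the
    # approximate receiver/transmitter bandwidth (1/tau) of a simple pulse radar.
    c = 3e8                          # speed of light, m/s

    for tau in (1e-6, 1e-7, 1e-9):   # 1 us, 100 ns, 1 ns pulses
        resolution_m = c * tau / 2
        bandwidth_hz = 1 / tau
        print(f"tau = {tau:.0e} s: resolution ~ {resolution_m:g} m, "
              f"bandwidth ~ {bandwidth_hz:.0e} Hz")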

Directive antennas and target direction

Almost all radars use a directive antenna—i.e., one that directs its energy in a narrow beam. (The beamwidth of an antenna of fixed size is inversely proportional to the radar frequency.) The direction of a target can be found from the direction in which the antenna is pointing when the received echo is at a maximum. A precise means for determining the direction of a target is the monopulse method—in which information about the angle of a target is obtained by comparing the amplitudes of signals received from two or more simultaneous receiving beams, each slightly offset (squinted) from the antenna's central axis. A dedicated tracking radar—one that automatically follows a single target so as to determine its trajectory—generally has a narrow, symmetrical "pencil" beam. (A typical beamwidth might be about 1 degree.) Such a radar system can determine the location of the target in both azimuth angle and elevation angle. An aircraft-surveillance radar generally employs an antenna that radiates a "fan" beam, one that is narrow in azimuth (about 1 or 2 degrees) and broad in elevation (elevation beamwidths from 20 to 40 degrees or more). A fan beam allows only the measurement of the azimuth angle.
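
The inverse relation between beamwidth and frequency can be illustrated with a common textbook rule of thumb that is not stated in the passage itself: for an aperture of diameter D at wavelength λ, the beamwidth is roughly 70·λ/D degrees. A sketch under that assumption:

    # Rule-of-thumb beamwidth (degrees) ~ 70 * wavelength / aperture diameter.
    # The factor 70 is a textbook approximation, not a value from the passage.
    c = 3e8   # speed of light, m/s

    def beamwidth_deg(freq_hz, diameter_m):
        wavelength = c / freq_hz
        return 70 * wavelength / diameter_m

    # A ~7 m antenna at 3 GHz gives roughly the 1-degree "pencil" beam
    # mentioned above; halving the frequency doubles the beamwidth.
    print(f"{beamwidth_deg(3e9, 7.0):.1f} degrees")    # ~1.0
    print(f"{beamwidth_deg(1.5e9, 7.0):.1f} degrees")  # ~2.0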

Doppler frequency and target velocity

Radar can extract the Doppler frequency shift of the echo produced by a moving target by noting how much the frequency of the received signal differs from the frequency of the signal that was transmitted. (The Doppler effect in radar is similar to the change in audible pitch experienced when a train whistle or the siren of an emergency vehicle moves past the listener.) A moving target will cause the frequency of the echo signal to increase if it is approaching the radar or to decrease if it is receding from the radar. For example, if a radar system operates at a frequency of 3,000 MHz and an aircraft is moving toward it at a speed of 400 knots (740 km per hour), the frequency of the received echo signal will be greater than that of the transmitted signal by about 4.1 kHz. The Doppler frequency shift in hertz is equal to 3.4 f0vr, where f0 is the radar frequency in gigahertz and vr is the radial velocity (the rate of change of range) in knots.
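
The 3.4 rule of thumb is just the exact Doppler relation f_d = 2·vr·f0/c with the unit conversions (gigahertz, knots) folded into the constant. A minimal Python sketch reproducing the 4.1-kHz example (the function name is mine):

    # Doppler shift of a radar echo from a closing target: f_d = 2 * v_r * f0 / c.
    c = 3e8          # speed of light, m/s
    KNOT = 0.5144    # metres per second in one knot

    def doppler_shift_hz(f0_ghz, v_r_knots):
        return 2 * (v_r_knots * KNOT) * (f0_ghz * 1e9) / c

    # 3,000-MHz radar, target approaching at 400 knots:
    print(f"{doppler_shift_hz(3.0, 400) / 1e3:.1f} kHz")   # ~4.1 kHz, as in the text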

Since the Doppler frequency shift is proportional to radial velocity, a radar system that measures such a shift in frequency can provide the radial velocity of a target. The Doppler frequency shift can also be used to separate moving targets from stationary targets even when the echo signal from undesired clutter is much more powerful than the echo from the desired moving targets. A form of pulse radar that uses the Doppler frequency shift to eliminate stationary clutter is called either a moving-target indication (MTI) radar or a pulse Doppler radar, depending on the particular parameters of the signal waveform.

The above measurements of range, angle, and radial velocity assume that the target is a “point-scatterer.” Actual targets, however, are of finite size and can have distinctive shapes. The range profile of a finite-sized target can be determined if the range resolution of the radar is small compared with the target’s size in the range dimension. (The range resolution of a radar, given in units of distance, is a measure of the ability of a radar to separate two closely spaced echoes.) Some radars can have resolutions much smaller than one metre, which is quite suitable for determining the radial size and profile of many targets of interest.

The resolution in angle, or cross range, that can be obtained with conventional antennas is poor compared with that which can be obtained in range. It is possible, however, to achieve good resolution in angle by resolving in Doppler frequency (i.e., separating one Doppler frequency from another). If the radar is moving relative to the target (as when the radar is on an aircraft and the target is the ground), the Doppler frequency shift will be different for different parts of the target. Thus, the Doppler frequency shift can allow the various parts of the target to be resolved. The resolution in cross range derived from the Doppler frequency shift is far better than that achieved with a narrow-beam antenna. It is not unusual for the cross-range resolution obtained from Doppler frequency to be comparable to that obtained in the range dimension.

Radar imaging

Radar can distinguish one kind of target from another (such as a bird from an aircraft), and some systems are able to recognize specific classes of targets (for example, a commercial airliner as opposed to a military jet fighter). Target recognition is accomplished by measuring the size and speed of the target and by observing the target with high resolution in one or more dimensions. Propellers and jet engines modify the radar echo from aircraft and can assist in target recognition. The flapping of the wings of a bird in flight produces a characteristic modulation that can be used to recognize that a bird is present or even to distinguish one type of bird from another.

Cross-range resolution obtained from Doppler frequency, along with range resolution, is the basis for synthetic aperture radar (SAR). SAR produces an image of a scene that is similar, but not identical, to an optical photograph. One should not expect the image seen by radar “eyes” to be the same as that observed by optical eyes. Each provides different information. Radar and optical images differ because of the large difference in the frequencies involved; optical frequencies are approximately 100,000 times higher than radar frequencies.

SAR can operate from long range and through clouds or other atmospheric effects that limit optical and infrared imaging sensors. The resolution of a SAR image can be made independent of range, an advantage over passive optical imaging where the resolution worsens with increasing range. Synthetic aperture radars that map areas of the Earth’s surface with resolutions of a few metres can provide information about the nature of the terrain and what is on the surface.

A SAR operates on a moving vehicle, such as an aircraft or spacecraft, to image stationary objects or planetary surfaces. Since relative motion is the basis for the Doppler resolution, high resolution (in cross range) also can be accomplished if the radar is stationary and the target is moving. This is called inverse synthetic aperture radar (ISAR). Both the target and the radar can be in motion with ISAR.

Additional Information

Radars are critical for understanding the weather; they allow us to “see” inside clouds and help us to observe what is really happening. Working together, engineers, technicians, and scientists collectively design, develop and operate the advanced technology of radars that are used to study the atmosphere.

What are Weather Radars?

Doppler weather radars are remote sensing instruments capable of detecting particle type (rain, snow, hail, insects, etc.), intensity, and motion. Radar data can be used to determine the structure of storms and to help predict their severity.

The Electromagnetic Spectrum

Energy is emitted at various frequencies and wavelengths, from long-wavelength radio waves to shorter-wavelength gamma rays. Radars emit microwave energy, which lies toward the longer-wavelength end of the spectrum.

How Do Radars Work?

The radar transmits a focused pulse of microwave energy (yup, just like a microwave oven or a cell phone, but stronger) at an object, most likely a cloud. Part of this beam of energy bounces back and is measured by the radar, providing information about the object. Radar can measure precipitation size, quantity, speed and direction of movement, within about a 100-mile radius of its location.

Doppler radar is a specific type of radar that uses the Doppler effect to gather velocity data from the particles that are being measured. For example, a Doppler radar transmits a signal that gets reflected off raindrops within a storm. The reflected radar signal is measured by the radar's receiver with a change in frequency. That frequency shift is directly related to the motion of the raindrops.

Why does NCAR use radars for research?

Atmospheric scientists use different types of ground-based and aircraft-mounted radar to study weather and climate. Radar can be used to help study severe weather events such as tornadoes and hurricanes, or long-term climate processes in the atmosphere.

Ground-based Research Radar

The NCAR S-Band Dual-Polarization Doppler Radar (S-Pol) is a 10-cm wavelength weather radar initially designed and fielded by NCAR in the 1990s. Continuously modified and improved, this state-of-the-art radar system now includes dual-wavelength capability: when the Ka-band radar, with its 0.8-cm wavelength, is added, the system is known as S-PolKa. S-PolKa's mission is to promote a better understanding of weather and its causes and thereby ultimately provide improved forecasting of severe storms, tornadoes, floods, hail, damaging winds, aircraft icing conditions, and heavy snow.

Airborne Research Radar

In the air, research aircraft can be outfitted with an array of radars. The NCAR HIAPER Cloud Radar (HCR) can be mounted to the underside of the wing of the NSF/NCAR HIAPER research aircraft (a modified Gulfstream V jet) and delivers high quality observations of winds, precipitation and other particles. It was designed and manufactured by a collaborative team of mechanical, electrical, aerospace, and software engineers; research scientists; and instrument makers from EOL.


#3 Dark Discussions at Cafe Infinity » Collapse/Collapsed/Collapsing Quotes - II » Today 16:25:06

Jai Ganesh
Replies: 0

Collapse/Collapsed/Collapsing Quotes - II

1. I come from a country in which I experienced economic collapse. - Angela Merkel

2. I can feel it in my bones that no matter what we do, even if we do not do anything, the revolutionary government of Madame Cory Aquino will collapse. - Ferdinand Marcos

3. If the U.N. is ineffective, the whole concept of multilateralism will collapse. - Sushma Swaraj

4. I would push myself so much that in the end I would collapse and I would have to be admitted to hospital, I would pray to God to save me, promise that I would be more careful in future. And then I would do it all over again. - Milkha Singh

5. My life collapsed. People ran from me because suddenly it was 'Oh my God! It's over for her now!' - Nicole Kidman

6. We would look up at the night sky together, and although Stephen wasn't actually very good at detecting constellations, he would tell me about the expanding universe and the possibility of it contracting again and describe a star collapsing in on itself to form a black hole in a way that was quite easy to understand. - Jane Hawking

#4 Jokes » Carrot Jokes - I » Today 15:59:55

Jai Ganesh
Replies: 0

Q: Did you hear about the carrot detective?
A: He got to the root of every case.
* * *
Q: How can you make a soup rich?
A: Add 14 carrots (carats) to it.
* * *
Q: Why is a carrot orange and pointy?
A: Because if it was green and round it would want to pea!
* * *
Q: How do you kill a salad?
A: You go for the carrot-id artery.
* * *
Q: When does a carrot wear a mask?
A: To the mascarrot ball. (Masquerade).
* * *

#5 Re: Jai Ganesh's Puzzles » General Quiz » Today 15:51:34

Hi,

#10689. What does the Geography term Compass rose mean?

#10690. What does the Geography term Confluence mean?

#6 Re: Jai Ganesh's Puzzles » English language puzzles » Today 15:19:05

Hi,

#5885. What does the verb (used with object) qualify mean?

#5886. What does the adjective qualitative mean?

#8 Re: Jai Ganesh's Puzzles » Doc, Doc! » Today 14:24:36

Hi,

#2538. What does the medical term Colectomy mean?

#12 Science HQ » Hypotension » Today 00:50:16

Jai Ganesh
Replies: 0

Hypotension

Gist

Hypotension (low blood pressure) means blood flows with less than normal force, typically below 90/60 mmHg, potentially depriving organs of oxygen and causing dizziness, fatigue, blurred vision, or fainting. It can stem from dehydration, blood loss, medications, or underlying conditions like heart issues, with treatment focusing on managing the cause, increasing fluids and salt, or using compression stockings. While some people have naturally low blood pressure without issues, sudden drops (as in orthostatic hypotension) are common and require prompt attention to prevent serious complications like shock.

Hypotension treatment focuses on raising blood pressure through lifestyle changes (more water, added salt with a doctor's advice, smaller meals, avoiding sudden movements, compression stockings) and, if needed, medications like midodrine or fludrocortisone, or IV fluids for severe cases. The choice depends on the underlying cause, and severe drops require emergency care.

Summary

Hypotension is a condition in which the blood pressure is abnormally low, either because of reduced blood volume or because of increased blood-vessel capacity. Though not in itself an indication of ill health, it often accompanies disease.

Extensive bleeding is an obvious cause of reduced blood volume that leads to hypotension. There are other possible causes. A person who has suffered an extensive burn loses blood plasma—blood minus the red and white blood cells and the platelets. Blood volume is reduced in a number of conditions involving loss of salt and water from the tissues—as in excessive sweating and diarrhea—and its replacement with water from the blood. Loss of water from the blood to the tissues may result from exposure to cold temperatures. Also, a person who remains standing for as long as one-half hour may temporarily lose as much as 15 percent of the blood water into the tissues of the legs.

Orthostatic hypotension—low blood pressure upon standing up—seems to stem from a failure in the autonomic nervous system. Normally, when a person stands up, there is a reflex constriction of the small arteries and veins to offset the effects of gravity. Hypotension from an increase in the capacity of the blood vessels is a factor in fainting (see syncope). Hypotension is also a factor in poliomyelitis, in shock, and in overdose of depressant drugs, such as barbiturates.

Details

Hypotension, also known as low blood pressure, is a cardiovascular condition characterized by abnormally reduced blood pressure. Blood pressure is the force of blood pushing against the walls of the arteries as the heart pumps out blood and is indicated by two numbers, the systolic blood pressure (the top number) and the diastolic blood pressure (the bottom number), which are the maximum and minimum blood pressures within the cardiac cycle, respectively. A systolic blood pressure of less than 90 millimeters of mercury (mmHg) or diastolic of less than 60 mmHg is generally considered to be hypotension. Different numbers apply to children. However, in practice, blood pressure is considered too low only if noticeable symptoms are present.

Symptoms may include dizziness, lightheadedness, confusion, feeling tired, weakness, headache, blurred vision, nausea, neck or back pain, an irregular heartbeat or feeling that the heart is skipping beats or fluttering, and fainting. Hypotension is the opposite of hypertension, which is high blood pressure. It is best understood as a physiological state rather than a disease. Severely low blood pressure can deprive the brain and other vital organs of oxygen and nutrients, leading to a life-threatening condition called shock. Shock is classified based on the underlying cause, including hypovolemic shock, cardiogenic shock, distributive shock, and obstructive shock.

Hypotension can be caused by strenuous exercise, excessive heat, low blood volume (hypovolemia), hormonal changes, widening of blood vessels, anemia, vitamin B12 deficiency, anaphylaxis, heart problems, or endocrine problems. Some medications can also lead to hypotension. There are also syndromes that can cause hypotension in patients including orthostatic hypotension, vasovagal syncope, and other rarer conditions.

For many people, excessively low blood pressure can cause dizziness and fainting or indicate serious heart, endocrine or neurological disorders.

For some people who exercise and are in top physical condition, low blood pressure could be normal. A single session of exercise can induce hypotension, and water-based exercise can induce a hypotensive response.

Treatment depends on the cause of the low blood pressure. Treatment of hypotension may include the use of intravenous fluids or vasopressors. When using vasopressors, trying to achieve a mean arterial pressure (MAP) of greater than 70 mmHg does not appear to result in better outcomes than trying to achieve an MAP of greater than 65 mmHg in adults.

Additional Information

Low blood pressure is a reading below 90/60 mm Hg. Many issues can cause low blood pressure. Treatment varies depending on what’s causing it. Symptoms of low blood pressure include dizziness and fainting, but many people don’t have symptoms. The cause also affects your prognosis.

Overview:

What is low blood pressure?

Hypotension, or low blood pressure, is when your blood pressure is much lower than expected. It can happen either as a condition on its own or as a symptom of a wide range of conditions. It may not cause symptoms. But when it does, you may need medical attention.

Types of low blood pressure

Hypotension has two definitions:

* Absolute hypotension: Your resting blood pressure is below 90/60 millimeters of mercury (mm Hg).
* Orthostatic hypotension: Your blood pressure stays low for longer than three minutes after you stand up from a sitting position. (It’s normal for your blood pressure to drop briefly when you change positions, but not for that long.) The drop must be 20 mm Hg or more for your systolic (top) pressure and 10 mm Hg or more for your diastolic (bottom) pressure. Another name for this is postural hypotension because it happens with changes in posture.

Measuring blood pressure involves two numbers:

* Systolic (top number): This is the pressure on your arteries each time your heart beats.
* Diastolic (bottom number): This is how much pressure your arteries are under between heartbeats.

What is considered low blood pressure?

Low blood pressure is below 90/60 mm Hg. Normal blood pressure is above that, up to 120/80 mm Hg.
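
As a plain illustration of these numeric cutoffs, here is a short Python sketch. It simply encodes the 90/60 mm Hg resting threshold and the 20/10 mm Hg orthostatic-drop criterion quoted above; the function names are mine, and this is not a diagnostic tool.

    # Illustrative only: encodes the numeric cutoffs quoted above.
    def is_absolute_hypotension(systolic, diastolic):
        # Resting reading below 90/60 mm Hg.
        return systolic < 90 or diastolic < 60

    def is_orthostatic_hypotension(sys_before, dia_before, sys_standing, dia_standing):
        # Drop of >= 20 mm Hg systolic or >= 10 mm Hg diastolic that
        # persists more than three minutes after standing up.
        return (sys_before - sys_standing) >= 20 or (dia_before - dia_standing) >= 10

    print(is_absolute_hypotension(85, 55))              # True: below 90/60
    print(is_orthostatic_hypotension(120, 80, 95, 72))  # True: 25 mm Hg systolic drop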

How common is low blood pressure?

Because low blood pressure is common without any symptoms, it's impossible to know how many people it actually affects. However, orthostatic hypotension becomes increasingly common with age. An estimated 5% of people have it at age 50, while that figure climbs to more than 30% in people over 70.

Who does low blood pressure affect?

Hypotension can affect people of any age and background, depending on why it happens. However, it’s more likely to cause symptoms in people over 50 (especially orthostatic hypotension). It can also happen (with no symptoms) to people who are very physically active, which is more common in younger people.

Symptoms and Causes:

What are the symptoms of low blood pressure?

Low blood pressure symptoms include:

* Dizziness or feeling lightheaded.
* Fainting or passing out (syncope).
* Nausea or vomiting.
* Distorted or blurred vision.
* Fast, shallow breathing.
* Fatigue or weakness.
* Feeling tired, sluggish or lethargic.
* Confusion or trouble concentrating.
* Agitation or other unusual changes in behavior (a person not acting like themselves).

For people with symptoms, the effects depend on why hypotension is happening, how fast it develops and what caused it. Slow decreases in blood pressure happen normally, so hypotension becomes more common as people get older. Fast decreases in blood pressure can mean certain parts of your body aren’t getting enough blood flow. That can have effects that are unpleasant, disruptive or even dangerous.

Usually, your body can automatically control your blood pressure and keep it from dropping too much. If it starts to drop, your body tries to make up for that, either by speeding up your heart rate or constricting blood vessels to make them narrower. Symptoms of hypotension happen when your body can’t offset the drop in blood pressure.

For many people, hypotension doesn’t cause any symptoms. Many people don’t even know their blood pressure is low unless they measure their blood pressure.

What are the possible signs of low blood pressure?

Your healthcare provider may observe these signs of low blood pressure:

* A heart rate that’s too slow or too fast.
* A skin color that looks lighter than it usually does.
* Cool kneecaps.
* Low cardiac output (how much blood your heart pumps).
* Low urine (pee) output.

What causes low blood pressure?

Hypotension can happen for a wide range of reasons. Causes of low blood pressure include:

* Orthostatic hypotension: This happens when you stand up too quickly and your body can’t compensate with more blood flow to your brain.
* Central nervous system diseases: Conditions like Parkinson’s disease can affect how your nervous system controls your blood pressure. People with these conditions may feel the effects of low blood pressure after eating because their digestive systems use more blood as they digest food.
* Low blood volume: Blood loss from severe injuries can cause low blood pressure. Dehydration can also contribute to low blood volume.
* Life-threatening conditions: These conditions include irregular heart rhythms (arrhythmias), pulmonary embolism (PE), heart attacks and collapsed lung. Life-threatening allergic reactions (anaphylaxis) or immune reactions to severe infections (sepsis) can also cause hypotension.
* Heart and lung conditions: You can get hypotension when your heart beats too quickly or too slowly, or if your lungs aren’t working as they should. Advanced heart failure (weak heart muscle) is another cause.
* Prescription medications: Hypotension can happen with medications that treat high blood pressure, heart failure, erectile dysfunction, neurological problems, depression and more. Don’t stop taking any prescribed medicine unless your provider tells you to stop.
* Alcohol or recreational drugs: Recreational drugs can lower your blood pressure, as can alcohol (for a short time). Certain herbal supplements, vitamins or home remedies can also lower your blood pressure. This is why you should always include these when you tell your healthcare provider what medications you’re taking.
* Pregnancy: Orthostatic hypotension is possible in the first and second trimesters of pregnancy. Bleeding or other complications of pregnancy can also cause low blood pressure.
* Extreme temperatures: Being too hot or too cold can trigger hypotension or make its effects worse.

What are the complications of low blood pressure?

Complications that can happen because of hypotension include:

* Falls and fall-related injuries: These are the biggest risks with hypotension because it can cause dizziness and fainting. Falls can lead to broken bones, concussions and other serious or even life-threatening injuries. If you have hypotension, preventing falls should be one of your biggest priorities.
* Shock: When your blood pressure is low, that can affect your organs by reducing the amount of blood they get. That can cause organ damage or even shock (where your body starts to shut down because of limited blood flow and oxygen).
* Heart problems or stroke: Low blood pressure can cause your heart to try to compensate by pumping faster or harder. Over time, that can cause permanent heart damage and even heart failure. It can also cause problems like deep vein thrombosis (DVT) and stroke because blood isn’t flowing like it should, causing clots to form.

Diagnosis and Tests:

How is low blood pressure diagnosed?

Hypotension itself is easy to diagnose. Taking your blood pressure is all you need to do. But figuring out why you have hypotension is another story. If you have symptoms, a healthcare provider will likely use a variety of tests to figure out why it’s happening and if there’s any danger to you because of it.

What tests will be done to diagnose low blood pressure?

Your provider may recommend the following tests:

Lab testing

Tests on your blood and pee (urine) can look for any potential problems, like:

* Diabetes.
* Vitamin deficiencies.
* Thyroid or hormone problems.
* Low iron levels (anemia).
* Pregnancy (for anyone who can become pregnant).

Imaging

If providers suspect a heart or lung problem is behind your hypotension, they’ll likely use imaging tests to see if they’re right. These tests include:

* X-rays.
* Computed tomography (CT) scans.
* Magnetic resonance imaging (MRI).
* Echocardiogram or similar ultrasound-based tests.

Diagnostic testing

These tests look for specific problems with your heart or other body systems.

* Electrocardiogram (ECG or EKG).
* Exercise stress testing.
* Tilt table test (can help in diagnosing orthostatic hypotension).

blood-pressure-monitor-low-bp-page.png

#13 This is Cool » Mobile Phone » Yesterday 21:59:23

Jai Ganesh
Replies: 0

Mobile Phone

Gist

A cell phone (or mobile phone) is a portable, wireless device that uses cellular networks (radio waves connecting to cell towers) for voice calls and data. It has evolved from a simple phone into a powerful smartphone with internet, apps, cameras, and more, largely replacing traditional phones for communication.

A cellphone, also known as a mobile phone, allows users to make and receive calls over a radio frequency network while on the move. Modern cellphones support additional services beyond calls such as texting, multimedia messaging, email, internet access, Bluetooth, apps, and photos.

Summary

A mobile phone or cell phone is a portable wireless telephone that allows users to make and receive calls over a radio frequency link while moving within a designated telephone service area, unlike fixed-location phones (landline phones). This radio frequency link connects to the switching systems of a mobile phone operator, providing access to the public switched telephone network (PSTN). Modern mobile telephony relies on a cellular network architecture, which is why mobile phones are often referred to as 'cell phones' in North America.

Beyond traditional voice communication, digital mobile phones have evolved to support a wide range of additional services. These include text messaging, multimedia messaging, email, and internet access (via LTE, 5G NR or Wi-Fi), as well as short-range wireless technologies like Bluetooth, infrared, and ultra-wideband (UWB).

Mobile phones also support a variety of multimedia capabilities, such as digital photography, video recording, and gaming. In addition, they enable multimedia playback and streaming, including video content, as well as radio and television streaming. Furthermore, mobile phones offer satellite-based services, such as navigation and messaging, as well as business applications and payment solutions (via scanning QR codes or near-field communication (NFC)). Mobile phones offering only basic features are often referred to as feature phones (slang: dumbphones), while those with advanced computing power are known as smartphones.

The first handheld mobile phone was demonstrated by Martin Cooper of Motorola in New York City on 3 April 1973, using a handset weighing c. 2 kilograms (4.4 lbs). In 1979, Nippon Telegraph and Telephone (NTT) launched the world's first cellular network in Japan. In 1983, Motorola's DynaTAC 8000x became the first commercially available handheld mobile phone. From 1993 to 2024, worldwide mobile phone subscriptions grew to over 9.1 billion, enough to provide one for every person on Earth. In 2024, the top smartphone manufacturers worldwide were Samsung, Apple and Xiaomi; smartphone sales represented about 50 percent of total mobile phone sales. For feature phones, as of 2016 the top-selling brands were Samsung, Nokia and Alcatel.

Mobile phones are considered an important human invention as they have been one of the most widely used and sold pieces of consumer technology. The growth in popularity has been rapid in some places; for example, in the UK, the total number of mobile phones overtook the number of houses in 1999. Today, mobile phones are globally ubiquitous, and in almost half the world's countries, over 90% of the population owns at least one.

Details

A cell phone is a wireless telephone that permits telecommunication within a defined area that may include hundreds of square miles, using radio waves in the 800–900 megahertz (MHz) band. To implement a cell-phone system, a geographic area is broken into smaller areas, or cells, usually mapped as uniform hexagons but in fact overlapping and irregularly shaped. Each cell is equipped with a low-powered radio transmitter and receiver that permit propagation of signals between cell-phone users.

Cellular telephones, or simply cell phones, are portable devices that may be used in motor vehicles or by pedestrians. Communicating by radio waves, they permit a significant degree of mobility within a defined serving region that may range in area from a few city blocks to hundreds of square kilometres. The first mobile and portable subscriber units for cellular systems were large and heavy. With significant advances in component technology, though, the weight and size of portable transceivers have been significantly reduced. In this section, the concept of cell phones and the development of cellular systems are discussed.

Cellular communication

All cellular telephone systems exhibit several fundamental characteristics, as summarized in the following:

1) The geographic area served by a cellular system is broken up into smaller geographic areas, or cells. Uniform hexagons most frequently are employed to represent these cells on maps and diagrams; in practice, though, radio waves do not confine themselves to hexagonal areas, so the actual cells have irregular shapes.
2) All communication with a mobile or portable instrument within a given cell is made to a base station that serves the cell.
3) Because of the low transmitting power of battery-operated portable instruments, specific sending and receiving frequencies assigned to a cell may be reused in other cells within the larger geographic area. Thus, the spectral efficiency of a cellular system (that is, the uses to which it can put its portion of the radio spectrum) is increased by a factor equal to the number of times a frequency may be reused within its service area. (A worked sketch of this reuse arithmetic follows this list.)
4) As a mobile instrument proceeds from one cell to another during the course of a call, a central controller automatically reroutes the call from the old cell to the new cell without a noticeable interruption in the signal reception. This process is known as handoff. The central controller, or mobile telephone switching office (MTSO), thus acts as an intelligent central office switch that keeps track of the movement of the mobile subscriber.
5) As demand for the radio channels within a given cell increases beyond the capacity of that cell (as measured by the number of calls that may be supported simultaneously), the overloaded cell is “split” into smaller cells, each with its own base station and central controller. The radio-frequency allocations of the original cellular system are then rearranged to account for the greater number of smaller cells.
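
As a toy illustration of the reuse arithmetic in point 3 above (the cluster size and cell count here are assumed values for illustration, not figures from the text), the following Python sketch shows how sharing a fixed channel pool across reuse clusters multiplies total capacity:

    # Illustrative sketch: frequency reuse multiplies system capacity.
    # Cluster size and cell count are assumptions, not from the passage.
    total_channels = 832     # e.g., the AMPS channel count discussed below
    cluster_size = 7         # cells per reuse cluster (a common choice)
    num_cells = 100          # cells covering the whole service area

    channels_per_cell = total_channels // cluster_size   # 118 channels per cell
    capacity = channels_per_cell * num_cells             # simultaneous calls overall

    print(f"without reuse: {total_channels} simultaneous calls")
    print(f"with reuse:    {capacity} simultaneous calls")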

Frequency reuse between discontiguous cells and the splitting of cells as demand increases are the concepts that distinguish cellular systems from other wireless telephone systems. They allow cellular providers to serve large metropolitan areas that may contain hundreds of thousands of customers.

Development of cellular systems

In the United States, interconnection of mobile transmitters and receivers with the public switched telephone network (PSTN) began in 1946, with the introduction of mobile telephone service (MTS) by the American Telephone & Telegraph Company (AT&T). In the U.S. MTS system, a user who wished to place a call from a mobile phone had to search manually for an unused channel before placing the call. The user then spoke with a mobile operator, who actually dialed the call over the PSTN. The radio connection was simplex—i.e., only one party could speak at a time, the call direction being controlled by a push-to-talk switch in the mobile handset. In 1964 AT&T introduced the improved mobile telephone service (IMTS). This provided full duplex operation, automatic dialing, and automatic channel searching. Initially 11 channels were provided, but in 1969 an additional 12 channels were made available. Since only 11 (or 12) channels were available for all users of the system within a given geographic area (such as the metropolitan area around a large city), the IMTS system faced a high demand for a very limited channel resource. Moreover, each base-station antenna had to be located on a tall structure and had to transmit at high power in order to provide coverage throughout the entire service area. Because of these high power requirements, all subscriber units in the IMTS system were motor-vehicle-based instruments that carried large storage batteries.

During this time a truly cellular system, known as the advanced mobile phone system, or AMPS, was developed primarily by AT&T and Motorola, Inc. AMPS was based on 666 paired voice channels, spaced every 30 kilohertz in the 800-megahertz region. The system employed an analog modulation approach—frequency modulation, or FM—and was designed from the outset to support subscriber units for use both in automobiles and by pedestrians. It was publicly introduced in Chicago in 1983 and was a success from the beginning. At the end of the first year of service, there were a total of 200,000 AMPS subscribers throughout the United States; five years later there were more than 2,000,000. In response to expected service shortages, the American cellular industry proposed several methods for increasing capacity without requiring additional spectrum allocations. One analog FM approach, proposed by Motorola in 1991, was known as narrowband AMPS, or NAMPS. In NAMPS systems each existing 30-kilohertz voice channel was split into three 10-kilohertz channels. Thus, in place of the 832 channels available in AMPS systems, the NAMPS system offered 2,496 channels. A second approach, developed by a committee of the Telecommunications Industry Association (TIA) in 1988, employed digital modulation and digital voice compression in conjunction with a time-division multiple access (TDMA) method; this also permitted three new voice channels in place of one AMPS channel. Finally, in 1994 there surfaced a third approach, developed originally by Qualcomm, Inc., but also adopted as a standard by the TIA. This third approach used a form of spread spectrum multiple access known as code-division multiple access (CDMA)—a technique that, like the original TIA approach, combined digital voice compression with digital modulation. (For more information on the techniques of information compression, signal modulation, and multiple access, see telecommunications.) The CDMA system offered 10 to 20 times the capacity of existing AMPS cellular techniques. All of these improved-capacity cellular systems were eventually deployed in the United States, but, since they were incompatible with one another, they supported rather than replaced the older AMPS standard.

Although AMPS was the first cellular system to be developed, a Japanese system was the first cellular system to be deployed, in 1979. Other systems that preceded AMPS in operation include the Nordic mobile telephone (NMT) system, deployed in 1981 in Denmark, Finland, Norway, and Sweden, and the total access communication system (TACS), deployed in the United Kingdom in 1983. A number of other cellular systems were developed and deployed in many more countries in the following years. All of them were incompatible with one another. In 1988 a group of government-owned public telephone bodies within the European Community announced the digital global system for mobile communications, referred to as GSM, the first such system that would permit any cellular user in one European country to operate in another European country with the same equipment. GSM soon became ubiquitous throughout Europe.

The analog cellular systems of the 1980s are now referred to as “first-generation” (or 1G) systems, and the digital systems that began to appear in the late 1980s and early ’90s are known as the “second generation” (2G). Since the introduction of 2G cell phones, various enhancements have been made in order to provide data services and applications such as Internet browsing, two-way text messaging, still-image transmission, and mobile access by personal computers. One of the most successful applications of this kind is i-mode, launched in 1999 in Japan by NTT DoCoMo, the mobile service division of the Nippon Telegraph and Telephone Corporation. Supporting Internet access to selected Web sites, interactive games, information retrieval, and text messaging, i-mode became extremely successful; within three years of its introduction, more than 35 million users in Japan had i-mode-enabled cell phones.

Beginning in 1985, a study group of the Geneva-based International Telecommunication Union (ITU) began to consider specifications for Future Public Land Mobile Telephone Systems (FPLMTS). These specifications eventually became the basis for a set of “third-generation” (3G) cellular standards, known collectively as IMT-2000. The 3G standards are based loosely on several attributes: the use of CDMA technology; the ability eventually to support three classes of users (vehicle-based, pedestrian, and fixed); and the ability to support voice, data, and multimedia services. The world’s first 3G service began in Japan in October 2001 with a system offered by NTT DoCoMo. Soon 3G service was being offered by a number of different carriers in Japan, South Korea, the United States, and other countries. Several new types of service compatible with the higher data rates of 3G systems have become commercially available, including full-motion video transmission, image transmission, location-aware services (through the use of global positioning system [GPS] technology), and high-rate data transmission.

The increasing demand for mobile telephones to handle even more data than 3G could support led to the development of 4G technology. In 2008 the ITU set forward a list of requirements for what it called IMT-Advanced, or 4G; these requirements included data rates of 1 gigabit per second for a stationary user and 100 megabits per second for a moving user. The ITU decided in 2010 that two technologies, LTE-Advanced (an evolution of Long Term Evolution, or LTE) and WirelessMAN-Advanced (also called WiMAX), met the requirements. The Swedish telephone company TeliaSonera introduced the first 4G LTE network in Stockholm in 2009.
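
As a rough illustration of what those IMT-Advanced targets mean in practice, the sketch below compares download times at the two required rates. The 1-gigabyte file size is an assumption chosen purely for illustration.

```python
# Download-time comparison at the IMT-Advanced (4G) target rates.
# The 1 Gbit/s and 100 Mbit/s figures come from the text; the 1 GB
# file size is illustrative only.

FILE_BITS = 8 * 10**9   # a 1-gigabyte file, expressed in bits

for label, rate_bps in [("stationary user, 1 Gbit/s", 10**9),
                        ("moving user, 100 Mbit/s", 10**8)]:
    print(f"{label}: {FILE_BITS / rate_bps:.0f} s per GB")   # 8 s vs 80 s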

Airborne cellular systems

In addition to the terrestrial cellular phone systems described above, there also exist several systems that permit the placement of telephone calls to the PSTN by passengers on commercial aircraft. These in-flight telephones, known by the generic name aeronautical public correspondence (APC) systems, are of two types: terrestrial-based, in which telephone calls are placed directly from an aircraft to an en route ground station; and satellite-based, in which telephone calls are relayed via satellite to a ground station. In the United States the North American terrestrial system (NATS) was introduced by GTE Corporation in 1984. Within a decade the system was installed in more than 1,700 aircraft, with ground stations in the United States providing coverage over most of the United States and southern Canada. A second-generation system, GTE Airfone GenStar, employed digital modulation. In Europe the European Telecommunications Standards Institute (ETSI) adopted a terrestrial APC system known as the terrestrial flight telephone system (TFTS) in 1992. This system employs digital modulation methods and operates in the 1,670–1,675- and 1,800–1,805-megahertz bands. In order to cover most of Europe, the ground stations must be spaced every 50 to 700 km (30 to 435 miles).

Satellite-based telephone communication

In order to augment the terrestrial and aircraft-based mobile telephone systems, several satellite-based systems have been put into operation. The goal of these systems is to permit ready connection to the PSTN from anywhere on Earth’s surface, especially in areas not presently covered by cellular telephone service. A form of satellite-based mobile communication has been available for some time in airborne cellular systems that utilize Inmarsat satellites. However, the Inmarsat satellites are geostationary, remaining approximately 35,000 km (22,000 miles) above a single location on Earth’s surface. Because of this high-altitude orbit, Earth-based communication transceivers require high transmitting power, large communication antennas, or both in order to communicate with the satellite. In addition, such a long communication path introduces a noticeable delay, on the order of a quarter-second, in two-way voice conversations. One viable alternative to geostationary satellites would be a larger system of satellites in low Earth orbit (LEO). Orbiting less than 1,600 km (1,000 miles) above Earth, LEO satellites are not geostationary and therefore cannot provide constant coverage of specific areas on Earth. Nevertheless, by allowing radio communications with a mobile instrument to be handed off between satellites, an entire constellation of satellites can assure that no call will be dropped simply because a single satellite has moved out of range.
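
The quarter-second figure quoted above follows directly from the path length. Here is a minimal sketch of the calculation, assuming a straight up-and-down path (real paths are slanted and slightly longer, so this understates the delay a little):

```python
# Propagation delay for a ground -> satellite -> ground hop, using the
# round altitudes quoted in the text and assuming a vertical path.

C_KM_PER_S = 299_792   # speed of light in vacuum

def hop_delay_ms(altitude_km: float) -> float:
    """Up-and-down propagation delay in milliseconds."""
    return 2 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (~35,000 km): {hop_delay_ms(35_000):.0f} ms")   # ~233 ms, the quarter second
print(f"LEO (~1,600 km):  {hop_delay_ms(1_600):.0f} ms")    # roughly 20x shorter
```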

The first LEO system intended for commercial service was the Iridium system, designed by Motorola, Inc., and owned by Iridium LLC, a consortium made up of corporations and governments from around the world. The Iridium concept employed a constellation of 66 satellites orbiting in six planes around Earth. They were launched from May 1997 to May 1998, and commercial service began in November 1998. Each satellite, orbiting at an altitude of 778 kilometres (483 miles), had the capability to transmit 48 spot beams to Earth. Meanwhile, all the satellites were in communication with one another via 23-gigahertz radio “crosslinks,” thus permitting ready handoff between satellites when communicating with a fixed or mobile user on Earth. The crosslinks provided an uninterrupted communication path between the satellite serving a user at any particular instant and the satellite connecting the entire constellation with the gateway ground station to the PSTN. In this way, the 66 satellites provided continuous telephone communication service for subscriber units around the globe. However, the service failed to attract sufficient subscribers, and Iridium LLC went out of business in March 2000. Its assets were acquired by Iridium Satellite LLC, which continued to provide worldwide communication service to the U.S. Department of Defense as well as business and individual users.

Another LEO system, Globalstar, consisted of 48 satellites that were launched about the same time as the Iridium constellation. Globalstar began offering service in October 1999, though it too went into bankruptcy, in February 2002; a reorganized Globalstar LP continued to provide service thereafter.

Smartphone

A smartphone is a mobile telephone with a display screen (typically a liquid crystal display, or LCD), built-in personal information management programs (such as an electronic calendar and address book), and an operating system (OS) that allows other computer software to be installed for Web browsing, email, music, video, and other applications. A smartphone may be thought of as a handheld computer integrated within a mobile telephone.

The first smartphone was designed by IBM and sold by BellSouth (formerly part of the AT&T Corporation) in 1993. It included a touchscreen interface for accessing its calendar, address book, calculator, and other functions. As the market matured and solid-state computer memory and integrated circuits became less expensive over the following decade, smartphones became more computer-like, and more advanced services, such as Internet access, became possible. Advanced services became ubiquitous with the introduction of the so-called third-generation (3G) mobile phone networks in 2001. Before 3G, most mobile phones could send and receive data at a rate sufficient for telephone calls and text messages. Using 3G, communication takes place at bit-rates high enough for sending and receiving photographs, video clips, music files, e-mails, and more. Most smartphone manufacturers license an operating system, such as Microsoft Corporation’s Windows Mobile OS, Symbian OS, Google’s Android OS, or Palm OS. Research in Motion’s BlackBerry and Apple Inc.’s iPhone have their own proprietary systems.

Smartphones contain either a keyboard integrated with the telephone number pad or a standard “QWERTY” keyboard for text messaging, e-mailing, and using Web browsers. “Virtual” keyboards can be integrated into a touch-screen design. Smartphones often have a built-in camera for recording and transmitting photographs and short videos. In addition, many smartphones can access Wi-Fi “hot spots” so that users can access VoIP (voice over Internet protocol) rather than pay cellular telephone transmission fees. The growing capabilities of handheld devices and transmission protocols have enabled a growing number of inventive and fanciful applications—for instance, “augmented reality,” in which a smartphone’s global positioning system (GPS) location chip can be used to overlay the phone’s camera view of a street scene with local tidbits of information, such as the identity of stores, points of interest, or real estate listings.

4G

4G refers to the fourth generation of cellular network technology, first introduced in the late 2000s and early 2010s. Compared to preceding third-generation (3G) technologies, 4G has been designed to support all-IP communications and broadband services, and eliminates circuit switching in voice telephony. It also has considerably higher data bandwidth compared to 3G, enabling a variety of data-intensive applications such as high-definition media streaming and the expansion of Internet of Things (IoT) applications.

The earliest deployed technologies marketed as "4G" were Long Term Evolution (LTE), developed by the 3GPP group, and Mobile Worldwide Interoperability for Microwave Access (Mobile WiMAX), based on IEEE specifications. Both provided significant enhancements over earlier 3G and 2G technologies.

5G

5G is the fifth generation of cellular telecommunications technology. Introduced in 2019 and now globally deployed, 5G delivers faster connectivity with higher bandwidth and lower latency (shorter delay times), improving the performance of phone calls, streaming, videoconferencing, gaming, and business applications as well as the responsiveness of connected systems and mobile apps. 5G can double the download speeds for smartphones and improve performance considerably more for devices tied to the Internet of Things (IoT).

5G technology improves the data processing of more-advanced digital operations such as those tied to machine learning (ML), artificial intelligence (AI), virtual reality (VR), and augmented reality (AR), improving performance and the user experience alike. It also better supports autonomous vehicles, drones, and other robotic systems.

How 5G works

5G signals rely on a different part of the radiofrequency spectrum than previous versions of cellular technology. As a result, mobile phones and other devices must be built with a specific 5G microchip.

Three primary types of 5G technology exist: low-band networks that support a wide coverage area but increase speeds only by about 20 percent over 4G; high-band networks that deliver ultrafast connectivity but which are limited by distance and access to 5G base stations (which transmit the signals for the technology); and mid-band networks that balance both speed and breadth of coverage. 5G also supports “OpenRoaming” capabilities that allow a user to switch seamlessly and automatically from a cellular to a Wi-Fi connection while traveling, eliminating any interruption of service and the need for entering passwords to access the latter.
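
The trade-offs among the three band types can be captured in a small lookup table. The sketch below simply paraphrases the characterizations in the text; the structure and field names are illustrative, not any standard.

```python
# The three 5G deployment tiers described above, as a lookup table.
# Descriptions paraphrase the text; nothing here is measured data.

FIVE_G_BANDS = {
    "low-band":  {"coverage": "wide area",
                  "speed": "about 20% faster than 4G"},
    "mid-band":  {"coverage": "balanced",
                  "speed": "balanced"},
    "high-band": {"coverage": "short range; needs nearby base stations",
                  "speed": "ultrafast"},
}

for band, traits in FIVE_G_BANDS.items():
    print(f"{band}: coverage = {traits['coverage']}, speed = {traits['speed']}")
```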

Telecom providers use a different type of antenna, known as MIMO (multiple-input multiple-output), to transmit 5G signals. This does not require the traditional large cell tower (base station) but can be deployed through a multiplicity of “small cells” (which are the micro boxes commonly seen on poles and lamp posts). Many observers see this as an aesthetic improvement to the city landscape. Proximity to these cells remains an issue globally, however, especially for rural and remote regions, underscoring the current limitations of 5G.

Security concerns accompany changing technologies. Since 5G networks rely on cloud-based data storage, they are susceptible to the same possible dangers as other types of cellular and noncellular networks, including data damage, cyberattacks, and theft. Additionally, companies must be mindful of data-point vulnerabilities during a transition to 5G from networks with different security capabilities.

How 5G is used

Besides the use of 5G for voice communications, the technology supports advanced IoT functionality. For example, 5G enables more-sophisticated smart home technology, including locks, lights, and appliances; more-advanced smart medical devices, such as blood sugar and blood pressure monitors; and enhanced retail experiences, facilitating such novelties as virtual product demonstrations and “phygital” shopping (blending the ease of online buying with the in-store experience).

5G technology can potentially enhance every field of work. Urban planners creating smart cities, for example, can move from magnetic loops embedded in roads for detecting vehicles (and triggering traffic signals and opening gates) to more efficient and cost-effective wireless cameras equipped with AI. Municipal trash collection can operate on demand, concentrating on key trash areas and at optimal times, instead of operating according to a schedule divorced from real-time needs.

Inexpensive connected sensors can allow farmers to monitor water and soil nutrients remotely (and more frequently), while architects and engineers can more efficiently view information about infrastructure systems and operations, all done remotely on their smartphones or tablets; they can even contribute to site construction and building maintenance in real time through augmented-reality software.

5G can also enable and enhance remote worker training, especially in fields with crippling worker shortages that result from frequent employee turnover and long training periods, as is common in emergency fields and medicine. Virtual reality, for instance, is common in training firefighters today, and emergency medical technicians (EMTs) can not only stay in better contact with 911 call centres and emergency rooms but also receive more efficient and effective interactive training, delivered to their personal phones and tablets, through ultrarealistic emergency simulations, all enabled through high-speed low-latency 5G technology.


#14 Re: Dark Discussions at Cafe Infinity » crème de la crème » Yesterday 18:40:44

2404) John Eccles (neurophysiologist)

Gist:

Work

The nervous system in people and animals consists of many different cells. In cells, signals are conveyed by small electrical currents and by chemical substances. By measuring small variations in electrical charges at contact surfaces between nerve cells, or synapses, in the early 1950s John Eccles showed how nerve impulses are conveyed from one cell to another. The synapses are of different types, having either a stimulating or an inhibiting effect. A nerve cell receives signals from many different synapses, and the effect is determined by which type prevails.

Summary

Sir John Carew Eccles (born Jan. 27, 1903, Melbourne, Australia—died May 2, 1997, Contra, Switz.) was an Australian research physiologist who received (with Alan Hodgkin and Andrew Huxley) the 1963 Nobel Prize for Physiology or Medicine for his discovery of the chemical means by which impulses are communicated or repressed by nerve cells (neurons).

After graduating from the University of Melbourne in 1925, Eccles studied at the University of Oxford under a Rhodes scholarship. He received a Ph.D. there in 1929 after having worked under the neurophysiologist Charles Scott Sherrington. He held a research post at Oxford before returning to Australia in 1937, teaching there and in New Zealand over the following decades.

Eccles conducted his prizewinning research while at the Australian National University, Canberra (1951–66). He demonstrated that one nerve cell communicates with a neighbouring cell by releasing chemicals into the synapse (the narrow cleft, or gap, between the two cells). He showed that the excitation of a nerve cell by an impulse causes one kind of synapse to release into the neighbouring cell a substance (probably acetylcholine) that expands the pores in nerve membranes. The expanded pores then allow free passage of sodium ions into the neighbouring nerve cell and reverse the polarity of electric charge. This wave of electric charge, which constitutes the nerve impulse, is conducted from one cell to another. In the same way, Eccles found, an excited nerve cell induces another type of synapse to release into the neighbouring cell a substance that promotes outward passage of positively charged potassium ions across the membrane, reinforcing the existing polarity and inhibiting the transmission of an impulse.

Eccles’s research, which was based largely on the findings of Hodgkin and Huxley, settled a long-standing controversy over whether nerve cells communicate with each other by chemical or by electric means. His work had a profound influence on the medical treatment of nervous diseases and research on kidney, heart, and brain function.

Among his scientific books are Reflex Activity of the Spinal Cord (1932), The Physiology of Nerve Cells (1957), The Inhibitory Pathways of the Central Nervous System (1969), and The Understanding of the Brain (1973). He also wrote a number of philosophical works, including Facing Reality: Philosophical Adventures by a Brain Scientist (1970) and The Human Mystery (1979).

Details

Sir John Carew Eccles (27 January 1903 – 2 May 1997) was an Australian neurophysiologist and philosopher who won the 1963 Nobel Prize in Physiology or Medicine for his work on the synapse. He shared the prize with Andrew Huxley and Alan Lloyd Hodgkin.

Life and work:

Early life

Eccles was born in Melbourne, Australia. He grew up there with his two sisters and his parents: William and Mary Carew Eccles (both teachers, who homeschooled him until he was 12). He initially attended Warrnambool High School (now Warrnambool College, where a science wing is named in his honour), then completed his final year of schooling at Melbourne High School. Aged 17, he was awarded a senior scholarship to study medicine at the University of Melbourne. As a medical undergraduate, he was never able to find a satisfactory explanation for the interaction of mind and body; he started to think about becoming a neuroscientist. He graduated (with first class honours) in 1925, and was awarded a Rhodes Scholarship to study under Charles Scott Sherrington at Magdalen College, Oxford University, where he received his Doctor of Philosophy in 1929.

In 1937 Eccles returned to Australia, where he worked on military research during World War II. During this time, Eccles was the director of the Kanematsu Institute at Sydney Medical School, where he and Bernard Katz gave research lectures at the University of Sydney, strongly influencing the intellectual environment of the university. After the war, he became a professor at the University of Otago in New Zealand. From 1952 to 1962, he worked as a professor at the John Curtin School of Medical Research (JCSMR) of the Australian National University. From 1966 to 1968, Eccles worked at the Feinberg School of Medicine at Northwestern University in Chicago.

Career

In the early 1950s, Eccles and his colleagues performed the research that would lead to his receiving the Nobel Prize. To study synapses in the peripheral nervous system, Eccles and colleagues used the stretch reflex as a model, which is easily studied because it consists of only two neurones: a sensory neurone (the muscle spindle fibre) and the motor neurone. The sensory neurone synapses onto the motor neurone in the spinal cord. When a current is passed into the sensory neurone in the quadriceps, the motor neurone innervating the quadriceps produces a small excitatory postsynaptic potential (EPSP). When a similar current is passed through the hamstring, the opposing muscle to the quadriceps, an inhibitory postsynaptic potential (IPSP) is produced in the quadriceps motor neurone. Although a single EPSP is not enough to fire an action potential in the motor neurone, the sum of several EPSPs from multiple sensory neurones synapsing onto the motor neurone can cause the motor neurone to fire, thus contracting the quadriceps. On the other hand, IPSPs can subtract from this sum of EPSPs, preventing the motor neurone from firing.
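
The summation logic described above can be caricatured in a few lines of code. This is a minimal sketch of the principle only; the threshold and millivolt values are illustrative assumptions, not Eccles's measurements or physiological recordings.

```python
# Toy model of synaptic summation: EPSPs push the motor neurone toward
# its firing threshold, IPSPs pull it away. All values are illustrative.

FIRING_THRESHOLD_MV = 10.0

def fires(epsps_mv: list[float], ipsps_mv: list[float]) -> bool:
    """True if the summed postsynaptic potentials reach threshold."""
    return sum(epsps_mv) - sum(ipsps_mv) >= FIRING_THRESHOLD_MV

print(fires([2.0] * 3, []))       # False: a single weak source cannot fire the cell
print(fires([2.0] * 6, []))       # True: several EPSPs sum to threshold
print(fires([2.0] * 6, [4.0]))    # False: an IPSP subtracts from the sum
```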

Apart from these seminal experiments, Eccles was key to a number of important developments in neuroscience. Until around 1949, Eccles believed that synaptic transmission was primarily electrical rather than chemical. Although he was wrong in this hypothesis, his arguments led him and others to perform some of the experiments which proved chemical synaptic transmission. Bernard Katz and Eccles worked together on some of the experiments which elucidated the role of acetylcholine as a neurotransmitter in the brain.

Honours

He was appointed a Knight Bachelor in 1958 in recognition of services to physiological research.

He won the Australian of the Year Award in 1963, the same year he won the Nobel Prize.

In 1964, he became an honorary member of the American Philosophical Society, and in 1966 he moved to the United States to work as a professor at the Institute for Biomedical Research at the Feinberg School of Medicine in Chicago. Unhappy with the working conditions there, he left to become a professor at the State University of New York at Buffalo from 1968 until he retired in 1975. After retirement, he moved to Switzerland and wrote on the mind–body problem.

In 1981, Eccles became a founding member of the World Cultural Council.

In 1990 he was appointed a Companion of the Order of Australia (AC) in recognition of service to science, particularly in the field of neurophysiology. He died at the age of 94 in 1997 in Tenero-Contra, Locarno, Switzerland.

In March 2012, the Eccles Institute of Neuroscience was established in a new wing of the John Curtin School of Medical Research, constructed with the assistance of a $63M grant from the Commonwealth Government. In 2021, a new $60M animal research building was opened at the University of Otago, Dunedin, New Zealand, and named the Eccles Building.


#15 Dark Discussions at Cafe Infinity » Collapse Quotes - I » Yesterday 17:59:55

Jai Ganesh
Replies: 0

Collapse Quotes - I

1. Why did the Soviet Union disintegrate? Why did the Soviet Communist Party collapse? An important reason was that their ideals and beliefs had been shaken. - Xi Jinping

2. If all mankind were to disappear, the world would regenerate back to the rich state of equilibrium that existed ten thousand years ago. If insects were to vanish, the environment would collapse into chaos. - E. O. Wilson

3. I could feel his muscle tissues collapse under my force. It's ludicrous these mortals even attempt to enter my realm. - Mike Tyson

4. My brother Gary, who was my coach, five years my elder, studied human movements at Queensland University in Brisbane. We used to train together every day, and we'd train for so long that at the end of a session, we would physically almost collapse. - Matthew Hayden

5. Some people are contriving ways and means of making us collapse. - Robert Mugabe

6. Your home is regarded as a model home, your life as a model life. But all this splendor, and you along with it... it's just as though it were built upon a shifting quagmire. A moment may come, a word can be spoken, and both you and all this splendor will collapse. - Henrik Ibsen

7. The New Deal is plainly an attempt to achieve a working socialism and avert a social collapse in America; it is extraordinarily parallel to the successive 'policies' and 'Plans' of the Russian experiment. Americans shirk the word 'socialism', but what else can one call it? - H. G. Wells

8. Communism in Cuba will collapse sooner or later because you can't control the free flow of information. Communism prevents organizations from developing by stopping the flow of information. The system is based on police and listening devices and triggers the worst characteristics in humans. - Lech Walesa

#16 Jokes » Cake Jokes - V » Yesterday 17:23:17

Jai Ganesh
Replies: 0

Q: What happened when Jessica Simpson tried to make a birthday cake?
A: The candles melted in the oven!
* * *
Q: Why did the burglar break into the bakery?
A: Because he heard the cakes were rich.
* * *
Q: What kind of cake do you get at a cafeteria?
A: A stomach-cake!
* * *
Q: What do you call a baker with a cold?
A: Coughee cake.
* * *
Q: What do they call Chris Christie in New Jersey?
A: Cake Boss.
* * *
Patient: Doctor, I get heartburn every time I eat birthday cake.
Doctor: Next time, take off the candles.
* * *

#17 This is Cool » Colloid » Yesterday 17:02:44

Jai Ganesh
Replies: 0

Colloid

Gist

A colloid is a heterogeneous mixture in which tiny, insoluble particles (1–1000 nm) of one substance (the dispersed phase) are spread throughout another substance (the dispersion medium) without settling, as in milk, smoke, or fog. Colloids are characterized by a particle size intermediate between those of solutions and suspensions, by their stability, and by the Tyndall effect (the scattering of light).

A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels.

Summary

A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension).

Since the definition of a colloid is so ambiguous, the International Union of Pure and Applied Chemistry (IUPAC) formalized a modern definition of colloids:

'The term colloidal refers to a state of subdivision, implying that the molecules or polymolecular particles dispersed in a medium have at least in one direction a dimension roughly between 1 nanometre and 1 micrometre, or that in a system discontinuities are found at distances of that order. It is not necessary for all three dimensions to be in the colloidal range…Nor is it necessary for the units of a colloidal system to be discrete…The size limits given above are not rigid since they will depend to some extent on the properties under consideration.'

This IUPAC definition is particularly important because it highlights the flexibility inherent in colloidal systems. However, much of the confusion surrounding colloids arises from oversimplifications. IUPAC makes it clear that exceptions exist, and the definition should not be viewed as a rigid rule. D.H. Everett—the scientist who wrote the IUPAC definition—emphasized that colloids are often better understood through examples rather than strict definitions.

Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color.

Colloidal suspensions are the subject of interface and colloid science. This field of study began in 1845 with the work of Francesco Selmi, who called them pseudosolutions, and was expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861.

Details

A colloid is any substance consisting of particles substantially larger than atoms or ordinary molecules but too small to be visible to the unaided eye; more broadly, any substance, including thin films and fibres, having at least one dimension in this general size range, which encompasses about 10⁻⁷ to 10⁻³ cm. Colloidal systems may exist as dispersions of one substance in another—for example, smoke particles in air—or as single materials, such as rubber or the membrane of a biological cell.

Colloids are generally classified into two systems, reversible and irreversible. In a reversible system the products of a physical or chemical reaction may be induced to interact so as to reproduce the original components. In a system of this kind, the colloidal material may have a high molecular weight, with single molecules of colloidal size, as in polymers, polyelectrolytes, and proteins, or substances with small molecular weights may associate spontaneously to form particles (e.g., micelles, microemulsion droplets, and liposomes) of colloidal size, as in soaps, detergents, some dyes, and aqueous mixtures of lipids. An irreversible system is one in which the products of a reaction are so stable or are removed so effectively from the system that its original components cannot be reproduced. Examples of irreversible systems include sols (dilute suspensions), pastes (concentrated suspensions), emulsions, foams, and certain varieties of gels. The size of the particles of these colloids is greatly dependent on the method of preparation employed.

All colloidal systems can be either generated or eliminated by nature as well as by industrial and technological processes. The colloids prepared in living organisms by biological processes are vital to the existence of the organism. Those produced with inorganic compounds in Earth and its waters and atmosphere are also of crucial importance to the well-being of life-forms.

The scientific study of colloids dates from the early 19th century. Among the first notable investigations was that of the British botanist Robert Brown. During the late 1820s Brown discovered, with the aid of a microscope, that minute particles suspended in a liquid are in continual, random motion. This phenomenon, which was later designated Brownian motion, was found to result from the irregular bombardment of colloidal particles by the molecules of the surrounding fluid. Francesco Selmi, an Italian chemist, published the first systematic study of inorganic colloids. Selmi demonstrated that salts would coagulate such colloidal materials as silver chloride and Prussian blue and that they differed in their precipitating power. The Scottish chemist Thomas Graham, who is generally regarded as the founder of modern colloid science, delineated the colloidal state and its distinguishing properties. In several works published during the 1860s, Graham observed that low diffusivity, the absence of crystallinity, and the lack of ordinary chemical relations were some of the most salient characteristics of colloids and that they resulted from the large size of the constituent particles.

The early years of the 20th century witnessed various key developments in physics and chemistry, a number of which bore directly on colloids. These included advances in the knowledge of the electronic structure of atoms, in the concepts of molecular size and shape, and in insights into the nature of solutions. Moreover, efficient methods for studying the size and configuration of colloidal particles were soon developed—for example, ultracentrifugal analysis, electrophoresis, diffusion, and the scattering of visible light and X-rays. More recently, biological and industrial research on colloidal systems has yielded much information on dyes, detergents, polymers, proteins, and other substances important to everyday life.

Additional Information

A colloid is one of the three primary types of mixtures, with the other two being a solution and suspension. A colloid is a mixture that has particles ranging between 1 and 1000 nanometers in diameter, yet are still able to remain evenly distributed throughout the solution. These are also known as colloidal dispersions because the substances remain dispersed and do not settle to the bottom of the container. In colloids, one substance is evenly dispersed in another. The substance being dispersed is referred to as being in the dispersed phase, while the substance in which it is dispersed is in the continuous phase.

To be classified as a colloid, the substance in the dispersed phase must be larger than the size of a molecule but smaller than what can be seen with the naked eye. This can be more precisely quantified as one or more of the substance's dimensions being between 1 and 1000 nanometers. If the dimensions are smaller than this, the substance is considered a solution; if they are larger, the substance is a suspension.
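
The size rule above is easy to state in code. Here is a minimal sketch; the inclusive boundary handling is an assumption, since (as noted earlier) the size limits are not rigid.

```python
# Classify a mixture by the diameter of its dispersed particles,
# following the 1-1000 nm rule stated above. Boundary handling is an
# assumption, because the size limits are not rigid in practice.

def classify_mixture(particle_nm: float) -> str:
    if particle_nm < 1:
        return "solution"
    if particle_nm <= 1000:
        return "colloid"
    return "suspension"

print(classify_mixture(0.3))     # solution: dissolved molecules or ions
print(classify_mixture(100))     # colloid: within the 1-1000 nm range
print(classify_mixture(5000))    # suspension: particles settle out
```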

Classifying Colloids

A common method of classifying colloids is based on the phase of the dispersed substance and what phase it is dispersed in. The types of colloids include sol, emulsion, foam, and aerosol; a small lookup sketch follows the list below.

* Sol is a colloidal suspension with solid particles in a liquid.
* Emulsion is between two liquids.
* Foam is formed when many gas particles are trapped in a liquid or solid.
* Aerosol contains small particles of liquid or solid dispersed in a gas.
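
A minimal sketch of that classification, keyed by (dispersed phase, dispersion medium). The entries follow the list above; the everyday examples in the comments are common textbook ones added for illustration.

```python
# Phase-based colloid classification from the list above, keyed by
# (dispersed phase, dispersion medium). Examples are textbook ones.

COLLOID_TYPES = {
    ("solid", "liquid"):  "sol",        # e.g., paint
    ("liquid", "liquid"): "emulsion",   # e.g., milk
    ("gas", "liquid"):    "foam",       # e.g., whipped cream
    ("gas", "solid"):     "foam",       # e.g., styrofoam
    ("liquid", "gas"):    "aerosol",    # e.g., fog
    ("solid", "gas"):     "aerosol",    # e.g., smoke
}

def colloid_type(dispersed: str, medium: str) -> str:
    return COLLOID_TYPES.get((dispersed, medium), "not a standard colloid class")

print(colloid_type("liquid", "liquid"))   # emulsion
print(colloid_type("solid", "gas"))       # aerosol
```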

When the dispersion medium is water, the colloidal system is often referred to as a hydrocolloid. The particles in the dispersed phase can exist in different states depending on how much water is available. For example, Jello powder mixed with water creates a hydrocolloid. A common use of hydrocolloids is in the creation of medical dressings.

An easy way of determining whether a mixture is colloidal or not is through use of the Tyndall effect. When light is shined through a true solution, the light passes cleanly through the solution; however, when light is passed through a colloidal solution, the substance in the dispersed phase scatters the light in all directions, making it readily seen. An example of this is shining a flashlight into fog. The beam of light can be easily seen because the fog is a colloid.

Another method of determining whether a mixture is a colloid is by passing it through a semipermeable membrane. The larger dispersed particles in a colloid would be unable to pass through the membrane, while the surrounding liquid molecules can. Dialysis takes advantage of the fact that colloids cannot diffuse through semipermeable membranes to filter them out of a medium.

Tyndall Effect

The Tyndall effect is the scattering of light by the particles in a colloidal dispersion, whereas a true solution shows no such scattering. This effect can be used to determine whether a mixture is a true solution or a colloid.

Introduction

"To be classified colloidal, a material must have one or more of its dimensions (length, width, or thickness) in the approximate range of 1-1000 nm." Because a colloidal solution or substance (like fog) is made up of scattered particles (like dust and water in air), light cannot travel straight through. Rather, it collides with these micro-particles and scatters causing the effect of a visible light beam. This effect was observed and described by John Tyndall as the Tyndall Effect.



#18 Re: This is Cool » Miscellany » Yesterday 16:22:02

2457) Snow Leopard

Gist

The snow leopard is also known as the ounce, from an old French word, and locally by names like "Shan" (Ladakhi) or "Bars" (Kazakh), but its most famous nickname is the "ghost of the mountains" due to its elusive nature. Its scientific name, Panthera uncia, reflects its connection to big cats, though it was once classified separately as Uncia uncia. 

The snow leopard (Panthera uncia) is a species of large cat in the genus Panthera of the family Felidae.

Summary

The snow leopard (Panthera uncia) is a species of large cat in the genus Panthera of the family Felidae. The species is native to the mountain ranges of Central and South Asia. It is listed as Vulnerable on the IUCN Red List because the global population is estimated to number fewer than 10,000 mature individuals and is expected to decline about 10% by 2040. It is mainly threatened by poaching and habitat destruction following infrastructural developments. It inhabits alpine and subalpine zones at elevations of 3,000–4,500 m (9,800–14,800 ft), ranging from eastern Afghanistan, the Himalayas and the Tibetan Plateau to southern Siberia, Mongolia and western China. In the northern part of its range, it also lives at lower elevations.

Taxonomically, the snow leopard was long classified in the monotypic genus Uncia. After phylogenetic studies revealed the relationships among Panthera species, it has been considered a member of that genus. Two subspecies were described based on morphological differences, but genetic differences between the two have not yet been confirmed. It is therefore regarded as a monotypic species. The species is widely depicted in Kyrgyz culture.

(IUCN: International Union for Conservation of Nature).

Details

Snow leopards have evolved to live in some of the harshest conditions on Earth. Their thick white-gray coat, spotted with large black rosettes, blends in perfectly with Asia’s steep and rocky, high mountains. Because of their incredible natural camouflage, rendering them almost invisible in their surroundings, snow leopards are often referred to as the “ghost of the mountains.”

The snow leopard’s powerful build allows it to scale great, steep slopes with ease. Its hind legs give the snow leopard the ability to leap six times the length of its body. A long tail enables agility, provides balance, and wraps around the resting snow leopard as protection from the cold.

For millennia, this magnificent cat was the king of the mountains. The mountains were rich with their prey, such as blue sheep, Argali wild sheep, ibex, marmots, pikas and hares. The snow leopard’s habitat range extends across the mountainous regions of 12 countries across Asia: Afghanistan, Bhutan, China, India, Kazakhstan, Kyrgyz Republic, Mongolia, Nepal, Pakistan, Russia, Tajikistan, and Uzbekistan. The total range covers close to 772,200 square miles (about 2 million square km), with 60% of the habitat found in China. However, more than 70% of snow leopard habitat remains unexplored. Home-range sizes can vary from 4.6–15.4 square miles (12–40 square km) in Nepal to over 193 square miles (500 square km) in Mongolia, and population density can range from fewer than 0.1 to 10 or more individuals per 38.6 square miles (100 square km), depending on prey densities and habitat quality. Nevertheless, the snow leopard population is very likely declining.

Additional Information

A snow leopard is a large long-haired Asian cat, classified as either Panthera uncia or Uncia uncia in the family Felidae. The snow leopard inhabits the mountains of central Asia and the Indian subcontinent, ranging from an elevation of about 1,800 metres (about 6,000 feet) in the winter to about 5,500 metres (18,000 feet) in the summer.

Its soft coat, consisting of a dense insulating undercoat and a thick outercoat of hairs about 5 cm (2 inches) long, is pale grayish with dark rosettes and a dark streak along the spine. The underparts, on which the fur may be 10 cm (4 inches) long, are uniformly whitish. The snow leopard attains a length of about 2.1 metres (7 feet), including the 0.9-metre- (3-foot-) long tail. It stands about 0.6 metre (2 feet) high at the shoulder and weighs 23–41 kg (50–90 pounds). It hunts at night and preys on various animals, such as marmots, wild sheep, ibex (Capra), and domestic livestock. Its litters of two to four young are born after a gestation period of approximately 93 days.

Formerly classified as Leo uncia, the snow leopard has been placed—with the lion, tiger, and other big cats—in the genus Panthera. Because of the presence of certain skeletal features, such as having a shorter skull and having more-rounded eye orbits than other big cats, the snow leopard has also been classified by some authorities as the sole member of the genus Uncia. Genetic studies show that the common ancestor of snow leopards and tigers diverged from the lineage of big cats about 3.9 million years ago and that snow leopards branched from tigers about 3.2 million years ago.

Between 1986 and 2017 the snow leopard was listed as an endangered species on the Red List of Threatened Species from the International Union for Conservation of Nature (IUCN). However, in 2017 the species’ status was changed to “vulnerable” after a population calculation error was discovered in the species’ 2008 population assessment. Between 2,500 and 10,000 adult snow leopards remain in the wild, but the species continues to face daunting threats to its survival.

Several factors have contributed to their decline. Their wild prey has decreased as herding and ranching activities have expanded throughout their geographic range. They are often killed by herders and ranchers whose livestock they have taken, and their bones and hides are sought after by hunters and poachers for the illegal animal trade.


#19 Re: Jai Ganesh's Puzzles » General Quiz » Yesterday 15:43:21

Hi,

#10687. What does the Geography term Commonwealth mean?

#10688. What does the Geography term Compass mean?

#20 Re: Jai Ganesh's Puzzles » English language puzzles » Yesterday 15:20:46

Hi,

#5883. What does the noun osteopathy mean?

#5884. What does the noun ouster mean?

#21 Re: Jai Ganesh's Puzzles » Doc, Doc! » Yesterday 15:06:36

Hi,

#2537. What does the medical term Peritonitis mean?
