Math Is Fun Forum


#1 Science HQ » Supercomputer » Today 18:38:01

Jai Ganesh
Replies: 0

Supercomputer

Gist

A supercomputer is an exceptionally fast, high-performance machine designed to process massive datasets and perform complex, parallel calculations far beyond the capability of regular computers. Used for demanding tasks like weather forecasting, AI training, nuclear simulations, and molecular modeling, these systems often utilize Linux-based operating systems to manage thousands of processor cores.

A supercomputer executes complex calculations and processes massive datasets at speeds far exceeding those of general-purpose computers. It serves scientific, engineering, and AI workloads such as weather forecasting, molecular modeling, climate research, cryptography, and cosmological simulations, typically by means of parallel processing, with thousands of processors working together.

Summary

A supercomputer is a high-performance computing system: a powerful, highly accurate machine known for processing massive sets of data and complex calculations at rapid speeds.

What makes a supercomputer “super” is its ability to interlink multiple processors within one system. This allows it to split up a task and distribute it in parts, then execute the parts of the task concurrently, in a method known as parallel processing.

Supercomputers are high-performance systems that solve complex computations. They split tasks into multiple parts and work on them in parallel, as if many computers were acting as one collective machine.

Originally developed for nuclear weapon design and code-cracking, supercomputers are used today by scientists and engineers to test simulations that help predict climate changes and weather forecasts, explore cosmological evolution and discover new chemical compounds for pharmaceuticals.

How Do Supercomputers Work?

Unlike our everyday devices, supercomputers can perform multiple operations at once in parallel thanks to a multitude of built-in processors.

How it works: An operation is split into smaller parts, and each piece is sent to a CPU to solve. These multi-core processors are located within a node, alongside a memory block. Working in concert, these individual units — as many as tens of thousands of them — communicate through inter-node channels called interconnects to enable concurrent computation. Interconnects also connect to I/O systems, which manage disk storage and networking.
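The same split-and-distribute pattern can be sketched on an ordinary multicore machine. The Python sketch below is only an illustration of the idea, with a process pool standing in for nodes and an interconnect and with all names and numbers invented; it is not a model of a real supercomputer scheduler:

from multiprocessing import Pool

def partial_sum(bounds):
    # Work assigned to one "node": sum the squares over a subrange.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    # Split one large task into equal chunks, one per worker.
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with Pool(workers) as pool:
        # The pool plays the role of the interconnect: it scatters the
        # chunks to the workers and gathers the partial results back.
        total = sum(pool.map(partial_sum, chunks))
    print(total)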

How’s that different from regular old computers? Picture this: On your home computer, once you strike the ‘return’ key on a search engine query, that information is input into a computer’s system, stored, then processed to produce an output value. In other words, one task is solved at a time. This process works great for everyday applications, such as sending a text message or mapping a route via GPS. But for more data-intensive projects, like calculating a missile’s ballistic orbit or cryptanalysis, researchers rely on more sophisticated systems that can execute many tasks at once.

“You have to use parallel computing to really take advantage of the power of the supercomputer,” Caitlin Joann Ross, a research and development engineer at Kitware who studied extreme-scale systems during her residency at Argonne Leadership Computing Facility, told Built In. “There are certain computations that might take weeks or months to run on your laptop, but if you can parallelize it efficiently to run on a supercomputer, it might only take a day.”
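The speedup Ross describes has a well-known ceiling: Amdahl's law, which says that if a fraction p of a program can be parallelized, the best possible speedup on n processors is 1 / ((1 - p) + p / n). A quick illustrative calculation (the fractions here are made up for the example):

def amdahl_speedup(p, n):
    # Best-case speedup when a fraction p of the work runs on n processors.
    return 1.0 / ((1.0 - p) + p / n)

# Even with 10,000 processors, a 95%-parallelizable program tops out
# near 20x, because the serial 5% dominates the running time.
for p in (0.50, 0.95, 0.999):
    print(f"p={p}: {amdahl_speedup(p, 10_000):,.1f}x")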

Details

A supercomputer is any of a class of extremely powerful computers. The term is commonly applied to the fastest high-performance systems available at any given time. Such computers have been used primarily for scientific and engineering work requiring exceedingly high-speed computations. Common applications for supercomputers include testing mathematical models for complex physical phenomena or designs, such as climate and weather, evolution of the cosmos, nuclear weapons and reactors, new chemical compounds (especially for pharmaceutical purposes), and cryptology. As the cost of supercomputing declined in the 1990s, more businesses began to use supercomputers for market research and other business-related models.

Distinguishing features

Supercomputers have certain distinguishing features. Unlike conventional computers, they usually have more than one CPU (central processing unit), which contains circuits for interpreting program instructions and executing arithmetic and logic operations in proper sequence. The use of several CPUs to achieve high computational rates is necessitated by the physical limits of circuit technology. Electronic signals cannot travel faster than the speed of light, which thus constitutes a fundamental speed limit for signal transmission and circuit switching. This limit has almost been reached, owing to miniaturization of circuit components, dramatic reduction in the length of wires connecting circuit boards, and innovation in cooling techniques (e.g., in various supercomputer systems, processor and memory circuits are immersed in a cryogenic fluid to achieve the low temperatures at which they operate fastest). Rapid retrieval of stored data and instructions is required to support the extremely high computational speed of CPUs. Therefore, most supercomputers have a very large storage capacity, as well as a very fast input/output capability.
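To put the light-speed limit in concrete terms, consider how far a signal can travel during one clock cycle. These are back-of-the-envelope vacuum figures (real signals in wiring are slower), not measurements of any particular machine:

C = 299_792_458  # speed of light in metres per second

for clock_hz in (100e6, 1e9, 3e9):  # 100 MHz, 1 GHz, 3 GHz clocks
    cm_per_cycle = C / clock_hz * 100
    print(f"{clock_hz / 1e9:g} GHz: ~{cm_per_cycle:.0f} cm per cycle")

# At 3 GHz, light covers only about 10 cm per cycle, which is why
# shrinking wire lengths inside the machine directly raises speed.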

Still another distinguishing characteristic of supercomputers is their use of vector arithmetic—i.e., they are able to operate on pairs of lists of numbers rather than on mere pairs of numbers. For example, a typical supercomputer can multiply a list of hourly wage rates for a group of factory workers by a list of hours worked by members of that group to produce a list of dollars earned by each worker in roughly the same time that it takes a regular computer to calculate the amount earned by just one worker.
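That wage calculation translates directly into an element-wise (vector) operation. Here it is sketched with NumPy, whose vectorized multiply is a software analogue of the hardware vector units described above; the rates and hours are invented for the example:

import numpy as np

# Hypothetical hourly rates and hours worked for four factory workers.
rates = np.array([18.50, 22.00, 19.75, 25.00])
hours = np.array([40, 38, 42, 36])

# One vector operation computes every worker's pay at once, instead of
# a scalar loop computing one product at a time.
earnings = rates * hours
print(earnings)  # [740.  836.  829.5 900. ]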

Supercomputers were originally used in applications related to national security, including nuclear weapons design and cryptography. Today they are also routinely employed by the aerospace, petroleum, and automotive industries. In addition, supercomputers have found wide application in areas involving engineering or scientific research, as, for example, in studies of the structure of subatomic particles and of the origin and nature of the universe. Supercomputers have become an indispensable tool in weather forecasting: predictions are now based on numerical models. As the cost of supercomputers declined, their use spread to the world of online gaming. In particular, the 5th through 10th fastest Chinese supercomputers in 2007 were owned by a company with online rights in China to the electronic game World of Warcraft, which sometimes had more than a million people playing together in the same gaming world.

Historical development

Although early supercomputers were built by various companies, one individual, Seymour Cray, really defined the product almost from the start. Cray joined a computer company called Engineering Research Associates (ERA) in 1951. When ERA was taken over by Remington Rand, Inc. (which later merged with other companies to become Unisys Corporation), Cray left with ERA’s founder, William Norris, to start Control Data Corporation (CDC) in 1957. By that time Remington Rand’s UNIVAC line of computers and IBM had divided up most of the market for business computers, and, rather than challenge their extensive sales and support structures, CDC sought to capture the small but lucrative market for fast scientific computers. The Cray-designed CDC 1604 was one of the first computers to replace vacuum tubes with transistors and was quite popular in scientific laboratories. IBM responded by building its own scientific computer, the IBM 7030—commonly known as Stretch—in 1961. However, IBM, which had been slow to adopt the transistor, found few purchasers for its tube-transistor hybrid, regardless of its speed, and temporarily withdrew from the supercomputer field after a staggering loss, for the time, of $20 million. In 1964 Cray’s CDC 6600 replaced Stretch as the fastest computer on Earth; it could execute three million floating-point operations per second (FLOPS), and the term supercomputer was soon coined to describe it.

Cray left CDC to start Cray Research, Inc., in 1972 and moved on again in 1989 to form Cray Computer Corporation. Each time he moved on, his former company continued producing supercomputers based on his designs.

Cray was deeply involved in every aspect of creating the computers that his companies built. In particular, he was a genius at the dense packaging of the electronic components that make up a computer. By clever design he cut the distances signals had to travel, thereby speeding up the machines. He always strove to create the fastest possible computer for the scientific market, always programmed in the scientific programming language of choice (FORTRAN), and always optimized the machines for demanding scientific applications—e.g., differential equations, matrix manipulations, fluid dynamics, seismic analysis, and linear programming.

Among Cray’s pioneering achievements was the Cray-1, introduced in 1976, which was the first successful implementation of vector processing (meaning, as discussed above, it could operate on pairs of lists of numbers rather than on mere pairs of numbers). Cray was also one of the pioneers of dividing complex computations among multiple processors, a design known as “multiprocessing.” One of the first machines to use multiprocessing was the Cray X-MP, introduced in 1982, which linked two Cray-1 computers in parallel to triple their individual performance. In 1985 the Cray-2, a four-processor computer, became the first machine to exceed one billion FLOPS.

While Cray used expensive state-of-the-art custom processors and liquid immersion cooling systems to achieve his speed records, a revolutionary new approach was about to emerge. W. Daniel Hillis, a graduate student at the Massachusetts Institute of Technology, had a remarkable new idea about how to overcome the bottleneck imposed by having the CPU direct the computations between all the processors. Hillis saw that he could remove the bottleneck by eliminating the all-controlling CPU in favour of decentralized, or distributed, controls. In 1983 Hillis cofounded the Thinking Machines Corporation to design, build, and market such multiprocessor computers. In 1985 the first of his Connection Machines, the CM-1 (quickly replaced by its more commercial successor, the CM-2), was introduced. The CM-1 utilized an astonishing 65,536 inexpensive one-bit processors, grouped 16 to a chip (for a total of 4,096 chips), to achieve several billion FLOPS for some calculations—roughly comparable to Cray’s fastest supercomputer.

Hillis had originally been inspired by the way that the brain uses a complex network of simple neurons (a neural network) to achieve high-level computations. In fact, an early goal of these machines involved solving a problem in artificial intelligence, face-pattern recognition. By assigning each pixel of a picture to a separate processor, Hillis spread the computational load, but this introduced the problem of communication between the processors. The network topology that he developed to facilitate processor communication was a 12-dimensional “hypercube”—i.e., each chip was directly linked to 12 other chips. These machines quickly became known as massively parallel computers. Besides opening the way for new multiprocessor architectures, Hillis’s machines showed how common, or commodity, processors could be used to achieve supercomputer results.
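The hypercube wiring is easy to state precisely: give each of the 2^d chips a d-bit address, and connect two chips whenever their addresses differ in exactly one bit. With d = 12 that yields the CM-1's 4,096 chips with 12 direct links each. A minimal sketch of that addressing scheme:

def hypercube_neighbours(chip, dim):
    # Addresses reachable from `chip` by flipping one of its `dim` address bits.
    return [chip ^ (1 << bit) for bit in range(dim)]

DIM = 12                  # the CM-1's 12-dimensional hypercube
assert 2 ** DIM == 4096   # one address per chip

print(hypercube_neighbours(0, DIM))       # chip 0 links to 1, 2, 4, ..., 2048
print(len(hypercube_neighbours(0, DIM)))  # 12 direct links per chip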

Another common artificial intelligence application for multiprocessing was chess. For instance, in 1988 HiTech, built at Carnegie Mellon University, Pittsburgh, Pa., used 64 custom processors (one for each square on the chessboard) to become the first computer to defeat a grandmaster in a match. In February 1996 IBM’s Deep Blue, using 192 custom-enhanced RS/6000 processors, was the first computer to defeat a world champion, Garry Kasparov, in a “slow” game. It was then assigned to predict the weather in Atlanta, Ga., during the 1996 Summer Olympic Games. Its successor (now with 256 custom chess processors) defeated Kasparov in a six-game return match in May 1997.

As always, however, the principal application for supercomputing was military. With the signing of the Comprehensive Test Ban Treaty by the United States in 1996, the need for an alternative certification program for the country’s aging nuclear stockpile led the Department of Energy to fund the Accelerated Strategic Computing Initiative (ASCI). The goal of the project was to achieve by 2004 a computer capable of simulating nuclear tests—a feat requiring a machine capable of executing 100 trillion FLOPS (100 TFLOPS; the fastest extant computer at the time was the Cray T3E, capable of 150 billion FLOPS). ASCI Red, built at Sandia National Laboratories in Albuquerque, N.M., with the Intel Corporation, was the first to achieve 1 TFLOPS. Using 9,072 standard Pentium Pro processors, it reached 1.8 TFLOPS in December 1996 and was fully operational by June 1997.

While the massively multiprocessing approach prevailed in the United States, in Japan the NEC Corporation returned to the older approach of custom designing the computer chip—for its Earth Simulator, which surprised many computer scientists by debuting in first place on the industry’s TOP500 supercomputer speed list in 2002. It did not hold this position for long, however, as in 2004 a prototype of IBM’s Blue Gene/L, with 8,192 processing nodes, reached a speed of about 36 TFLOPS, just exceeding the speed of the Earth Simulator. Following two doublings in the number of its processors, the ASCI Blue Gene/L, installed in 2005 at Lawrence Livermore National Laboratory in Livermore, Calif., became the first machine to pass the coveted 100 TFLOPS mark, with a speed of about 135 TFLOPS. Other Blue Gene/L machines, with similar architectures, held many of the top spots on successive TOP500 lists. With regular improvements, the ASCI Blue Gene/L reached a speed in excess of 500 TFLOPS in 2007. These IBM supercomputers are also noteworthy for the choice of operating system, Linux, and IBM’s support for the development of open source applications.

The first computer to exceed 1,000 TFLOPS, or 1 petaflop, was built by IBM in 2008. Known as Roadrunner, for New Mexico’s state bird, the machine was first tested at IBM’s facilities in New York, where it achieved the milestone, prior to being disassembled for shipment to the Los Alamos National Laboratory in New Mexico. The test version employed 6,948 dual-core Opteron microchips from Advanced Micro Devices (AMD) and 12,960 of IBM’s Cell Broadband Engines (first developed for use in the Sony Computer Entertainment PlayStation 3 video game system). The Cell processor was designed especially for handling the intensive mathematical calculations needed by the virtual reality simulation engines in electronic games—a process quite analogous to the calculations needed by scientific researchers running their mathematical models.

Such progress in computing placed researchers on or past the verge of being able, for the first time, to do computer simulations based on first-principle physics—not merely simplified models. This in turn raised prospects for breakthroughs in such areas as meteorology and global climate analysis, pharmaceutical and medical design, new materials, and aerospace engineering. The greatest impediment to realizing the full potential of supercomputers remains the immense effort required to write programs in such a way that different aspects of a problem can be operated on simultaneously by as many different processors as possible. Even managing this in the case of fewer than a dozen processors, as are commonly used in modern personal computers, has resisted any simple solution, though IBM’s open source initiative, with support from various academic and corporate partners, made progress in the 1990s and 2000s.

Additional Information

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of aerodynamics, of the early moments of the universe, and of nuclear weapons). They have been essential in the field of cryptanalysis.

The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, exascale supercomputers have existed which can perform over 10^18 FLOPS. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
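Those scales are easier to grasp as wall-clock time. The sketch below asks how long a fixed workload of 10^18 floating-point operations would take at each tier, assuming idealized peak rates and ignoring memory and communication costs:

WORK = 1e18  # a fixed workload: 10^18 floating-point operations

machines = {
    "desktop CPU (100 gigaFLOPS)": 1e11,
    "desktop GPU (10 teraFLOPS)": 1e13,
    "exascale system (1 exaFLOPS)": 1e18,
}

for name, flops in machines.items():
    seconds = WORK / flops
    print(f"{name}: {seconds:,.0f} s (~{seconds / 86_400:,.1f} days)")

# About 116 days at 10^11 FLOPS, 1.2 days at 10^13, and one second
# at exascale, for the same amount of arithmetic.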

Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.

The U.S. has long been a leader in the supercomputer field, initially through Cray's nearly uninterrupted dominance, and later through a variety of technology companies. Japan made significant advancements in the field during the 1980s and 1990s, while China has become increasingly active in supercomputing in recent years. As of November 2024, Lawrence Livermore National Laboratory's El Capitan is the world's fastest supercomputer. The U.S. has five of the top 10; Italy has two, and Japan, Finland, and Switzerland have one each. In June 2018, the combined performance of all supercomputers on the TOP500 list first broke the 1 exaFLOPS mark.


#2 This is Cool » Swift » Today 17:50:08

Jai Ganesh
Replies: 0

Swift

Gist

Swifts are masters of the air and spend almost their entire lives in flight – eating, drinking, sleeping and even mating on the wing. They usually only land when it's time to nest, so you'll never see them perched on overhead wires like Swallows. Groups of Swifts can often be seen flying around rooftops at high speed.

What is the lifespan of a swift?

There are around 113 Swift species worldwide, eight of which may appear in the UK. But our familiar summer visitor is the Common Swift, Apus apus. The average lifespan of a Swift is nine years, reaching breeding maturity at around four years old. Estimates of the oldest recorded Swift range from 18 to 21 years old.

Summary

According to Britannica, swifts are small, fast-flying aerial birds (family Apodidae, ~75 species) known for long, curved wings, dull plumage, and weak feet suited only for clinging, not perching. They are considered the fastest small birds, capable of reaching speeds of 70 mph (110 kph), and primarily nest in chimneys, caves, or tree hollows using saliva.

Swift is any of about 75 species of agile, fast-flying birds of the family Apodidae (sometimes Micropodidae), in the order Apodiformes, which also includes the hummingbirds. The family is divided into the subfamilies Apodinae, or soft-tailed swifts, and Chaeturinae, or spine-tailed swifts. Almost worldwide in distribution, swifts are absent only from polar regions, southern Chile and Argentina, New Zealand, and most of Australia.

Closely resembling swallows, swifts range in length from about 9 to 23 cm (3.5 to 9 inches). They have exceptionally long wings and chunky, powerful bodies. Their compact plumage is a dull or glossy gray, brown, or black, sometimes with pale or white markings on the throat, neck, belly, or rump. The head is broad, with a short, wide, slightly curved bill. The tail, although often short, may be long and deeply forked. The feet are tiny and weak; with the aid of sharp claws they are used only to cling to vertical surfaces. A swift that lands on flat ground may be unable to regain the air. In soft-tailed forms, the hind toe is rotated forward as an aid in gripping vertical surfaces; in spine-tailed swifts, support is gained from the short needle-tipped tail feathers, and the feet are less modified.

In feeding, swifts course tirelessly back and forth, capturing insects with their large mouths open. They also drink, bathe, and sometimes mate on the wing. They fly with relatively stiff, slow wingbeats (four to eight per second), but the scimitar-like design of the wing makes it the most efficient among birds for high-speed flight. The fastest of small birds, swifts are believed to reach 110 km (70 miles) per hour regularly; reports of speeds three times that figure are not confirmed. The only avian predators known to take swifts with regularity are some of the larger falcons.

The nest of a swift is made of twigs, buds, moss, or feathers and is glued with its sticky saliva to the wall of a cave or the inside of a chimney, rock crack, or hollow tree. A few species attach the nest to a palm frond, an extreme example being the tropical Asian palm swift (Cypsiurus parvus), which glues its eggs to a tiny, flat feather nest on the surface of a palm leaf, which may be hanging vertically or even upside down. Swifts lay from one to six white eggs (usually two or three). Both eggs and young may be allowed to cool toward the environmental temperature in times of food scarcity, slowing development and conserving resources. The young stay in the nest or cling near it for 6 to 10 weeks, the length of time depending largely on the food supply. Upon fledging, they resemble the adults and immediately fly adeptly.

Among the best-known swifts is the chimney swift (Chaetura pelagica), a spine-tailed, uniformly dark gray bird that breeds in eastern North America and winters in South America, nesting in such recesses as chimneys and hollow trees; about 17 other Chaetura species are known worldwide. The common swift (Apus apus), called simply “swift” in Great Britain, is a soft-tailed, black bird that breeds across Eurasia and winters in southern Africa, nesting in buildings and hollow trees; nine other Apus swifts are found throughout temperate regions of the Old World, and some Apus species inhabit South America. The white-collared swift (Streptoprocne zonaris), soft-tailed and brownish black with a narrow white collar, is found from Mexico to Argentina and on larger Caribbean islands, nesting in caves and behind waterfalls. The white-rumped swift (Apus caffer), soft-tailed and black with white markings, is resident throughout Africa south of the Sahara. The white-throated swift (Aeronautes saxatalis), soft-tailed and black with white markings, breeds in western North America and winters in southern Central America, nesting on vertical rock cliffs.

Details

The Apodidae, or swifts, form a family of highly aerial birds. They are superficially similar to swallows, but are not closely related to any passerine species. Swifts are placed in the order Apodiformes along with hummingbirds. The treeswifts are closely related to the true swifts, but form a separate family, the Hemiprocnidae.

Resemblances between swifts and swallows are due to convergent evolution, reflecting similar life styles based on catching insects in flight.

The family name, Apodidae, is derived from the Greek ἄπους (ápous), meaning "footless", a reference to the small, weak legs of these most aerial of birds. The tradition of depicting swifts without feet continued into the Middle Ages, as seen in the heraldic martlet.

Taxonomy

Taxonomists have long classified swifts and treeswifts as relatives of the hummingbirds, a judgment corroborated by the discovery of the Jungornithidae (apparently swift-like hummingbird-relatives) and of primitive hummingbirds such as Eurotrochilus. Traditional taxonomies place the hummingbird family (Trochilidae) in the same order as the swifts and treeswifts (and no other birds); the Sibley-Ahlquist taxonomy treated this group as a superorder in which the swift order was called Trochiliformes.

The taxonomy of the swifts is complicated, with genus and species boundaries widely disputed, especially amongst the swiftlets. Analysis of behavior and vocalizations is complicated by common parallel evolution, while analyses of different morphological traits and of various DNA sequences have yielded equivocal and partly contradictory results.

The Apodiformes diversified during the Eocene, at the end of which the extant families were present; fossil genera are known from all over temperate Europe, between today's Denmark and France, such as the primitive swift-like Scaniacypselus (Early–Middle Eocene) and the more modern Procypseloides (Late Eocene/Early Oligocene – Early Miocene). A prehistoric genus sometimes assigned to the swifts, Primapus (Early Eocene of England), might also be a more distant ancestor.

Description

Swifts are among the fastest of birds in level flight, and larger species like the white-throated needletail have been reported travelling at up to 169 km/h (105 mph). Even the common swift can cruise at 31 metres per second (112 km/h; 70 mph). In a single year the common swift can cover at least 200,000 km, and in a lifetime, about two million kilometres.

The wingtip bones of swiftlets are of proportionately greater length than those of most other birds. Changing the angle between the bones of the wingtips and forelimbs allows swifts to alter the shape and area of their wings to increase their efficiency and maneuverability at various speeds. They share with their relatives the hummingbirds a special ability to rotate their wings from the base, allowing the wing to remain rigid and fully extended and derive power on both the upstroke and downstroke. The downstroke produces both lift and thrust, while the upstroke produces a negative thrust (drag) that is 60% of the thrust generated during the downstrokes, but simultaneously it contributes lift that is also 60% of what is produced during the downstroke. This flight arrangement might benefit the bird's control and maneuverability in the air.

The swiftlets or cave swiftlets have developed a form of echolocation for navigating through dark cave systems where they roost. One species, the three-toed swiftlet, has recently been found to use this navigation at night outside its cave roost too.

Distribution and habitat

Swifts occur on all the continents except Antarctica, but not in the far north, in large deserts, or on many oceanic islands. The swifts of temperate regions are strongly migratory and winter in the tropics. Some species can survive short periods of cold weather by entering torpor, a state similar to hibernation.

Many have a characteristic shape, with a short forked tail and very long swept-back wings that resemble a crescent or a boomerang. The flight of some species is characterised by a distinctive "flicking" action quite different from swallows. Swifts range in size from the pygmy swiftlet (Collocalia troglodytes), which weighs 5.4 g and measures 9 cm (3.5 in) long, to the purple needletail (Hirundapus celebensis), which weighs 184 g (6.5 oz) and measures 25 cm (9.8 in) long.

Exploitation by humans

The hardened saliva nests of the edible-nest swiftlet and the black-nest swiftlet have been used in Chinese cooking for over 400 years, most often as bird's nest soup. Over-harvesting of this expensive delicacy has led to a decline in the numbers of these swiftlets, especially as the nests are also thought to have health benefits and aphrodisiac properties. Most nests are built during the breeding season by the male swiftlet over a period of 35 days. They take the shape of a shallow cup stuck to the cave wall. The nests are composed of interwoven strands of salivary cement and contain high levels of calcium, iron, potassium, and magnesium.

Additional Information

Swift is the long-distance migrant most associated with people, as it chooses to nest amongst our urban dwellings.

We await the return of Swifts to Britain and Ireland in early May and they are given the accolade of bringing the summer with them. Written about in poetry and prose, the dark scythe-winged silhouettes of Swifts wheeling about in a blue sky are often accompanied by their screaming calls.

Although widespread across much of Britain & Ireland, Breeding Bird Survey data have documented a significant decline in their populations. The reasons for these losses are likely to include poor summer weather, a decline in their insect food and continued loss of suitable nesting sites.


#3 Dark Discussions at Cafe Infinity » Come Quotes - IX » Today 17:12:33

Jai Ganesh
Replies: 0

Come Quotes - IX

1. Instead of drifting along like a leaf in a river, understand who you are and how you come across to people and what kind of an impact you have on the people around you and the community around you and the world, so that when you go out, you can feel you have made a positive difference. - Jane Fonda

2. All great and beautiful work has come of first gazing without shrinking into the darkness. - John Ruskin

3. People always fear change. People feared electricity when it was invented, didn't they? People feared coal, they feared gas-powered engines... There will always be ignorance, and ignorance leads to fear. But with time, people will come to accept their silicon masters. - Bill Gates

4. If some years were added to my life, I would give fifty to the study of the Yi, and then I might come to be without great faults. - Confucius

5. The essence of America - that which really unites us - is not ethnicity, or nationality or religion - it is an idea - and what an idea it is: That you can come from humble circumstances and do great things. - Condoleezza Rice

6. When I was a younger actor, I would try to keep it serious all day. But I have found, later on, that the lighter I am about things when I'm going to do a big scene that's dramatic and takes a lot out of you, the better off I am when I come to it. - Al Pacino

7. Thirty was so strange for me. I've really had to come to terms with the fact that I am now a walking and talking adult. - C. S. Lewis

8. I don't believe in pessimism. If something doesn't come up the way you want, forge ahead. If you think it's going to rain, it will. - Clint Eastwood

#4 Jokes » Ice Cream Jokes - I » Today 16:54:53

Jai Ganesh
Replies: 0

Q: What do you get from an Alaskan cow?
A: Ice Cream.
* * *
Q: What happens after you eat an entire gallon of "All Natural" ice cream?
A: You get Breyer's remorse!
* * *
Q: How did Reese eat her ice cream?
A: Witherspoon.
* * *
Q: How do astronauts eat their ice cream?
A: In floats!
* * *
Q: What do you get if you divide the circumference of a bowl of ice cream by its diameter?
A: Pi à la mode.
* * *

#8 Re: Dark Discussions at Cafe Infinity » crème de la crème » Yesterday 21:58:28

2438) William Shockley

Gist:

Work

Amplifying electric signals proved decisive for telephony and radio. First, electron tubes were used for this. To develop smaller and more effective amplifiers, however, it was hoped that semiconductors could be used—materials with properties between those of electrical conductors and insulators. Quantum mechanics gave new insight into the properties of these materials. In 1947 John Bardeen and Walter Brattain produced a semiconductor amplifier, which was further developed by William Shockley. The component was named a “transistor”.

Summary

William B. Shockley (born Feb. 13, 1910, London, Eng.—died Aug. 12, 1989, Palo Alto, Calif., U.S.) was an American engineer and teacher, cowinner (with John Bardeen and Walter H. Brattain) of the Nobel Prize for Physics in 1956 for their development of the transistor, a device that largely replaced the bulkier and less-efficient vacuum tube and ushered in the age of microminiature electronics.

Shockley studied physics at the California Institute of Technology (B.S., 1932) and at the Massachusetts Institute of Technology (Ph.D., 1936). He joined the technical staff of the Bell Telephone Laboratories in 1936 and there began experiments with semiconductors that ultimately led to the invention and development of the transistor. During World War II, he served as director of research for the Antisubmarine Warfare Operations Research Group of the U.S. Navy.

After the war, Shockley returned to Bell Telephone as director of its research program on solid-state physics. Working with Bardeen and Brattain, he resumed his attempts to use semiconductors as amplifiers and controllers of electronic signals. The three men invented the point-contact transistor in 1947 and a more effective device, the junction transistor, in 1948. Shockley was deputy director of the Weapons Systems Evaluation Group of the Department of Defense in 1954–55. He joined Beckman Instruments, Inc., to establish the Shockley Semiconductor Laboratory in 1955. In 1958 he became lecturer at Stanford University, California, and in 1963 he became the first Poniatoff professor of engineering science there (emeritus, 1974). He wrote Electrons and Holes in Semiconductors (1950).

During the late 1960s Shockley became a figure of some controversy because of his widely debated views on the intellectual differences between races. He held that standardized intelligence tests reflect a genetic factor in intellectual capacity and that tests for IQ (intelligence quotient) reveal that blacks are inferior to whites. He further concluded that the higher rate of reproduction among blacks had a retrogressive effect on evolution.

Details

William Bradford Shockley (February 13, 1910 – August 12, 1989) was an American solid-state physicist. He was the manager of a research group at Bell Labs that included John Bardeen and Walter Brattain. The three scientists were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect."

Partly as a result of Shockley's attempts to commercialize a new transistor design in the 1950s and 1960s, California's Silicon Valley became a hotbed of electronics innovation. He recruited brilliant employees, but quickly alienated them with his autocratic and erratic management; they left and founded major companies in the industry.

In his later life, while he was a professor of electrical engineering at Stanford University and afterward, Shockley became known as a racist and a eugenicist.

Early life and education

William Bradford Shockley was born on February 13, 1910, in London to American parents, and was raised in the family's hometown of Palo Alto, California, from the age of 3. His father, William Hillman Shockley, was a mining engineer who speculated in mines for a living and spoke eight languages. His mother, May Bradford, grew up in the American West, graduated from Stanford University, and became the first female U.S. deputy mining surveyor.

Shockley was homeschooled up to the age of eight, due to his parents' dislike of public schools as well as Shockley's habit of violent tantrums. Shockley learned a little physics at a young age from a neighbor who was a Stanford physics professor. Shockley spent two years at Palo Alto Military Academy, then briefly enrolled in the Los Angeles Coaching School to study physics and later graduated from Hollywood High School in 1927.

Shockley obtained a B.S. from Caltech in 1932 and a Ph.D. from MIT in 1936. The title of his doctoral thesis was Electronic Bands in Sodium Chloride, a topic suggested by his thesis advisor, John C. Slater.

Career and research

Shockley was one of the first recruits to Bell Telephone Laboratories by Mervin Kelly, who became director of research at the company in 1936 and focused on hiring solid-state physicists. Shockley joined a group headed by Clinton Davisson in Murray Hill, New Jersey. Executives at Bell Labs had theorized that semiconductors might offer solid-state alternatives to the vacuum tubes used throughout Bell's nationwide telephone system. Shockley conceived a number of designs based on copper-oxide semiconductor materials, and Walter Brattain made an unsuccessful attempt to build a prototype from them in 1939.

Shockley published a number of fundamental papers on solid-state physics in Physical Review. In 1938, he received his first patent, "Electron Discharge Device", on electron multipliers.

When World War II broke out, Shockley's prior research was interrupted and he became involved in radar research in Manhattan (New York City). Also at Bell, early in 1942, Shockley did the first known applied work on delay-line memory, which cost about one-hundredth as much as competing vacuum-tube electronic memory and was approximately as fast. This technology was incorporated into the ENIAC computer by 1945.

In May 1942, he took leave from Bell Labs to become a research director at Columbia University's Anti-Submarine Warfare Operations Group. This involved devising methods for countering the tactics of submarines with improved convoying techniques, optimizing depth charge patterns, and so on. Shockley traveled frequently to the Pentagon and Washington to meet high-ranking officers and government officials.

In 1944, he organized a training program for B-29 bomber pilots to use new radar bomb sights. In late 1944, he took a three-month tour to bases around the world to assess the results. For this project, Secretary of War Robert Patterson awarded Shockley the Medal for Merit on October 17, 1946. In July 1945, the War Department asked Shockley to prepare a report on the question of probable casualties from an invasion of the Japanese mainland. Shockley concluded:

If the study shows that the behavior of nations in all historical cases comparable to Japan's has in fact been invariably consistent with the behavior of the troops in battle, then it means that the Japanese dead and ineffectives at the time of the defeat will exceed the corresponding number for the Germans. In other words, we shall probably have to kill at least 5 to 10 million Japanese. This might cost us between 1.7 and 4 million casualties including 400,000 to 800,000 killed.

This report influenced the decision of the United States to drop atomic bombs on Hiroshima and Nagasaki, which preceded the surrender of Japan.

Shockley was the first physicist to propose a log-normal distribution to model the creation process for scientific research papers.


#9 This is Cool » Lotion » Yesterday 20:22:19

Jai Ganesh
Replies: 0

Lotion

Gist

Lotion is primarily used to moisturize and hydrate skin, keeping it soft, smooth, and supple by locking in moisture and restoring the skin's protective barrier. This helps relieve dryness, flakiness, and itching, while also offering protection from environmental stressors and enhancing the skin's appearance. A lotion is a lightweight emulsion, usually of oil and water, designed for easy application to the body to maintain skin health and comfort.

Is lotion better than moisturizer?

Neither lotion nor moisturizer is inherently "better"; they serve different needs. Lotion is lighter (more water, less oil) and suited to daily body hydration, normal or oily skin, and hot climates, while moisturizers (creams and ointments) are thicker, richer, and better for deep hydration, barrier repair, and very dry skin, and are often used on the face. Choose based on the body area and your skin's needs: lotion for the body, a richer moisturizer for the face and very dry spots.

Summary

How to Use Body Lotion:

* Choose the right body lotion for your skin type.
* Apply lotion to cleansed skin immediately after a shower.
* Place a coin-sized dollop of lotion on your palm and apply from the bottom up.
* Evenly distribute the lotion over your entire body.
* Use a separate product for your face lotion or cream.

Skin-care preparations

Preparations for the care of the skin form a major line of cosmetics. The basic step in facial care is cleansing, and soap and water is still one of the most effective means. Cleansing creams and lotions are useful, however, if heavy makeup is to be removed or if the skin is sensitive to soap. Their active ingredient is essentially oil, which acts as a solvent and is combined in an emulsion (a mixture of liquids in which one is suspended as droplets in another) with water. Cold cream, one of the oldest beauty aids, originally consisted of water beaten into mixtures of such natural fats as lard or almond oil, but modern preparations use mineral oil combined with an emulsifier that helps disperse the oil in water. Emollients (softening creams) and night creams are heavier cold creams that are formulated to encourage a massaging action in application; they often leave a thick film on the face overnight, thus minimizing water loss from the skin during that period.

Hand creams and lotions are used to prevent or reduce the dryness and roughness arising from exposure to household detergents, wind, sun, and dry atmospheres. Like facial creams, they act largely by replacing lost water and laying down an oil film to reduce subsequent moisture loss while the body’s natural processes repair the damage.

Details

Lotion is a low-viscosity topical preparation, typically an emulsion of oil and water, intended for application to unbroken skin for moisturizing, protective, cosmetic, or medicinal purposes. By contrast, creams and gels have higher viscosity, typically due to lower water content. Lotions are applied to external skin with bare hands, a brush, a clean cloth, or cotton wool.

While a lotion may be used as a medicine delivery system, many lotions, especially hand lotions, body lotions, and allergy lotions, are meant instead simply to smooth, moisturize, soften and, sometimes, perfume the skin.

Medicine delivery

Calamine lotion is used to treat itching.

Dermatologists can prescribe lotions to treat or prevent skin diseases. It is not unusual for the same drug ingredient to be formulated into a lotion, cream and ointment. Creams are the most convenient of the three but inappropriate for application to regions of hairy skin such as the scalp, while a lotion is less viscous and may be readily applied to these areas (many medicated shampoos are in fact lotions). Historically, lotions also had an advantage in that they may be spread thinly compared to a cream or ointment and may economically cover a large area of skin, but product research has steadily eroded this distinction. Non-comedogenic lotions are recommended for acne-prone skin.

Lotions can be used for the delivery to the skin of medications such as:

* Antibiotics
* Antiseptics
* Antifungals
* Corticosteroids
* Anti-acne agents
* Soothing, smoothing, moisturizing or protective agents (such as calamine)
* Anti-allergens

Occupational use

Since health care workers must wash their hands frequently to prevent disease transmission, hospital-grade lotion is recommended to prevent skin dermatitis caused by frequent exposure to cleaning agents in the soap. A 2006 study found that application of hospital-grade lotion after hand washing significantly reduced skin roughness and dryness.

Care must be taken not to use consumer lotions in a hospital environment, as the perfumes and allergens may be a danger to those who are immunodeficient or with allergies.

Cosmetic uses

Most cosmetic lotions are moisturizing lotions, although other forms, such as tanning lotion, also exist.

Cosmetic lotions, including products marketed for anti-aging, often contain fragrances or other ingredients intended to modify the appearance or feel of the skin. The Food and Drug Administration voiced concern about lotions not classified as drugs that advertise anti-aging or anti-wrinkle properties.

Production

Most commercial lotions are oil-in-water emulsions — where oil droplets are dispersed in water — stabilized by emulsifiers such as cetearyl alcohol. Water-in-oil lotions, in which water droplets are dispersed in oil, are also produced and have different sensory and absorption properties. The key components of a skin care lotion, cream or gel emulsion (that is, mixtures of oil and water) are the aqueous and oily phases, an emulsifier to prevent separation of these two phases, and, if used, the drug substance or substances. Various other ingredients such as fragrances, glycerol, petroleum jelly, dyes, preservatives, proteins and stabilizing agents are commonly added to lotions.

Manufacturing lotions and creams can be completed in two cycles:

* Emollients and lubricants are dispersed in oil with blending and thickening agents.
* Perfume, color and preservatives are dispersed in the water cycle. Active ingredients are broken up in both cycles depending on the raw materials involved and the desired properties of the lotion or cream.

A typical oil-in-water manufacturing process may be:

Step 1: Add flake/powder ingredients to the oil being used to prepare the oil phase.
Step 2: Disperse active ingredients.
Step 3: Prepare the water phase containing emulsifiers and stabilizers.
Step 4: Mix the oil and water to form an emulsion. (Note: This is aided by heating to between 110 and 185 °F (45–85 °C), depending on the formulation and viscosity desired.)
Step 5: Continue mixing until the end product is 'completed'.

Potential health risks

Lotions are generally considered safe for typical cosmetic or therapeutic use, but certain formulations or usage patterns can be associated with adverse effects, including irritation, increased absorption of active ingredients, or allergic reactions.

Acne

Depending on their composition, lotions can be comedogenic, meaning that they can result in the increased formation of comedones (clogged hair follicles). People who are prone to acne or forming comedones often prefer lotions that are designed to be non-comedogenic (not causing outbreaks).

Systemic absorption

All topical products, including lotions, can result in the percutaneous (through the skin) absorption of their ingredients. Though this has some use as a route of drug administration, it more commonly results in unintended side effects. For example, medicated lotions such as Diprolene are often used with the intention of exerting only local effects, but absorption of the drug through the skin can occur to a small degree, resulting in systemic side effects such as hyperglycemia and glycosuria.

Absorption through the skin is increased when lotions are applied and then covered with an occlusive layer, when they are applied to large areas of the body, or when they are applied to damaged or broken skin.

Allergens

Lotions containing some aromas or food additives may trigger an immune reaction or even cause users to develop new allergies.

There is currently no regulation over use of the term "hypoallergenic", and even pediatric skin products with the label were found to still contain allergens. Those with eczema are especially vulnerable to an allergic reaction with lotion, as their compromised skin barrier allows preservatives to bind with and activate immune cells.

The American Academy of Allergy, Asthma, and Immunology released a warning in 2014 that natural lotion containing ingredients commonly found in food (such as goats milk, cow's milk, coconut milk, or oil) may introduce new allergies, and an allergic reaction when those foods are later consumed. A 2021 study found that "frequent skin moisturization in early life might promote the development of food allergy, most likely through transcutaneous sensitization".

Additional Information

The benefits of body lotion lie in skin health and vitality. Here are seven compelling reasons why incorporating body lotion into your daily skincare routine is essential:

Hydrates and Nourishes the Skin

Just like the face, the body loses moisture throughout the day due to various external factors. Body lotions are formulated to support the skin barrier and prevent moisture evaporation, keeping the skin hydrated, soft, and smooth. Look for formulas like our NIVEA Express Hydration Body Lotion, which provides fast-absorbing moisture, ideal for the summer months.

Keeps the skin healthy

Body lotions with nourishing ingredients help repair the skin's natural defense system and provide a protective shield against environmental pollutants, harsh weather conditions, and UV rays. Antioxidants and vitamins present in many lotions shield the skin from damage, preventing premature aging and discoloration.

Softens your skin & soothes rough patches

Regular application of body lotion can help soften and soothe dry, rough skin, as well as alleviate minor skin irritations like rashes. Ingredients like aloe vera, rose water, and almond oil have soothing properties that calm inflammation and redness.

Takes care of Calluses

Body lotion's emollient properties make it effective in softening calluses and moisturizing dry areas like elbows, knees, and feet. Look for rich formulations like the NIVEA Rich Body Milk, a 5-in-1 complete care product that provides deep moisture and dry-out protection for rough, dry skin.

Helps minimize aging signs

Body lotion delivers essential nutrients that promote collagen and elastin production, key proteins for maintaining skin elasticity and firmness. By keeping the skin hydrated, body lotion helps reduce the appearance of fine lines, wrinkles, and pigmentation, promoting a youthful complexion.

Makes your skin glow

Regular use of body lotion improves skin tone, restores its natural radiance, and enhances overall complexion, leaving you with glowing, healthy-looking skin.

Makes you feel and smell good

A luxurious body lotion with a pleasant fragrance not only makes your skin feel good but also uplifts your mood. Look for indulgent formulas like our NIVEA Smooth Body Milk, enriched with shea butter for soft, smooth skin and a delightful scent.

How to use body lotion

To reap the maximum benefits of body lotion, follow these simple steps:

* Take an adequate amount of body lotion into your palm.
* Rub your palms together to warm up the lotion.
* Apply the lotion to your body using circular motions, focusing on dry areas like elbows and knees.
* Massage the lotion until it is fully absorbed into the skin.


#10 Re: This is Cool » Miscellany » Yesterday 18:16:37

2500) Brain injury/Traumatic Brain Injury

Gist

A Traumatic Brain Injury (TBI) results from an external force—such as a blow, jolt, or penetrating object—causing temporary or permanent damage to brain function, ranging from concussions to severe cognitive, physical, and emotional impairments. Common symptoms include headache, confusion, dizziness, fatigue, and memory loss.

TBIs are sometimes called brain injuries or even head injuries. Some types of TBI can cause temporary or short-term problems with brain function, including problems with how a person thinks, understands, moves, communicates, and acts. More serious TBIs can lead to severe and permanent disability—and even death.

Summary

A traumatic brain injury (TBI) happens when a hit to the head or a penetrating object injures your brain. TBIs range from mild to severe and may affect your thinking, movement or emotions, causing headaches, confusion or memory loss. Treatment options are available to help you recover.

What Is a Traumatic Brain Injury?

A traumatic brain injury (TBI) happens when an outside force damages your brain and affects how it works. This can occur after a fall, a hard hit to your head, a vehicle accident or when something goes through your skull.

Symptoms can affect your body, thinking and emotions. You may have headaches, confusion, short-term memory loss, and mood or behavior changes. TBIs may be life-threatening. They can cause short-term or long-term health problems that affect many parts of your daily life.

Treatment is available and depends on how serious the injury is.

Types of traumatic brain injuries

There are two types:

* Penetrating TBI: This is when something pierces your skull, enters your brain tissue and damages a part of your brain. Healthcare providers may call these open TBIs.
* Blunt TBI (closed head TBI): This is when something hits your head hard enough that your brain bounces or twists around inside your skull.

What are the severity levels of TBIs?

Healthcare providers classify traumatic brain injuries as being mild, moderate or severe. They may use the term “concussion” when talking about mild TBI. Providers typically group moderate and severe TBIs together.

* Mild TBI: More than 75% of all TBIs are mild. But even mild TBIs may cause significant and long-term issues. For example, you may have trouble returning to your daily routine, including being able to work.
* Moderate and severe TBI: These are medical emergencies. Many develop into significant and long-term health issues.

Details

Brain injury, also known as brain damage or neurotrauma, is the destruction or degeneration of brain cells. It may result from external trauma, such as accidents or falls, or from internal factors, such as strokes, infections, or metabolic disorders.

Traumatic brain injury (TBI), the most common type of brain injury, is typically caused by external physical trauma to the head. Acquired brain injuries occur after birth, in contrast to congenital brain injuries that patients are born with.

In addition, brain injuries can be classified by timing: primary injuries occur at the moment of trauma, while secondary injuries develop afterward due to physiological responses. They can also be categorized by location: focal injuries affect specific areas, whereas diffuse injuries involve widespread brain regions.

The symptoms and complications of brain injuries vary greatly depending on the area(s) of the brain injured, the individual case, the cause of the injury and whether the person receives treatment. People may suffer from headaches, vomit or lose consciousness (potentially falling into a coma or a similar disorder of consciousness) after a brain injury. Long-term cognitive impairment, disturbances in language and motor skills, emotional dysfunction and changes in personality are common.

Treatments for brain injuries include preventing further injuries, medication, physical therapy, psychotherapy, occupational therapy and surgery. Because of neuroplasticity, the brain can partially recover function by forming new neural connections to compensate for damaged areas. Patients may regain adaptive skills such as movement and speech, especially if they undergo therapy and practice.

Classification:

Focal and diffuse

Focal brain injuries affect only a single area of the brain; they result from direct force to the head and manifest as haemorrhages, contusions, and subdural and epidural haematomas. Diffuse brain injuries cause widespread damage to all or many areas, and are caused by diffuse axonal injuries, hypoxia, ischaemia and vascular injuries. When both are severe, focal brain injuries are deadlier than diffuse ones; severe focal and diffuse injuries have mortality rates of 40% and 25% respectively. Diffuse brain injuries, however, more often result in long-term neurological and cognitive deficits.

Primary and secondary

Primary brain injuries, most of which are traumatic brain injuries, occur directly because of mechanical forces that deform the brain. Secondary brain injuries result from conditions, such as hypoxia, ischaemia, oedema, hydrocephalus and intracranial hypertension, that may or may not be the aftereffects of primary brain injuries.

Signs and symptoms

Symptoms of brain injuries vary based on the severity of the injury, the area of the brain injured, and how much of the brain was affected. The three categories used for classifying the severity of brain injuries are mild, moderate and severe.

Severity of injuries:

Mild brain injuries

When caused by a blow to the head, a mild brain injury is known as a concussion. Symptoms of a mild brain injury include headaches, confusion, tinnitus, fatigue and changes in sleep patterns, mood or behavior. Other symptoms include trouble with memory, concentration, attention or thinking. Because mental fatigue can be attributed to many disorders, patients may not realise the connection between fatigue and a minor brain injury.

Moderate/severe brain injuries

Cognitive symptoms include confusion, aggression, abnormal behavior and slurred speech. Physical symptoms include a loss of consciousness, headaches that worsen or do not go away, vomiting or nausea, convulsions, brain pulsation, abnormal dilation of the eyes, inability to wake from sleep, weakness in extremities and a loss of coordination.

Symptoms in children

Young children could be unable to communicate their physical states, emotions and thought processes, so parents, physicians and caregivers may need to observe their behaviours to discern symptoms. Signs include changes in eating habits, persistent anger, sadness, attention loss, losing interest in activities they used to enjoy, or sleep problems.

Complications:

Physiological effects

Physiological complications of a brain injury, caused by damage to the neurons, nerve tracts or sections of the brain, can occur immediately or at varying times after the injury. The immediate response can take many forms. Initially, there may be symptoms such as swelling, pain, bruising, or loss of consciousness. Headaches, dizziness and fatigue, which can develop as time progresses, may become permanent or persist for a long time.

Brain damage predisposes patients to seizures, Parkinson's disease, dementia and hormone-secreting gland disorders; monitoring is essential for detecting the development of these diseases and treating them promptly.

Diffuse brain injuries, brain injuries that result in intracranial hypertension and brain injuries affecting parts of the brain responsible for consciousness may induce a coma, a prolonged period of deep unconsciousness. Severe brain injuries may cause a persistent vegetative state in which a patient displays wakefulness without any awareness of his or her surroundings.

Brain death occurs when all activity of the brain is deemed to have irreversibly ceased. The prerequisite for considering brain death is the presence of an injury, bodily status (e.g. hyperpyrexia) or disease that has severely damaged the entire brain. After this has been confirmed, the criteria for ascertaining brain death are an absence of brain activity 24 hours after a patient has been resuscitated, an absence of brainstem reflexes (including the pupillary response and gag reflex) and an absence of spontaneous breathing even as carbon dioxide accumulates in the blood.

Cognitive effects

Post-traumatic amnesia, and issues with both long- and short-term memory, are common with brain damage, as is temporary aphasia, or impairment of language. Tissue damage and loss of blood flow caused by the injury may cause both of these issues to become permanent. Apraxia, the impairment of motor coordination and movement, has also been documented.

Cognitive effects can depend on the location of the brain that was damaged, and certain types of impairments can be attributed to damage to certain areas of the brain. Larger lesions tend to cause worse symptoms and more complicated recoveries.

Brain lesions in Wernicke's and Broca's areas are correlated with language, speech and category-specific disorders. Wernicke's aphasia is associated with word retrieval deficits, unknowingly making up words (neologisms), and problems with language comprehension. The symptoms of Wernicke's aphasia are caused by damage to the posterior section of the superior temporal gyrus.

Damage to Broca's area typically produces symptoms like omitting functional words (agrammatism), sound production changes, alexia, agraphia, and problems with comprehension and production. Broca's aphasia is indicative of damage to the posterior inferior frontal gyrus of the brain.

The impairment of a cognitive process following a brain injury does not necessarily indicate that the damaged area is wholly responsible for the process that is impaired. For example, in pure alexia, the ability to read is destroyed by a lesion damaging both the left visual field and the connection between the right visual field and the language areas (Broca's area and Wernicke's area). However, this does not mean one with pure alexia is incapable of comprehending speech—merely that there is no connection between their working visual cortex and language areas—as is demonstrated by the fact that people with pure alexia can still write, speak, and even transcribe letters without understanding their meaning.

Lesions to the fusiform gyrus often result in prosopagnosia, the inability to distinguish faces and other complex objects from each other. Lesions in the amygdala eliminate the enhanced activation that occipital and fusiform visual areas otherwise show in response to fear when the amygdala is intact. Amygdala lesions change the functional pattern of activation to emotional stimuli in regions that are distant from the amygdala.

Other lesions to the visual cortex have different effects depending on the location of the damage. Lesions to V1, for example, can cause blindsight in different areas of the brain depending on the size of the lesion and location relative to the calcarine fissure. Lesions to V4 can cause color-blindness, and bilateral lesions to MT/V5 can cause the loss of the ability to perceive motion. Lesions to the parietal lobes may result in agnosia, an inability to recognize complex objects, smells, or shapes, or amorphosynthesis, a loss of perception on the opposite side of the body.

Psychological effects

There are documented cases of lasting psychological effects as well, such as emotional changes often caused by damage to the various parts of the brain that control emotions and behaviour. Individuals may experience sudden, severe mood swings that subside quickly. Emotional changes, which may not be triggered by a specific event, can cause distress to the injured party and their family and friends. Brain injuries increase the risk of developing depression, bipolar disorder and schizophrenia. The more severe a brain injury is, the likelier it is to cause bipolar disorder or schizophrenia; the correlation between brain injuries and mental illness is stronger in female and older patients. Often, counseling in either a one-on-one or group setting is suggested for those who experience emotional dysfunction after their injury.

Any type of acquired brain injury can result in changes in personality, including, with regards to the Big Five personality traits, increased neuroticism, decreased extraversion and decreased conscientiousness. If the patient is aware of the change in his or her cognitive capacity, personality and mental state after an injury, he or she might feel disconnected from his or her pre-injury identity, leading to irritability, emotional distress and a disrupted concept of self.

Additional Information

Traumatic brain injury (TBI) happens when a sudden, external, physical assault damages the brain. It is one of the most common causes of disability and death in adults. TBI is a broad term that describes a vast array of injuries that happen to the brain. The damage can be focal (confined to one area of the brain) or diffuse (happens in more than one area of the brain). The severity of a brain injury can range from a mild concussion to a severe injury that results in coma or even death.

What are the different types of TBI?

Brain injury may happen in one of two ways:

* Closed brain injury. Closed brain injuries happen when there is a nonpenetrating injury to the brain with no break in the skull. A closed brain injury is caused by a rapid forward or backward movement and shaking of the brain inside the bony skull that results in bruising and tearing of brain tissue and blood vessels. Closed brain injuries are usually caused by car accidents, falls, and increasingly, in sports. Shaking a baby can also result in this type of injury (called shaken baby syndrome).

* Penetrating brain injury. Penetrating, or open head injuries happen when there is a break in the skull, such as when a bullet pierces the brain.

What is diffuse axonal injury (DAI)?

Diffuse axonal injury is the shearing (tearing) of the brain's long connecting nerve fibers (axons) that happens when the brain is injured as it shifts and rotates inside the bony skull. DAI usually causes coma and injury to many different parts of the brain. The changes in the brain are often microscopic and may not be evident on computed tomography (CT scan) or magnetic resonance imaging (MRI) scans.

What is primary and secondary brain injury?

Primary brain injury refers to the sudden and profound injury to the brain that is considered to be more or less complete at the time of impact. This happens at the time of the car accident, gunshot wound, or fall.

Secondary brain injury refers to the changes that evolve over a period of hours to days after the primary brain injury. It includes an entire series of steps or stages of cellular, chemical, tissue, or blood vessel changes in the brain that contribute to further destruction of brain tissue.

What causes a head injury?

There are many causes of head injury in children and adults. The most common injuries are from motor vehicle accidents (where the person is either riding in the car or is struck as a pedestrian), violence, falls, or as a result of shaking a child (as seen in cases of child abuse).

What causes bruising and internal damage to the brain?

When there is a direct blow to the head, the bruising of the brain and the damage to the internal tissue and blood vessels are due to a mechanism called coup-contrecoup. A bruise directly related to trauma at the site of impact is called a coup lesion (pronounced COO). As the brain jolts backward, it can hit the skull on the opposite side and cause a bruise called a contrecoup lesion. The jarring of the brain against the sides of the skull can cause shearing (tearing) of the internal lining, tissues, and blood vessels, leading to internal bleeding, bruising, or swelling of the brain.

What are the possible results of brain injury?

Some brain injuries are mild, with symptoms disappearing over time with proper attention. Others are more severe and may result in permanent disability. The long-term or permanent results of brain injury may need post-injury and possibly lifelong rehabilitation. Effects of brain injury may include:

* Cognitive deficits

** Coma
** Confusion
** Shortened attention span
** Memory problems and amnesia
** Problem-solving deficits
** Problems with judgment
** Inability to understand abstract concepts
** Loss of sense of time and space
** Decreased awareness of self and others
** Inability to accept more than one- or two-step commands at the same time

* Motor deficits

** Paralysis or weakness
** Spasticity (tightening and shortening of the muscles)
** Poor balance
** Decreased endurance
** Inability to plan motor movements
** Delays in getting started
** Tremors
** Swallowing problems
** Poor coordination

* Perceptual or sensory deficits

** Changes in hearing, vision, taste, smell, and touch
** Loss of sensation or heightened sensation of body parts
** Left- or right-sided neglect
** Difficulty understanding where limbs are in relation to the body
** Vision problems, including double vision, lack of visual acuity, or limited range of vision

* Communication and language deficits

** Difficulty speaking and understanding speech (aphasia)
** Difficulty choosing the right words to say (aphasia)
** Difficulty reading (alexia) or writing (agraphia)
** Difficulty knowing how to perform certain very common actions, like brushing one's teeth (apraxia)
** Slow, hesitant speech and decreased vocabulary
** Difficulty forming sentences that make sense
** Problems identifying objects and their function
** Problems with reading, writing, and ability to work with numbers

* Functional deficits

** Impaired ability with activities of daily living (ADLs), such as dressing, bathing, and eating
** Problems with organization, shopping, or paying bills
** Inability to drive a car or operate machinery

* Social difficulties

** Impaired social capacity resulting in difficult interpersonal relationships
** Difficulties in making and keeping friends
** Difficulties understanding and responding to the nuances of social interaction

* Regulatory disturbances

** Fatigue
** Changes in sleep patterns and eating habits
** Dizziness
** Headache
** Loss of bowel and bladder control

* Personality or psychiatric changes

** Apathy
** Decreased motivation
** Emotional lability
** Irritability
** Anxiety and depression
** Disinhibition, including temper flare-ups, aggression, cursing, lowered frustration tolerance, and inappropriate sexual behavior

Certain psychiatric disorders are more likely to develop if damage changes the chemical composition of the brain.

* Traumatic Epilepsy

** Epilepsy can happen with a brain injury, but more commonly with severe or penetrating injuries. While most seizures happen immediately after the injury, or within the first year, it is also possible for epilepsy to surface years later. Epilepsy includes both major or generalized seizures and minor or partial seizures.

Can the brain heal after being injured?

Most studies suggest that once brain cells are destroyed or damaged, for the most part, they do not regenerate. However, recovery after brain injury can take place, especially in younger people, as, in some cases, other areas of the brain make up for the injured tissue. In other cases, the brain learns to reroute information and function around the damaged areas. The exact amount of recovery is not predictable at the time of injury and may be unknown for months or even years. Each brain injury and rate of recovery is unique. Recovery from a severe brain injury often involves a prolonged or lifelong process of treatment and rehabilitation.

What is coma?

Coma is an altered state of consciousness that may be very deep (unconsciousness) so that no amount of stimulation will cause the patient to respond. It can also be a state of reduced consciousness, so that the patient may move about or respond to pain. Not all patients with brain injury are comatose. The depth of coma, and the time a patient spends in a coma varies greatly depending on the location and severity of the brain injury. Some patients emerge from a coma and have a good recovery. Other patients have significant disabilities.

How is coma measured?

Depth of the coma is usually measured in the emergency and intensive care settings using the Glasgow Coma Scale. The scale (from 3 to 15) evaluates eye opening, verbal response, and motor response. A higher score indicates a greater degree of consciousness and awareness.
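
As a rough illustration of how the scale is used, here is a minimal Python sketch. The component ranges (eye 1-4, verbal 1-5, motor 1-6) and the 13-15 / 9-12 / 3-8 severity bands are the commonly used clinical convention, assumed here rather than taken from the text above.

    # Minimal sketch of Glasgow Coma Scale scoring.  The component ranges
    # and the severity bands are the common convention (an assumption here).

    def gcs_total(eye, verbal, motor):
        """Sum the three GCS components after range-checking them."""
        if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
            raise ValueError("GCS component out of range")
        return eye + verbal + motor

    def gcs_severity(total):
        """Map a total score (3-15) onto the usual severity bands."""
        if total >= 13:
            return "mild"
        if total >= 9:
            return "moderate"
        return "severe"

    score = gcs_total(eye=3, verbal=4, motor=5)
    print(score, gcs_severity(score))   # 12 moderate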

In rehabilitation settings, there are several scales and measures used to rate and record the progress of the patient. Some of the most common of these scales are described below.

* Rancho Los Amigos 10 Level Scale of Cognitive Functioning. This is a revision of the original Rancho 8 Level Scale, which is based on how the patient reacts to external stimuli and the environment. The scale consists of 10 levels, and each patient progresses through them with starts and stops, progress and plateaus.

* Disability Rating Scale (DRS). This scale measures functional change during the course of recovery, rating the person's disability level from none to extreme. The DRS assesses cognitive and physical function, impairment, disability, and handicap and can track a person's progress from "coma to community."

* Functional Independence Measure (FIM). The FIM scale measures a person's level of independence in activities of daily living. Scores range from 1 (complete dependence) to 7 (complete independence).

* Functional Assessment Measure (FAM). This measure is used along with FIM and was developed specifically for people with brain injury.

#11 Dark Discussions at Cafe Infinity » Come Quotes - VIII » Yesterday 17:00:44

Jai Ganesh
Replies: 0

Come Quotes - VIII

1. Those who have come into Formula One without experiencing cars devoid of electronic aids will find it tough. To control 800 horse power relying just on arm muscles and foot sensitivity can turn out to be a dangerous exercise. - Michael Schumacher

2. I've been here before and will come again, but I'm not going this trip through. - Bob Marley

3. This is my 20th year in the sport. I've known swimming and that's it. I don't want to swim past age 30; if I continue after this Olympics, and come back in 2016, I'll be 31. I'm looking forward to being able to see the other side of the fence. - Michael Phelps

4. The American People will come first once again. My plan will begin with safety at home - which means safe neighborhoods, secure borders, and protection from terrorism. There can be no prosperity without law and order. - Donald Trump

5. The goal towards which the pleasure principle impels us - of becoming happy - is not attainable: yet we may not - nay, cannot - give up the efforts to come nearer to realization of it by some means or other. - Sigmund Freud

6. Change will come slowly, across generations, because old beliefs die hard even when demonstrably false. - E. O. Wilson

7. Liberty has never come from Government. Liberty has always come from the subjects of it. The history of liberty is a history of limitations of governmental power, not the increase of it. - Woodrow Wilson

8. If atomic bombs are to be added as new weapons to the arsenals of a warring world, or to the arsenals of nations preparing for war, then the time will come when mankind will curse the names of Los Alamos and of Hiroshima. - J. Robert Oppenheimer

#12 Re: Jai Ganesh's Puzzles » General Quiz » Yesterday 16:45:06

Hi,

#10757. What does the term in Geography Meander cutoff mean?

#10758. What does the term in Geography Cyclone mean?

#13 Re: Jai Ganesh's Puzzles » English language puzzles » Yesterday 16:27:53

Hi,

#5953. What does the noun derision mean?

#5954. What does the noun derogation mean?

#14 Re: Jai Ganesh's Puzzles » Doc, Doc! » Yesterday 16:14:32

Hi,

#2573. What does the medical term Bronchiolitis mean?

#15 Science HQ » Concave Mirror » Yesterday 15:42:10

Jai Ganesh
Replies: 0

Concave Mirror

Gist

A concave mirror is a spherical, inward-curved reflecting surface that converges light rays to a focal point. Known as converging mirrors, they produce varied, often magnified, real or virtual images depending on the object's distance. Common applications include shaving mirrors, telescopes, and headlights, as they can create enlarged images or parallel light beams.

A concave mirror is a spherical mirror with a reflecting surface curved inwards, like the inside of a spoon, that converges (brings together) light rays to a focal point, allowing it to form magnified, diminished, real, or virtual images depending on the object's distance, making it useful in headlights, telescopes, and shaving mirrors. 

Summary

A concave mirror, or converging mirror, has a reflecting surface that is recessed inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror.

The mirrors are called "converging mirrors" because they tend to collect light that falls on them, refocusing parallel incoming rays toward a focus. This is because the light is reflected at different angles at different spots on the mirror as the normal to the mirror surface differs at each spot.

Uses

Concave mirrors are used in reflecting telescopes. They are also used to provide a magnified image of the face for applying make-up or shaving. In illumination applications, concave mirrors are used to gather light from a small source and direct it outward in a beam as in torches, headlamps and spotlights, or to collect light from a large area and focus it into a small spot, as in concentrated solar power. Concave mirrors are used to form optical cavities, which are important in laser construction. Some dental mirrors use a concave surface to provide a magnified image. The mirror landing aid system of modern aircraft carriers also uses a concave mirror.

Details:

Concave Mirror Definition

A concave mirror is a curved mirror where the reflecting surface is on the inner side of the curved shape. It has a surface that curves inward, resembling the shape of the inner surface of a hollow sphere. Concave mirrors are also converging mirrors because they cause light rays to converge or come together after reflection. Depending on the position of the object and the mirror, concave mirrors can form both real and virtual images.

Characteristics of Concave Mirrors

* Converging Mirror: A concave mirror is often referred to as a converging mirror because when light rays strike and reflect from its reflecting surface, they converge or come together at a specific point known as the focal point. This property of concave mirrors allows them to focus light to a point.
* Magnification and Image Formation: When a concave mirror is placed very close to the object, it forms a magnified, erect, and virtual image. The image appears larger than the actual object and is upright. The virtual image is formed as the reflected rays appear to diverge from a point behind the mirror.
* Changing Distance and Image Properties: As the distance between the object and the concave mirror increases, the size of the image decreases. Eventually, at a certain distance, the image transitions from virtual to real. In this case, a real and inverted image is formed on the opposite side of the mirror.
* Versatile Image Formation: Concave mirrors have the ability to create images that can vary in size, from small to large, and in nature, from real to virtual. These characteristics make concave mirrors useful in various applications such as telescopes, shaving mirrors, and reflecting headlights.

Additional Information

If a hollow sphere is cut into parts and the outer surface of a cut part is painted, the inner surface becomes the reflecting surface. This makes a concave mirror.

A concave mirror, or converging mirror, is a type of mirror whose surface curves inward in the middle; looking into one feels like looking into a cave. We use the mirror equation to deal with a concave mirror.

The mirror equation relates the focal length of the mirror to the object and image distances, and from it the position and size of the image can be determined. At every point on the mirror the angle of incidence equals the angle of reflection with respect to the local normal; because the direction of the normal varies across the curved surface, the direction of the reflected ray depends on where the light strikes the mirror.
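
Since the text refers to the mirror equation without stating it, a short worked sketch may help. It assumes the standard convention in which 1/f = 1/d_o + 1/d_i, magnification m = -d_i/d_o, and f > 0 for a concave mirror; the 10 cm focal length and the object distances are illustrative values, not taken from the text.

    # Mirror equation for a concave mirror (real-is-positive convention):
    #     1/f = 1/d_o + 1/d_i,    magnification m = -d_i / d_o

    def image_distance(f, d_o):
        """Solve 1/f = 1/d_o + 1/d_i for the image distance d_i."""
        return 1.0 / (1.0 / f - 1.0 / d_o)

    f = 10.0  # cm, illustrative focal length
    for d_o in (5.0, 15.0, 30.0):   # inside f, between f and 2f, beyond 2f
        d_i = image_distance(f, d_o)
        m = -d_i / d_o
        kind = "virtual, erect" if d_i < 0 else "real, inverted"
        print(f"d_o={d_o:5.1f} cm -> d_i={d_i:6.1f} cm, m={m:+.2f} ({kind})")

Consistent with the properties listed below, an object inside the focal length gives a magnified virtual image, while moving the object farther away produces a real, inverted image that shrinks.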

Properties of Concave Mirrors

* Light converges at a point after it strikes and reflects back from the reflecting surface of the concave mirror. Hence, the concave mirror is also termed a converging mirror.
* When the converging mirror is placed very near to the object, a magnified and virtual image is observed.
* But if the distance between the object and the mirror is increased, the size of the image reduces and a real image is formed.
* The image formed by the concave mirror can be small or enlarged or can be either real or virtual.

Applications of Concave Mirrors

* Used in shaving mirrors: Converging mirrors are widely used as shaving mirrors because of their curved reflecting surfaces. When held close to the face, a concave mirror forms an enlarged and erect image of the face.
* Used in ophthalmoscopes: These mirrors are used in optical instruments such as the ophthalmoscope, which doctors use to examine the eye.
* Uses of the concave mirrors in astronomical telescopes: These mirrors are also widely used in making astronomical telescopes. In an astronomical telescope, a converging mirror of a diameter of about 5 meters or more is used as the objective.
* Concave mirrors used in the headlights of vehicles: Converging mirrors are widely used in the headlights of automobiles and in motor vehicles, torchlights, railway engines, etc. as reflectors. The point light source is kept at the focus of the mirror, so after reflection, the light rays travel over a huge distance as parallel light beams of high intensity.
* Used in solar furnaces: Large converging mirrors are used to focus the sunlight to produce heat in the solar furnace. They are often used in solar ovens to gather a large amount of solar energy in the focus of the concave mirror for heating, cooking, melting metals, etc.

#16 Jokes » Green Bean Jokes » Yesterday 15:09:37

Jai Ganesh
Replies: 0

Q: What water yields the most beautiful Green Beans?
A: Perspiration!
* * *
Q: What vegetable can tie your stomach in knots?
A: String beans.
* * *
Q: Where did the green bean go to have a few drinks?
A: The Salad Bar!
* * *
Q: What kind of beans can not grow in a garden?
A: A jelly bean.
* * *
Q: Why shouldn't you tell a secret on a farm?
A: Because the potatoes have eyes, the corn has ears, and the beans stalk.
* * *

#20 Re: Dark Discussions at Cafe Infinity » crème de la crème » 2026-02-17 19:04:41

2437) Walter Brattain

Gist:

Work

Amplifying electric signals proved decisive for telephony and radio. First, electron tubes were used for this. To develop smaller and more effective amplifiers, however, it was hoped that semiconductors could be used—materials with properties between those of electrical conductors and insulators. Quantum mechanics gave new insight into the properties of these materials. In 1947 John Bardeen and Walter Brattain produced a semiconductor amplifier, which was further developed by William Shockley. The component was named a “transistor”.

Summary

Walter H. Brattain (born Feb. 10, 1902, Amoy, China—died Oct. 13, 1987, Seattle, Wash., U.S.) was an American scientist who, along with John Bardeen and William B. Shockley, won the Nobel Prize for Physics in 1956 for his investigation of the properties of semiconductors—materials of which transistors are made—and for the development of the transistor. The transistor replaced the bulkier vacuum tube for many uses and was the forerunner of microminiature electronic parts.

Brattain earned a Ph.D. from the University of Minnesota, and in 1929 he became a research physicist for Bell Telephone Laboratories. His chief field of research involved the surface properties of solids, particularly the atomic structure of a material at the surface, which usually differs from its atomic structure in the interior. He, Shockley, and Bardeen invented the transistor in 1947. After leaving Bell Laboratories in 1967, Brattain served as adjunct professor at Whitman College, Walla Walla, Wash. (1967–72), then was designated overseer emeritus. He was granted a number of patents and wrote many articles on solid-state physics.

Details

Walter Houser Brattain (February 10, 1902 – October 13, 1987) was an American solid-state physicist who shared the 1956 Nobel Prize in Physics with John Bardeen and William Shockley for their invention of the point-contact transistor. Brattain devoted much of his life to research on surface states.

Early life and education

Walter Houser Brattain was born on February 10, 1902, in Amoy (now Xiamen), China, to American parents, Ross R. Brattain and Ottilie Houser. His father was of Scottish descent, while his mother's parents were both immigrants from Stuttgart, Germany. Ross was a teacher at the Ting-Wen Institute, a private school for Chinese boys. Ottilie was a gifted mathematician. Both were graduates of Whitman College. Ottilie and baby Walter returned to the United States in 1903, and Ross followed shortly afterward. The family lived for several years in Spokane, Washington, then settled on a cattle ranch near Tonasket, Washington, in 1911.

Brattain attended high school in Washington, spending one year at Queen Anne High School, two years at Tonasket High School, and one year at Moran School for Boys. He then attended Whitman College, where he studied under Benjamin H. Brown (physics) and Walter A. Bratton (mathematics). He received his B.S. in 1924 with a double major in Physics and Mathematics. Brattain and his classmates Walker Bleakney, Vladimir Rojansky, and E. John Workman would all go on to have distinguished careers, later becoming known as "the four horsemen of physics".  Brattain's brother Robert, who followed him at Whitman College, also became a physicist.

Brattain obtained an M.A. from the University of Oregon in 1926 and a Ph.D. from the University of Minnesota in 1929. At Minnesota, he had the opportunity to study the new field of quantum mechanics under John Van Vleck. His doctoral thesis, written under John T. Tate, was titled Efficiency of Excitation by Electron Impact and Anomalous Scattering in Mercury Vapor.

Career and research

From 1928 to 1929, Brattain worked for the National Bureau of Standards in Washington, D.C., where he helped to develop piezoelectric frequency standards. In August 1929, he joined Joseph A. Becker at Bell Telephone Laboratories as a research physicist. The two men worked on the heat-induced flow of charge carriers in copper oxide rectifiers. Brattain was able to attend a lecture by Arnold Sommerfeld. Some of their subsequent experiments on thermionic emission provided experimental validation for the Sommerfeld theory. They also did work on the surface state and work function of tungsten and the adsorption of thorium atoms.  Through his studies of rectification and photo-effects on the semiconductor surfaces of cuprous oxide and silicon, Brattain discovered the photo-effect at the free surface of a semiconductor. This work was considered by the Nobel Committee to be one of his chief contributions to solid-state physics.

At the time, the telephone industry was heavily dependent on the use of vacuum tubes to control electron flow and amplify current. Vacuum tubes were neither reliable nor efficient, and Bell Labs wanted to develop an alternative technology. As early as the 1930s Brattain worked with William Shockley on the idea of a semiconductor amplifier that used copper oxide, an early and unsuccessful attempt at creating a field-effect transistor. Other researchers at Bell and elsewhere were also experimenting with semiconductors, using materials such as germanium and silicon, but the pre-war research effort was somewhat haphazard and lacked strong theoretical grounding.

During World War II, both Brattain and Shockley were separately involved in research on magnetic detection of submarines with the National Defense Research Committee at Columbia University. Brattain's group developed magnetometers sensitive enough to detect anomalies in the Earth's magnetic field caused by submarines. As a result of this work, in 1944, Brattain patented a design for a magnetometer head.

In 1945, Bell Labs reorganized and created a group specifically to do fundamental research in solid-state physics, relating to communications technologies. Creation of the sub-department was authorized by the vice-president for research, Mervin Kelly. An interdisciplinary group, it was co-led by Shockley and Stanley O. Morgan.  The new group was soon joined by John Bardeen. Bardeen was a close friend of Brattain's brother Robert, who had introduced John and Walter in the 1930s. They often played bridge and golf together.  Bardeen was a quantum physicist, Brattain a gifted experimenter in materials science, and Shockley, the leader of their team, was an expert in solid-state physics.

#21 Re: This is Cool » Miscellany » 2026-02-17 18:18:13

2499) Induction Coil

Gist

An induction coil is an electrical transformer that converts low-voltage direct current (DC) into high-voltage pulses using a primary coil, a secondary coil with many more turns, and an iron core. It operates by interrupting DC with a magnetic vibrator, causing a rapidly collapsing magnetic field that induces high voltage, creating sparks.

An induction coil is defined as a component used in induction heating that generates eddy currents and heat through a varying magnetic field, and can be designed in various forms such as single turn, multi-turn, pancake, hairpin, or split coils, depending on the application and substrate geometry.

Summary

An induction coil or "spark coil" is a type of electrical transformer. It is used to produce high-voltage pulses from a low-voltage direct current (DC) supply. To create the flux changes necessary to induce voltage in the secondary coil, the direct current in the primary coil is repeatedly interrupted by a vibrating mechanical contact called an interrupter.

The induction coil was the first type of transformer. It was widely used in X-ray machines, spark-gap radio transmitters, arc lighting and quack medical devices from the 1880s to the 1920s. Today its only common use is for ignition coils in internal combustion engines and in physics education to demonstrate induction.

Details

An induction coil or "spark coil" (archaically known as an inductorium or Ruhmkorff coil after Heinrich Rühmkorff) is a type of transformer used to produce high-voltage pulses from a low-voltage direct current (DC) supply. To create the flux changes necessary to induce voltage in the secondary coil, the direct current in the primary coil is repeatedly interrupted by a vibrating mechanical contact called an interrupter. Invented in 1836 by the Irish-Catholic priest Nicholas Callan, also independently by American inventor Charles Grafton Page, the induction coil was the first type of transformer. It was widely used in x-ray machines, spark-gap radio transmitters, arc lighting and quack medical electrotherapy devices from the 1880s to the 1920s. Today its only common use is as the ignition coils in internal combustion engines and in physics education to demonstrate induction.

Construction and function

An induction coil consists of two coils of insulated wire wound around a common iron core (M). One coil, called the primary winding (P), is made from relatively few (tens or hundreds of) turns of coarse wire. The other coil, the secondary winding (S), typically consists of up to a million turns of fine wire (up to 40 gauge).

An electric current is passed through the primary, creating a magnetic field. Because of the common core, most of the primary's magnetic field couples with the secondary winding. The primary behaves as an inductor, storing energy in the associated magnetic field. When the primary current is suddenly interrupted, the magnetic field rapidly collapses. This causes a high voltage pulse to be developed across the secondary terminals through electromagnetic induction. Because of the large number of turns in the secondary coil, the secondary voltage pulse is typically many thousands of volts. This voltage is often sufficient to cause an electric spark to jump across an air gap (G) separating the secondary's output terminals. For this reason, induction coils were called spark coils.
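
To see why the break produces kilovolt pulses, here is a hedged back-of-envelope sketch using Faraday's law, V = N × ΔΦ/Δt. The turn count, flux and interruption time below are illustrative assumptions, not values from the text.

    # Back-of-envelope estimate of the secondary pulse via Faraday's law:
    #     V = N * dPhi/dt
    # All three numbers are illustrative assumptions.

    N_secondary = 20_000     # turns of fine wire in the secondary
    delta_phi   = 5e-4       # Wb of core flux that collapses at "break"
    delta_t     = 100e-6     # s for the interrupter to cut the current

    v_peak = N_secondary * delta_phi / delta_t
    print(f"~{v_peak / 1000:.0f} kV across the secondary")   # ~100 kV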

An induction coil is traditionally characterised by the length of spark it can produce; a '4 inch' (10 cm) induction coil could produce a 4 inch spark. Until the development of the cathode ray oscilloscope, this was the most reliable measurement of peak voltage of such asymmetric waveforms. The relationship between spark length and voltage is linear within a wide range:

4 inches (10 cm) = 110 kV; 8 inches (20 cm) = 150 kV; 12 inches (30 cm) = 190 kV; 16 inches (41 cm) = 230 kV
Curves supplied by a 1984 reference agree closely with those values.
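
As a quick check, the four calibration points above do lie on a single straight line, V = 70 kV + (10 kV per inch) × spark length:

    # The four spark-length calibration points quoted above fall on one
    # straight line: V = 70 kV + (10 kV per inch) * length.

    points = [(4, 110), (8, 150), (12, 190), (16, 230)]   # (inches, kV)
    for inches, kv in points:
        predicted = 70 + 10 * inches
        print(inches, kv, predicted, kv == predicted)     # all True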

Interrupter

To operate the coil continually, the direct current must be repeatedly connected and disconnected to create the magnetic field changes needed for induction. To do that, induction coils use a magnetically activated vibrating arm called an interrupter or break (A) to rapidly connect and break the current flowing into the primary coil. The interrupter is mounted on the end of the coil next to the iron core. When the power is turned on, the increasing current in the primary coil produces an increasing magnetic field, which attracts the interrupter's iron armature (A). After a time, the magnetic attraction overcomes the armature's spring force, and the armature begins to move. When the armature has moved far enough, the pair of contacts (K) in the primary circuit open and disconnect the primary current. Disconnecting the current causes the magnetic field to collapse and create the spark. With the field collapsed, the armature is no longer attracted, so the spring force accelerates it back toward its initial position. A short time later the contacts reconnect, and the current starts building the magnetic field again. The whole process starts over and repeats many times per second.

Opposite potentials are induced in the secondary when the interrupter breaks the circuit and when it closes the circuit. However, the current change in the primary is much more abrupt when the interrupter breaks. When the contacts close, the current builds up slowly in the primary because the supply voltage has a limited ability to force current through the coil's inductance. In contrast, when the interrupter contacts open, the current falls to zero suddenly. So the pulse of voltage induced in the secondary at break is much larger than the pulse induced at close; it is the break that generates the coil's high-voltage output.

Capacitor

An arc forms at the interrupter contacts on break, which has undesirable effects: the arc consumes energy stored in the magnetic field, reduces the output voltage, and damages the contacts. To prevent this, a quenching capacitor (C) of 0.5 to 15 μF is connected across the primary coil to slow the rise in the voltage after a break. The capacitor and primary winding together form a tuned circuit, so on break a damped sinusoidal wave of current flows in the primary and likewise induces a damped wave in the secondary. As a result, the high-voltage output consists of a series of damped waves.
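
For a sense of scale, the ringing frequency of that tuned circuit is f = 1/(2π√(LC)). In the sketch below the primary inductance is an illustrative assumption; the capacitance range is the 0.5 to 15 μF quoted above.

    # Ringing frequency of the primary tank circuit (capacitor + winding):
    #     f = 1 / (2 * pi * sqrt(L * C))

    import math

    L_primary = 20e-3                 # H, assumed primary inductance
    for C in (0.5e-6, 15e-6):         # F, the quoted capacitance range
        f = 1 / (2 * math.pi * math.sqrt(L_primary * C))
        print(f"C = {C * 1e6:4.1f} uF -> ringing at ~{f:,.0f} Hz")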

Construction details

To prevent the high voltages generated in the coil from breaking down the thin insulation and arcing between the secondary wires, the secondary coil uses special construction so as to avoid having wires carrying large voltage differences lying next to each other. In one widely used technique, the secondary coil is wound in many thin flat pancake-shaped sections (called "pies"), connected in series.

The primary coil is first wound on the iron core and insulated from the secondary with a thick paper or rubber coating. Then each secondary subcoil is connected to the coil next to it and slid onto the iron core, insulated from adjoining coils with waxed cardboard disks. The voltage developed in each subcoil isn't large enough to jump between the wires in the subcoil. Large voltages are only developed across many subcoils in series, which are too widely separated to arc over. To give the entire coil a final insulating coating, it is immersed in melted paraffin wax or rosin; the air is evacuated to ensure there are no air bubbles left inside, and the paraffin is allowed to solidify, so the entire coil is encased in wax.

To prevent eddy currents, which cause energy losses, the iron core is made of a bundle of parallel iron wires, individually coated with shellac to insulate them electrically. The eddy currents, which flow in loops in the core perpendicular to the magnetic axis, are blocked by the layers of insulation. The ends of the insulated primary coil often protruded several inches from either end of the secondary coil, to prevent arcs from the secondary to the primary or the core.

Additional Information

An induction heating system consists of an induction power supply for converting line power to an alternating current and delivering it to a workhead, and a work coil for generating an electromagnetic field within the coil. The work piece is positioned in the coil such that this field induces a current in the work piece, which in turn produces heat.

The water-cooled coil is positioned around or bordering the work piece. It does not contact the work piece, and the heat is only produced by the induced current transmitted through the work piece. The material used to make the work piece can be a metal such as copper, aluminum, steel, or brass. It can also be a semiconductor such as graphite, carbon or silicon carbide.

For heating non-conductive materials such as plastics or glass, induction can be used to heat an electrically-conductive susceptor e.g., graphite, which then passes the heat to the non-conducting material.

Induction heating finds applications in processes where temperatures are as low as 100°C (212°F) and as high as 3000°C (5432°F). It is also used in short heating processes lasting for less than half a second and in heating processes that extend over several months.

Induction heating is used in both domestic and commercial cooking and in several industrial applications such as heat treating, soldering, preheating for welding, melting, shrink fitting, sealing, brazing and curing, as well as in research and development.

How Does Induction Heating Work?

Induction produces an electromagnetic field in a coil to transfer energy to a work piece to be heated. When the electrical current passes along a wire, a magnetic field is produced around that wire.

Key Benefits of Induction

The benefits of induction are:

* Efficient and quick heating
* Accurate, repeatable heating
* Safe heating as there is no flame
* Prolonged life of fixturing due to accurate heating

Methods of Induction Heating

Induction heating is done using two methods:

The first method is referred to as eddy current heating, from the I²R losses caused by the resistivity of a work piece’s material. The second is referred to as hysteretic heating, in which energy is produced within a part by the alternating magnetic field generated by the coil modifying the component’s magnetic polarity.

Hysteretic heating occurs in a component up to the Curie temperature, at which the material’s magnetic permeability decreases to 1 and hysteretic heating ceases. Eddy current heating constitutes the remaining induction heating effect.

When the direction of the electrical current alternates (AC), the magnetic field it generates collapses and is re-established in the reverse direction. When a second wire is positioned in that alternating magnetic field, an alternating current is induced in the second wire.

The current transmitted through the second wire and that through the first wire are proportional to each other and also to the inverse of the square of the distance between them.

When the wire in this model is substituted with a coil, the alternating current in the coil generates an electromagnetic field, and while the work piece to be heated is in the field, the work piece takes the place of the second wire and an alternating current is induced in it. The I²R losses due to the resistivity of the work piece's material cause heat to be created in the work piece. This is called eddy current heating.
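
As a minimal numeric sketch of the I²R mechanism (both values below are made-up illustrations of an induced current and a work piece's effective resistance):

    # Eddy current (I^2 R) heating: an induced current I through the work
    # piece's effective resistance R dissipates P = I^2 * R as heat.

    I_induced = 150.0    # A, assumed induced eddy current
    R_piece   = 0.02     # ohm, assumed effective resistance

    P = I_induced ** 2 * R_piece
    print(f"~{P:.0f} W dissipated as heat in the work piece")   # ~450 W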

Working of an Induction Coil

With the help of an alternating electromagnetic field, energy is transmitted to the work piece through a work coil.

The alternating current passing through the coil produces the electromagnetic field, which induces a current in the work piece as a mirror image of the current in the work coil. The work coil, or inductor, is the part of the induction heating system that determines how effectively and efficiently the work piece is heated. Work coils come in numerous types, ranging from simple to complex.

The helical wound (or solenoid) coil is an example of a simple coil; it consists of many turns of copper tube wound around a mandrel. A coil precision-machined from solid copper and brazed together is an example of a complex coil.

#22 This is Cool » Telescope » 2026-02-17 17:41:36

Jai Ganesh
Replies: 0

Telescope

Gist

In physics, a telescope is an optical instrument that collects and magnifies light (or other electromagnetic radiation) from distant objects, making them appear closer and brighter, primarily using lenses or curved mirrors to focus radiation onto an image plane for viewing or analysis. It works by gathering far more light than the naked eye, allowing observation of faint celestial bodies and providing detailed images, acting as the key tool in astronomy to study everything from planets to distant galaxies across various parts of the electromagnetic spectrum (visible light, X-rays, radio waves, etc.). 

The main purpose of an astronomical telescope is to make objects from outer space appear as bright, contrasty and large as possible. That defines its three main functions: light gathering, resolution and magnification.

Summary

A telescope is a device used to observe distant objects by their emission, absorption, or reflection of electromagnetic radiation. Originally, it was an optical instrument using lenses, curved mirrors, or a combination of both to observe distant objects – an optical telescope. Nowadays, the word "telescope" is defined as a wide range of instruments capable of detecting different regions of the electromagnetic spectrum, and in some cases other types of detectors.

The first known practical telescopes were refracting telescopes with glass lenses and were invented in the Netherlands at the beginning of the 17th century. They were used for both terrestrial applications and astronomy.

The reflecting telescope, which uses mirrors to collect and focus light, was invented within a few decades of the first refracting telescope.

In the 20th century, many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s.

Details

A telescope is a device used to form magnified images of distant objects. The telescope is undoubtedly the most important investigative tool in astronomy. It provides a means of collecting and analyzing radiation from celestial objects, even those in the far reaches of the universe.

Galileo revolutionized astronomy when he applied the telescope to the study of extraterrestrial bodies in the early 17th century. Until then, magnification instruments had never been used for this purpose. Since Galileo’s pioneering work, increasingly more powerful optical telescopes have been developed, as has a wide array of instruments capable of detecting and measuring radiation in every region of the electromagnetic spectrum. Observational capability has been further enhanced by the invention of various kinds of auxiliary instruments (e.g., the camera, spectrograph, and charge-coupled device) and by the use of electronic computers, rockets, and spacecraft in conjunction with telescope systems. These developments have contributed dramatically to advances in scientific knowledge about the solar system, the Milky Way Galaxy, and the universe as a whole.

Refracting telescopes

Commonly known as refractors, telescopes of this kind are typically used to examine the Moon, other objects of the solar system such as Jupiter and Mars, and binary stars. The name refractor is derived from the term refraction, which is the bending of light when it passes from one medium to another of different density—e.g., from air to glass. The glass is referred to as a lens and may have one or more components. The physical shape of the components may be convex, concave, or plane-parallel. This diagram illustrates the principle of refraction and the term focal length. The focus is the point, or plane, at which light rays from infinity converge after passing through a lens and traveling a distance of one focal length. In a refractor the first lens through which light from a celestial object passes is called the objective lens. It should be noted that the light will be inverted at the focal plane. A second lens, referred to as the eyepiece lens, is placed behind the focal plane and enables the observer to view the enlarged, or magnified, image. Thus, the simplest form of refractor consists of an objective and an eyepiece, as illustrated in the diagram.

The diameter of the objective is referred to as the aperture; it typically ranges from a few centimetres for small spotting telescopes up to one metre for the largest refractor in existence. The objective, as well as the eyepiece, may have several components. Small spotting telescopes may contain an extra lens behind the eyepiece to erect the image so that it does not appear upside-down. When an object is viewed with a refractor, the image may not appear sharply defined, or it may even have a predominant colour in it. Such distortions, or aberrations, are sometimes introduced when the lens is polished into its design shape. The major kind of distortion in a refractor is chromatic aberration, which is the failure of the differently coloured light rays to come to a common focus. Chromatic aberration can be minimized by adding components to the objective. In lens-design technology, the coefficients of expansion of different kinds of glass are carefully matched to minimize the aberrations that result from temperature changes of the telescope at night.

Eyepieces, which are used with both refractors and reflectors (see below Reflecting telescopes), have a wide variety of applications and provide observers with the ability to select the magnification of their instruments. The magnification, sometimes referred to as magnifying power, is determined by dividing the focal length of the objective by the focal length of the eyepiece. For example, if the objective has a focal length of 254 cm (100 inches) and the eyepiece has a focal length of 2.54 cm (1 inch), then the magnification will be 100. Large magnifications are very useful for observing the Moon and the planets. However, since stars appear as point sources owing to their great distances, magnification provides no additional advantage when viewing them. Another important factor that one must take into consideration when attempting to view at high magnification is the stability of the telescope mounting. Any vibration in the mounting will also be magnified and may severely reduce the quality of the observed image. Thus, great care is usually taken to provide a stable platform for the telescope. This problem should not be associated with that of atmospheric seeing, which may introduce a disturbance to the image because of fluctuating air currents in the path of the light from a celestial or terrestrial object. Generally, most of the seeing disturbance arises in the first 30 metres (100 feet) of air above the telescope. Large telescopes are frequently installed on mountain peaks in order to get above the seeing disturbances.
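
In code, the magnification rule reads as follows; the first call reproduces the 254 cm / 2.54 cm example from the paragraph above, while the 1.27 cm eyepiece in the second call is a hypothetical extra.

    # Magnification = focal length of objective / focal length of eyepiece.

    def magnification(f_objective_cm, f_eyepiece_cm):
        return f_objective_cm / f_eyepiece_cm

    print(magnification(254, 2.54))   # 100.0, the example from the text
    print(magnification(254, 1.27))   # 200.0, a hypothetical shorter eyepiece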

Light gathering and resolution

The most important of all the powers of an optical telescope is its light-gathering power. This capacity is strictly a function of the diameter of the clear objective—that is, the aperture—of the telescope. Comparisons of different-sized apertures for their light-gathering power are calculated by the ratio of their diameters squared; for example, a 25-cm (10-inch) objective will collect four times the light of a 12.5-cm (5-inch) objective ([25 × 25] ÷ [12.5 × 12.5] = 4). The advantage of collecting more light with a larger-aperture telescope is that one can observe fainter stars, nebulae, and very distant galaxies.

Resolving power is another important feature of a telescope. This is the ability of the instrument to distinguish clearly between two points whose angular separation is less than the smallest angle that the observer’s eye can resolve. The resolving power of a telescope can be calculated by the following formula: resolving power = 11.25 seconds of arc/d, where d is the diameter of the objective expressed in centimetres. Thus, a 25-cm-diameter objective has a theoretical resolution of 0.45 second of arc and a 250-cm (100-inch) telescope has one of 0.045 second of arc. An important application of resolving power is in the observation of visual binary stars. There, one star is routinely observed as it revolves around a second star. Many observatories conduct extensive visual binary observing programs and publish catalogs of their observational results. One of the major contributors in this field is the United States Naval Observatory in Washington, D.C.
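
Both formulas are easy to check numerically; this short sketch reproduces the worked examples from the two paragraphs above.

    # Light-gathering power scales as the square of the aperture ratio;
    # resolving power = 11.25 arcseconds / d, with d in centimetres.

    def light_ratio(d1_cm, d2_cm):
        return (d1_cm / d2_cm) ** 2

    def resolving_power_arcsec(d_cm):
        return 11.25 / d_cm

    print(light_ratio(25, 12.5))        # 4.0, as in the text
    print(resolving_power_arcsec(25))   # 0.45 arcsec
    print(resolving_power_arcsec(250))  # 0.045 arcsec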

Most refractors currently in use at observatories have equatorial mountings. The mounting describes the orientation of the physical bearings and structure that permits a telescope to be pointed at a celestial object for viewing. In the equatorial mounting, the polar axis of the telescope is constructed parallel to Earth’s axis. The polar axis supports the declination axis of the instrument. Declination is measured on the celestial sky north or south from the celestial equator. The declination axis makes it possible for the telescope to be pointed at various declination angles as the instrument is rotated about the polar axis with respect to right ascension. Right ascension is measured along the celestial equator from the vernal equinox (i.e., the position on the celestial sphere where the Sun crosses the celestial equator from south to north on the first day of spring). Declination and right ascension are the two coordinates that define a celestial object on the celestial sphere. Declination is analogous to latitude, and right ascension is analogous to longitude. Graduated dials are mounted on the axis to permit the observer to point the telescope precisely. To track an object, the telescope’s polar axis is driven smoothly by an electric motor at a sidereal rate—namely, at a rate equal to the rate of rotation of Earth with respect to the stars. Thus, one can track or observe with a telescope for long periods of time if the sidereal rate of the motor is very accurate. High-accuracy motor-driven systems have become readily available with the rapid advancement of quartz-clock technology. Most major observatories now rely on either quartz or atomic clocks to provide accurate sidereal time for observations as well as to drive telescopes at an extremely uniform rate.

A notable example of a refracting telescope is the 66-cm (26-inch) refractor of the U.S. Naval Observatory. This instrument was used by the astronomer Asaph Hall to discover the two moons of Mars, Phobos and Deimos, in 1877. Today, the telescope is used primarily for observing binary stars. The 91-cm (36-inch) refractor at Lick Observatory on Mount Hamilton, California, U.S., is the largest refracting system currently in operation.

Another type of refracting telescope is the astrograph, which usually has an objective diameter of approximately 20 cm (8 inches). The astrograph has a photographic plateholder mounted in the focal plane of the objective so that photographs of the celestial sphere can be taken. The photographs are usually taken on glass plates. The principal application of the astrograph is to determine the positions of a large number of faint stars. These positions are then published in catalogs such as the AGK3 and serve as reference points for deep-space photography.

Reflecting telescopes

Reflectors are used not only to examine the visible region of the electromagnetic spectrum but also to explore both the shorter- and longer-wavelength regions adjacent to it (i.e., the ultraviolet and the infrared). The name of this type of instrument is derived from the fact that the primary mirror reflects the light back to a focus instead of refracting it. The primary mirror usually has a concave spherical or parabolic shape, and, as it reflects the light, it inverts the image at the focal plane. The diagram illustrates the principle of a concave reflecting mirror. The formulas for resolving power, magnifying power, and light-gathering power, as discussed for refractors, apply to reflectors as well.

The primary mirror is located at the lower end of the telescope tube in a reflector and has its front surface coated with an extremely thin film of metal, such as aluminum. The back of the mirror is usually made of glass, although other materials have been used from time to time. Pyrex was the principal glass of choice for many of the older large telescopes, but new technology has led to the development and widespread use of a number of glasses with very low coefficients of expansion. A low coefficient of expansion means that the shape of the mirror will not change significantly as the temperature of the telescope changes during the night. Since the back of the mirror serves only to provide the desired form and physical support, it does not have to meet the high optical quality standards required for a lens.

Reflecting telescopes have a number of other advantages over refractors. They are not subject to chromatic aberration because reflected light does not disperse according to wavelength. Also, the telescope tube of a reflector is shorter than that of a refractor of the same diameter, which reduces the cost of the tube. Consequently, the dome for housing a reflector is smaller and more economical to construct. So far only the primary mirror for the reflector has been discussed. One might wonder about the location of the eyepiece. The primary mirror reflects the light of the celestial object to the prime focus near the upper end of the tube. Obviously, if an observer put his eye there to observe with a modest-sized reflector, he would block out the light from the primary mirror with his head. Isaac Newton placed a small plane mirror at an angle of 45° inside the prime focus and thereby brought the focus to the side of the telescope tube. The amount of light lost by this procedure is very small when compared to the total light-gathering power of the primary mirror. The Newtonian reflector is popular among amateur telescope makers.

Laurent Cassegrain of France, a contemporary of Newton, invented another type of reflector. Called the Cassegrain telescope, this instrument employs a small convex mirror to reflect the light back through a small hole in the primary mirror to a focus located behind the primary. Some large telescopes of this kind do not have a hole in the primary mirror but use a small plane mirror in front of the primary to reflect the light outside the main tube and provide another place for observation. The Cassegrain design usually permits short tubes relative to their mirror diameter.

One more variety of reflector was invented by another of Newton’s contemporaries, the Scottish astronomer James Gregory. Gregory placed a concave secondary mirror outside the prime focus to reflect the light back through a hole in the primary mirror. Notably, the Gregorian design was adopted for the Earth-orbiting space observatory, the Solar Maximum Mission (SMM), launched in 1980.

Most large reflecting telescopes that are currently in use have a cage at their prime focus that permits the observer to sit inside the telescope tube while operating the instrument. The 5-metre (200-inch) reflector at Palomar Observatory, near San Diego, Calif., is so equipped. While most reflectors have equatorial mounts similar to refractors, the world’s largest reflector, the 10.4-metre (34.1-foot) instrument at the Gran Telescopio Canarias at La Palma, Canary Islands, Spain, has an altitude-azimuth mounting. The significance of the latter design is that the telescope must be moved both in altitude and in azimuth as it tracks a celestial object. Equatorial mountings, by contrast, require motion in only one coordinate while tracking, since the declination coordinate is constant. Reflectors, like refractors, usually have small guide telescopes mounted parallel to their main optical axis to facilitate locating the desired object. These guide telescopes have low magnification and a wide field of view, the latter being a desirable attribute for finding stars or other remote cosmic objects.
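The practical difference between the two mountings can be seen in the standard coordinate conversion. In the sketch below, a star’s declination stays fixed while its hour angle advances (one driven axis for an equatorial mount), yet the computed altitude and azimuth both change continuously (two driven axes for an altitude-azimuth mount). The latitude and declination used are illustrative values.

```python
import math

def equatorial_to_altaz(dec_deg, hour_angle_deg, latitude_deg):
    """Convert equatorial coordinates (declination, hour angle) to
    horizon coordinates (altitude, azimuth) for a given latitude."""
    dec, ha, lat = (math.radians(x) for x in (dec_deg, hour_angle_deg, latitude_deg))
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))
    cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:              # object west of the meridian
        az = 2.0 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# A dec = +20 deg star seen from latitude 28.8 deg N (roughly La Palma):
for ha_deg in (-15.0, 0.0, 15.0):     # 15 deg of hour angle = 1 hour
    alt, az = equatorial_to_altaz(20.0, ha_deg, 28.8)
    print(f"HA {ha_deg:+6.1f} deg -> alt {alt:6.2f} deg, az {az:6.2f} deg")
```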

The parabolic shape of a primary mirror has a basic failing in that it produces a narrow field of view. This can be a problem when one wishes to observe extended celestial objects. To overcome this difficulty, most large reflectors now have a modified Cassegrain design. The central area of the primary mirror has its shape deepened from that of a paraboloid, and the secondary mirror is configured to compensate for the altered primary. The result is the Ritchey-Chrétien design, which has a curved rather than a flat focus. Obviously, the photographic medium must be curved to collect high-quality images across the curved focal plane. The 1-metre telescope of the U.S. Naval Observatory in Flagstaff, Arizona, was one of the early examples of this design.

The Schmidt telescope

The Ritchey-Chrétien design has a good field of view of about 1°. For some astronomical applications, however, photographing larger areas of the sky is mandatory. In 1930 Bernhard Schmidt, an optician at the Hamburg Observatory in Bergedorf, Germany, designed a catadioptric telescope that satisfied the requirement of photographing larger celestial areas. A catadioptric telescope design incorporates the best features of both the refractor and the reflector—i.e., it has both reflective and refractive optics. The Schmidt telescope has a spherically shaped primary mirror. Since parallel light rays that are reflected by the centre of a spherical mirror are focused farther away than those reflected from the outer regions, Schmidt introduced a thin lens (called the correcting plate) at the radius of curvature of the primary mirror. Since this correcting plate is very thin, it introduces little chromatic aberration. The resulting focal plane has a field of view several degrees in diameter.

The National Geographic Society–Palomar Observatory Sky Survey made use of a 1.2-metre (47-inch) Schmidt telescope to photograph the northern sky in the red and blue regions of the visible spectrum. The survey produced 900 pairs of photographic plates (about 7° by 7° each) taken from 1949 to 1956. Schmidt telescopes of the European Southern Observatory in Chile and of the Siding Spring Observatory in Australia have photographed the remaining part of the sky that cannot be observed from Palomar Observatory. (The survey undertaken at the latter included photographs in the infrared as well as in the red and blue spectral regions.)

Multimirror telescopes

The main reason astronomers build larger telescopes is to increase light-gathering power so that they can see deeper into the universe. Unfortunately, the cost of constructing larger single-mirror telescopes increases rapidly—approximately with the cube of the diameter of the aperture. Thus, in order to achieve the goal of increasing light-gathering power while keeping costs down, it has become necessary to explore new, more economical and nontraditional telescope designs.
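As a concrete instance of that scaling rule: doubling the aperture quadruples the light-gathering area but multiplies the estimated cost by 2³ = 8, which is why segmented and other nontraditional designs become attractive.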

The two 10-metre (33-foot) Keck Observatory multimirror telescopes represent such an effort. The first was installed on Mauna Kea on the island of Hawaii in 1992, and a second telescope was completed in 1996. Each of the Keck telescopes comprises 36 contiguous adjustable mirror segments, all under computer control. Even larger multimirror instruments are currently being planned by American and European astronomers.

Special types of optical telescopes:

Solar telescopes

Either a refractor or a reflector may be used for visual observations of solar features, such as sunspots or solar prominences. Special solar telescopes have been constructed, however, for investigations of the Sun that require the use of such ancillary instruments as spectroheliographs and coronagraphs. These telescopes are mounted in towers and have very long focus objectives. Typical examples of tower solar telescopes are found at the Mount Wilson Observatory in California and the McMath-Hulbert Observatory in Michigan. The long focus objective produces a very good scale factor, which in turn makes it possible to look at individual wavelengths of the solar electromagnetic spectrum in great detail. A tower telescope has an equatorially mounted plane mirror at its summit to direct the sunlight into the telescope objective. This plane mirror is called a coelostat. Bernard Lyot constructed another type of solar telescope in 1930 at Pic du Midi Observatory in France. This instrument was specifically designed for photographing the Sun’s corona (its outer atmosphere), which up to that time had been successfully photographed only during solar eclipses. The coronagraph, as this special telescope is called, must be located at a high altitude to be effective. The high altitude is required to reduce the scattered sunlight, which would otherwise degrade the quality of the photograph. The High Altitude Observatory in Colorado has such a coronagraph. The principle has been extended to build instruments that can search for extrasolar planets by blocking out the light of their parent stars. Coronagraphs are also used on board satellites, such as the Solar and Heliospheric Observatory, that study the Sun.

Earth-orbiting space telescopes

While astronomers continue to seek new technological breakthroughs with which to build larger ground-based telescopes, it is readily apparent that the only solution to some scientific problems is to make observations from above Earth’s atmosphere. A series of Orbiting Astronomical Observatories (OAOs) was launched by the National Aeronautics and Space Administration (NASA). The OAO launched in 1972—later named Copernicus—had an 81-cm (32-inch) telescope on board. The most sophisticated observational system placed in Earth orbit so far is the Hubble Space Telescope (HST). Launched in 1990, the HST is essentially a telescope with a 2.4-metre (94-inch) primary mirror. It has been designed to enable astronomers to see into a volume of space 300 to 400 times larger than that permitted by other systems. At the same time, the HST is not impeded by any of the problems caused by the atmosphere. It is equipped with five principal scientific instruments: (1) a wide-field and planetary camera, (2) a faint-object spectrograph, (3) a high-resolution spectrograph, (4) a high-speed photometer, and (5) a faint-object camera. The HST was launched into orbit from the U.S. space shuttle at an altitude of more than 570 km (350 miles) above Earth. Shortly after its deployment in Earth orbit, HST project scientists found that a manufacturing error affecting the shape of the telescope’s primary mirror severely impaired the instrument’s focusing capability. The flawed mirror caused spherical aberration, which limited the ability of the HST to distinguish between cosmic objects that lie close together and to image distant galaxies and quasars. Project scientists devised measures that enabled them to compensate for the defective mirror and correct the imaging problem.

Astronomical transit instruments

These small but extremely important telescopes have played a vital role in mapping the celestial sphere. Astronomical transit instruments are usually refractors with apertures of 15 to 20 cm (6 to 8 inches). (Ole Rømer, a Danish astronomer, is credited with having invented this type of telescope system.) The main optical axis of the instrument is aligned on a north-south line such that its motion is restricted to the plane of the meridian of the observer. The observer’s meridian is a great circle on the celestial sphere that passes through the north and south points of the horizon as well as through the zenith of the observer. Restricting the telescope to motion only in the meridian provides an added degree of stability, but it requires the observer to wait for the celestial object to rotate across his meridian. The latter process is referred to as transiting the meridian, from which the name of the telescope is derived. There are various types of transit instruments—for example, the transit telescope, the vertical circle telescope, and the transit circle and horizontal meridian circle telescopes. The transit telescope determines the right ascension of celestial objects, while the vertical circle measures only their declinations. Transit circles and horizontal meridian circles measure both right ascension and declination at the same time. The final output data of all transit instruments are included in star or planetary catalogs. A notable example of this class of telescopes is the transit circle of the National Astronomical Observatory in Tokyo.
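In computational terms, an object transits when its hour angle is zero, that is, when the local sidereal time equals the object’s right ascension. The Python sketch below uses a standard truncated expression for Greenwich mean sidereal time; the date, longitude, and star are illustrative, and the result is approximate rather than catalog-grade.

```python
from datetime import datetime, timezone

def local_sidereal_time_deg(utc, east_longitude_deg):
    """Approximate local sidereal time in degrees from a UTC instant,
    using a truncated GMST polynomial (good to about a second of time)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    days = (utc - j2000).total_seconds() / 86400.0   # days since J2000.0
    gmst = (280.46061837 + 360.98564736629 * days) % 360.0
    return (gmst + east_longitude_deg) % 360.0

utc = datetime(2026, 2, 17, 12, 0, tzinfo=timezone.utc)
lst = local_sidereal_time_deg(utc, 139.54)   # approx. longitude of Tokyo
ra_deg = 101.29                              # e.g. Sirius, RA ~ 6h 45m
hour_angle = (lst - ra_deg + 180.0) % 360.0 - 180.0
print(f"LST {lst:.2f} deg; hour angle {hour_angle:+.2f} deg "
      f"({hour_angle / 15.0:+.2f} h from transit)")
```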

Astrolabes

Another special type of telescopic instrument is the modern version of the astrolabe. Known as a prismatic astrolabe, it too is used for making precise determinations of the positions of stars and planets. It may sometimes be used inversely to determine the latitude and longitude of the observer, assuming the star positions are accurately known. The aperture of a prismatic astrolabe is small, usually only 8 to 10 cm (3 to 4 inches). A small pool of mercury and a refracting prism make up the other principal parts of the instrument. An image reflected off the mercury is observed along with a direct image to give the necessary position data. The most notable example of this type of instrument is the French-constructed Danjon astrolabe. During the 1970s, however, the Chinese introduced various innovations that resulted in a more accurate and automatic kind of astrolabe, which is now in use at the National Astronomical Observatories of China’s headquarters in Beijing.

The development of the telescope and auxiliary instrumentation:

Evolution of the optical telescope

Galileo is credited with having developed telescopes for astronomical observation in 1609. While the largest of his instruments was only about 120 cm (47 inches) long and had an objective diameter of 5 cm (2 inches), it was equipped with an eyepiece that provided an upright (i.e., erect) image. Galileo used his modest instrument to explore such celestial phenomena as the valleys and mountains of the Moon, the phases of Venus, and the four largest Jovian satellites, which had never been systematically observed before.

The reflecting telescope was developed in 1668 by Newton, though James Gregory had earlier conceived of an alternative reflector design in 1663. Cassegrain introduced another variation of the reflector in 1672. Near the end of the century, others attempted to construct refractors as long as 61 metres (200 feet), but these instruments were too awkward to be effective.

The most significant contribution to the development of the telescope in the 18th century was that of Sir William Herschel. Herschel, whose interest in telescopes was kindled by a modest 5-cm Gregorian, persuaded the king of England to finance the construction of a reflector with a 12-metre (39-foot) focal length and a 120-cm (47-inch) mirror. Herschel is credited with having used this instrument to lay the observational groundwork for the concept of extragalactic “nebulae”—i.e., galaxies outside the Milky Way system.

Reflectors continued to evolve during the 19th century with the work of William Parsons, 3rd earl of Rosse, and William Lassell. In 1845 Lord Rosse constructed in Ireland a reflector with a 185-cm (73-inch) mirror and a focal length of about 16 metres (52 feet). For 75 years this telescope ranked as the largest in the world and was used to explore thousands of nebulae and star clusters. Lassell built several reflectors, the largest of which was on Malta; this instrument had a 124-cm (49-inch) primary mirror and a focal length of more than 10 metres (33 feet). His telescope had greater reflecting power than Rosse’s, and it enabled him to catalog 600 new nebulae as well as to discover several satellites of the outer planets—Triton (Neptune’s largest moon), Hyperion (Saturn’s 8th moon), and Ariel and Umbriel (two of Uranus’s moons).

Refractor telescopes, too, underwent development during the 18th and 19th centuries. The last significant one to be built was the 1-metre (40-inch) refractor at Yerkes Observatory. Installed in 1897, it remains the largest refracting system in the world. Its objective was designed and constructed by the optician Alvan Clark, while the mount was built by the firm of Warner & Swasey.

The reflecting telescope predominated in the 20th century. The rapid proliferation of increasingly larger instruments of this type began with the installation of the 2.5-metre (100-inch) reflector at the Mount Wilson Observatory near Pasadena, Calif., U.S. The technology for mirrors underwent a major advance when the Corning Glass Works (in Steuben county, N.Y., U.S.) developed Pyrex. This borosilicate glass, which undergoes substantially less expansion than ordinary glass does, was used in the 5-metre (200-inch) Hale Telescope built in 1948 at the Palomar Observatory. Pyrex also was utilized in the main mirror of the 6-metre (236-inch) reflector of the Special Astrophysical Observatory in Zelenchukskaya, Russia. Since then, much better materials for mirrors have become available. Cer-Vit, for example, was used for the 4.2-metre (165-inch) William Herschel Telescope of the Roque de los Muchachos Observatory in the Canary Islands, and Zerodur was used for the 10.4-metre (410-inch) reflector at the Gran Telescopio Canarias in the Canary Islands.

Advances in auxiliary instrumentation

Almost as important as the telescope itself are the auxiliary instruments that the astronomer uses to exploit the light received at the focal plane. Examples of such instruments are the camera, spectrograph, photomultiplier tube, charge-coupled device (CCD), and charge injection device (CID). Each of these instrument types is discussed below.

Cameras

American John Draper photographed the Moon as early as 1840 by applying the daguerreotype process. The French physicists A.-H.-L. Fizeau and J.-B.-L. Foucault succeeded in making a photographic image of the Sun in 1845. Five years later astronomers at Harvard Observatory took the first photographs of the stars.

The use of photographic equipment in conjunction with telescopes benefited astronomers greatly, giving them two distinct advantages: first, photographic images provided a permanent record of celestial phenomena, and, second, photographic plates integrated the light from celestial sources over long periods of time and thereby permitted astronomers to see much fainter objects than they would be able to observe visually. Typically, the camera’s photographic plate (or film) was mounted in the focal plane of the telescope. The plate or film consisted of glass or of a plastic material that was covered with a thin layer of a silver compound. The light striking the photographic medium caused the silver compound to undergo a chemical change. When processed, a negative image resulted; i.e., the brightest spots (the Moon and the stars, for example) appeared as the darkest areas on the plate or the film. In the 1980s the CCD supplanted photography in the production of astronomical images.

Spectrographs

Newton noted the interesting way in which a piece of glass can break up light into different bands of colour, but it was not until 1814 that the German physicist Joseph von Fraunhofer discovered the lines of the solar spectrum and laid the basis for spectroscopy. The spectrograph consists of a slit, a collimator, a prism for dispersing the light, and a focusing lens. The collimator is an optical device that produces parallel rays from a focal plane source—i.e., it gives the appearance that the source is located at an infinite distance. The spectrograph enables astronomers to analyze the chemical composition of planetary atmospheres, stars, nebulae, and other celestial objects. A bright line in the spectrum indicates the presence of a glowing gas radiating at a wavelength characteristic of the chemical element in the gas. A dark line in the spectrum usually means that a cooler gas has intervened and absorbed the lines of the element characteristic of the intervening material. The lines may be displaced to either the red end or the blue end of the spectrum. This effect was first noted in 1842 by the Austrian physicist Christian Johann Doppler. When a light source is approaching, the lines are shifted toward the blue end of the spectrum, and when the source is receding, the lines are shifted toward its red end. This effect, known as the Doppler effect, permits astronomers to study the relative motions of celestial objects with respect to Earth’s motion.
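For velocities small compared with that of light, the shift obeys Δλ/λ = v/c. A minimal sketch of the arithmetic, using the hydrogen H-alpha line as a worked example (the observed wavelength is invented for illustration):

```python
C_KM_S = 299792.458  # speed of light in km/s

def radial_velocity_km_s(rest_nm, observed_nm):
    """Non-relativistic Doppler shift: v = c * (obs - rest) / rest.
    Positive means receding (redshift); valid only for v << c."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# H-alpha rest wavelength 656.28 nm, observed at 656.50 nm:
v = radial_velocity_km_s(656.28, 656.50)
print(f"{v:+.1f} km/s")   # about +100 km/s, i.e. the source is receding
```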

The slit of the spectrograph is placed at the focal plane of the telescope. The resulting spectrum may be recorded photographically or with some kind of electronic detector, such as a photomultiplier tube, CCD, or CID. If no recording device is used, then the optical device is technically referred to as a spectroscope.

Photomultiplier tubes

The photomultiplier tube is an enhanced version of the photocell, which was first used by astronomers to record data electronically. The photocell contains a photosensitive surface that generates an electric current when struck by light from a celestial source. The photosensitive surface is positioned just behind the focus. A diaphragm of very small aperture is usually placed in the focal plane to eliminate as much of the background light of the sky as possible. A small lens is used to focus the focal plane image on the photosensitive surface, which, in the case of a photomultiplier tube, is referred to as the photocathode. In the photomultiplier tube a series of electron-multiplying plates, called dynodes, is arranged geometrically to amplify the electron stream; amplifications of a million or more are frequently achieved by this process.
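Because each plate multiplies the electron stream arriving from the previous one, the overall gain compounds geometrically: it is the per-stage yield raised to the power of the number of stages. A minimal sketch (the figures of 4 secondary electrons per stage and 10 stages are illustrative):

```python
def pmt_gain(per_stage_yield, n_stages):
    """Overall photomultiplier gain: each stage multiplies the electron
    stream, so the total is yield ** n_stages."""
    return per_stage_yield ** n_stages

# Ten multiplying stages, each releasing ~4 electrons per incident electron:
print(f"{pmt_gain(4, 10):,}")   # 1,048,576 -- roughly a millionfold gain
```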

The photomultiplier tube has a distinct advantage over the photographic plate. With the photographic plate the relationship between the brightness of the celestial source and its registration on the plate is not linear. In the case of the photomultiplier tube, however, the release of electrons in the tube is directly proportional to the intensity of light from the celestial source. This linear relationship is very useful for working over a wide range of brightness. A disadvantage of the photomultiplier tube is that only one object can be recorded at a time. The output from such a device is sent to a recorder or digital storage device to produce a permanent record.

Charge-coupled devices

The charge-coupled device (CCD) uses a light-sensitive material on a silicon chip to electronically detect photons in a way similar to the photomultiplier tube. The principal difference is that the chip also contains integrated microcircuitry required to transfer the detected signal along a row of discrete picture elements (or pixels) and thereby scan a celestial object or objects very rapidly. When individual pixels are arranged simply in a single row, the detector is referred to as a linear array. When the pixels are arranged in rows and columns, the assemblage is called a two-dimensional array.

Pixels can be assembled in various sizes and shapes. The Hubble Space Telescope has a CCD detector with a 1,600 × 1,600 pixel array, made up of four 800 × 800 pixel arrays mosaicked together. The sensitivity of a CCD is about 100 times that of a photographic plate, so a CCD can quickly scan objects such as planets, nebulae, and star clusters and record the desired data. Another feature of the CCD is that the detector material may be altered to provide more sensitivity at different wavelengths. Thus, some detectors are more sensitive in the blue region of the spectrum than in the red region.
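The mosaicking mentioned above is easy to picture in code. The NumPy sketch below assembles four 800 × 800 arrays into a single 1,600 × 1,600 frame; the Poisson-distributed values merely stand in for real pixel counts.

```python
import numpy as np

# Four 800 x 800 readouts standing in for the quadrants of a detector:
rng = np.random.default_rng(0)
quads = [rng.poisson(100, (800, 800)) for _ in range(4)]

# Mosaic them into one two-dimensional array:
frame = np.block([[quads[0], quads[1]],
                  [quads[2], quads[3]]])
print(frame.shape)   # (1600, 1600)
```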

Today, most large observatories use CCDs to record data electronically. Another similar device, the charge injection device, is sometimes employed. The basic difference between the CID and the CCD is in the way the electric charge is transferred before it is recorded; however, the two devices may be used interchangeably as far as astronomical work is concerned.

Impact of technological developments:

Computers

Besides the telescope itself, the electronic computer has become the astronomer’s most important tool. Indeed, the computer has revolutionized the use of the telescope to the point where the collection of observational data is now completely automated. The astronomer need only identify the object to be observed, and the rest is carried out by the computer and auxiliary electronic equipment.

A telescope can be set to observe automatically by means of electronic sensors appropriately placed on the telescope axis. Precise quartz or atomic clocks send signals to the computer, which in turn activates the telescope sensors to collect data at the proper time. The computer not only makes possible more efficient use of telescope time but also permits a more detailed analysis of the data collected than could have been done manually. Data analysis that would have taken a lifetime or longer to complete with a mechanical calculator can now be done within hours or even minutes with a high-speed computer.

Improved means of recording and storing computer data also have contributed to astronomical research. Optical disc data-storage technology, such as the CD-ROM (compact disc read-only memory) or the DVD-ROM (digital video disc read-only memory), has provided astronomers with the ability to store and retrieve vast amounts of telescopic and other astronomical data.

Rockets and spacecraft

The quest for new knowledge about the universe has led astronomers to study electromagnetic radiation other than just visible light. Such forms of radiation, however, are blocked for the most part by Earth’s atmosphere, and so their detection and analysis can only be achieved from above this gaseous envelope.

During the late 1940s, single-stage sounding rockets were sent up to 160 km (100 miles) or more to explore the upper layers of the atmosphere. From 1957, more sophisticated multistage rockets were launched as part of the International Geophysical Year. These rockets carried artificial satellites equipped with a variety of scientific instruments. Beginning in 1959, the Soviet Union and the United States, engaged in a “space race,” intensified their efforts and launched a series of robotic probes to explore the Moon. Lunar exploration culminated with the first crewed landing on the Moon, by the U.S. Apollo 11 astronauts on July 20, 1969. Numerous other U.S. and Soviet spacecraft were sent to further study the lunar environment until the mid-1970s. Lunar exploration revived in the early years of the 21st century with the United States, China, Japan, and India all sending robotic probes to the Moon.

Starting in the early 1960s, both the United States and the Soviet Union launched a multitude of robotic deep-space probes to learn more about the other planets and satellites of the solar system. Carrying television cameras, detectors, and an assortment of other instruments, these probes sent back impressive amounts of scientific data and close-up pictures. Among the most successful missions were those involving the U.S. Messenger flybys of Mercury (2008–15), the Soviet Venera probes to Venus (1967–83), the U.S. Mars Exploration Rover landings on Mars (2004–18), and the U.S. Voyager 2 flybys of Jupiter, Saturn, Uranus, and Neptune (1979–89). When the Voyager 2 probe flew past Neptune and its moons in August 1989, every known major planet had been explored by spacecraft. Many long-held views, particularly those about the outer planets, were altered by the findings of the Voyager probe. These findings included the discovery of several rings and six additional satellites around Neptune, all of which are undetectable to ground-based telescopes.

Specially instrumented spacecraft have enabled astronomers to investigate other celestial phenomena as well. The Orbiting Solar Observatories and Solar Maximum Mission (Earth-orbiting U.S. satellites equipped with ultraviolet detector systems) have provided a means for studying solar activity. Another example is the Giotto probe of the European Space Agency, which enabled astronomers to obtain detailed photographs of the nucleus of Halley’s Comet during its 1986 passage.

Additional Information

A telescope is a tool that astronomers use to see faraway objects. Most telescopes, and all large telescopes, work by using curved mirrors to gather and focus light from the night sky.

The first telescopes focused light by using pieces of curved, clear glass, called lenses. So why do we use mirrors today? Because mirrors are lighter, and they are easier than lenses to make perfectly smooth.

The mirrors or lenses in a telescope are called the “optics.” Really powerful telescopes can see very dim things and things that are really far away. To do that, the optics—be they mirrors or lenses—have to be really big.

The bigger the mirrors or lenses, the more light the telescope can gather. Light is then concentrated by the shape of the optics. That light is what we see when we look into the telescope.

The optics of a telescope must be almost perfect. That means the mirrors and lenses have to be just the right shape to concentrate the light. They can’t have any spots, scratches or other flaws. If they do have such problems, the image gets warped or blurry and is difficult to see. It’s hard to make a perfect mirror, but it’s even harder to make a perfect lens.

Lenses

A telescope made with lenses is called a refracting telescope.

A lens, just like in eyeglasses, bends light passing through it. In eyeglasses, this makes things less blurry. In a telescope, it makes faraway things seem closer.

People with especially poor eyesight need thick lenses in their glasses. Big, thick lenses are more powerful. The same is true for telescopes. If you want to see far away, you need a big powerful lens. Unfortunately, a big lens is very heavy.

Heavy lenses are hard to make and difficult to hold in the right place. Also, as they get thicker the glass stops more of the light passing through them.

Because the light is passing through the lens, the surface of the lens has to be extremely smooth. Any flaws in the lens will change the image. It would be like looking through a dirty window.

Why Mirrors Work Better

A telescope that uses mirrors is called a reflecting telescope.

Unlike a lens, a mirror can be very thin. A bigger mirror does not also have to be thicker. Light is concentrated by bouncing off of the mirror. So the mirror just has to have the right curved shape.

It is much easier to make a large, near-perfect mirror than to make a large, near-perfect lens. Also, since mirrors are one-sided, they are easier than lenses to clean and polish.

But mirrors have their own problems. Have you ever looked into a spoon and noticed your reflection is upside down? The curved mirror in a telescope is like a spoon: It flips the image. Luckily, the solution is simple. We just use other mirrors to flip it back.

The number-one benefit of using mirrors is that they’re not heavy. Since they are much lighter than lenses, mirrors are a lot easier to launch into space.

Space telescopes such as the Hubble Space Telescope and the Spitzer Space Telescope have allowed us to capture views of galaxies and nebulas far away from our own solar system. Launched in December 2021, the James Webb Space Telescope is the largest, most powerful space telescope ever built. It allows scientists to look at what our universe was like about 200 million years after the Big Bang.

#23 Science HQ » Convex Mirror » 2026-02-17 16:51:08

Jai Ganesh
Replies: 0

Convex Mirror

Gist

A convex mirror is a spherical mirror with a reflecting surface that bulges outward, causing it to diverge light rays and provide a wider field of view. This makes it ideal for rearview mirrors and security surveillance, where it forms upright, virtual, and diminished (smaller) images.

Convex mirrors are used as rear-view mirrors in vehicles because they provide a wider field of view. This helps drivers see more traffic and reduces blind spots, improving safety on roads. A key reason is that they form erect, virtual, and diminished images.

Summary

Convex Mirror is a curved mirror where the reflective surface bulges out toward the light source. This bulging-out surface reflects light outwards and is not used to focus light. These mirrors form a virtual image because the focal point (F) and the centre of curvature (2F) are imaginary points "inside" the mirror that cannot be reached. As a result, the images formed cannot be projected on a screen, since the image is inside the mirror. The image looks smaller than the object from a distance but gets larger as the object gets closer to the mirror.

Uses of Convex Mirror

* Convex mirrors are often used in buildings’ hallways, including stores, schools, hospitals, hotels and apartment buildings.
* They are used in driveways, roads, and alleys to provide safety to all the bikers and motorists at curves and turns and other places where there is a lack of visibility.
* They are also used in some automated teller machines as a handy security feature that allows users to see what is happening behind them.
* They are used as the passenger-side mirror on a car, where the mirror is sometimes labelled “Objects in mirror are closer than they appear” to warn the driver.

Details

A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (F) and the centre of curvature (2F) are both imaginary points "inside" the mirror that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.

A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex mirror, since the normal to the surface differs at each spot on the mirror.

Uses

The passenger-side mirror on a car is typically a convex mirror. In some countries, these are labeled with the safety warning "Objects in mirror are closer than they appear", to warn the driver of the convex mirror's distorting effects on distance perception. Convex mirrors are preferred in vehicles because they give an upright (not inverted), though diminished (smaller), image and because they provide a wider field of view as they are curved outwards.

These mirrors are often found in the hallways of various buildings (commonly known as "hallway safety mirrors"), including hospitals, hotels, schools, stores, and apartment buildings. They are usually mounted on a wall or ceiling where hallways intersect each other, or where they make sharp turns. They are useful for people to look at any obstruction they will face on the next hallway or after the next turn. They are also used on roads, driveways, and alleys to provide safety for road users where there is a lack of visibility, especially at curves and turns.

Convex mirrors are used in some automated teller machines as a simple and handy security feature, allowing the users to see what is happening behind them. Similar devices are sold to be attached to ordinary computer monitors. Convex mirrors make everything seem smaller but cover a larger area of surveillance.

Round convex mirrors called Oeil de Sorcière (French for "sorcerer's eye") were a popular luxury item from the 15th century onwards, shown in many depictions of interiors from that time. With 15th century technology, it was easier to make a regular curved mirror (from blown glass) than a perfectly flat one. They were also known as "bankers' eyes" because their wide field of vision was useful for security. Famous examples in art include the Arnolfini Portrait by Jan van Eyck and the left wing of the Werl Altarpiece by Robert Campin.

Image

The image on a convex mirror is always virtual (rays haven't actually passed through the image; their extensions do, as in a regular mirror), diminished (smaller), and upright (not inverted). As the object gets closer to the mirror, the image gets larger, until it approaches the size of the object as it touches the mirror. As the object moves away, the image diminishes in size and gets gradually closer to the focus, until it is reduced to a point at the focus when the object is at an infinite distance. These features make convex mirrors very useful: since everything appears smaller in the mirror, they cover a wider field of view than a normal plane mirror, making them useful for watching for cars behind a driver's car on a road, monitoring a wider area for surveillance, and so on.
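These properties follow directly from the mirror equation, 1/f = 1/d_o + 1/d_i, under the common sign convention that a convex mirror has a negative focal length and a negative (virtual) image distance. A minimal sketch (the 20-cm focal length and the object distances are illustrative):

```python
def convex_mirror_image(object_dist_cm, focal_length_cm):
    """Mirror equation 1/f = 1/d_o + 1/d_i with f < 0 for a convex
    mirror. Returns the image distance (negative = virtual, behind
    the mirror) and the magnification m = -d_i / d_o."""
    f = -abs(focal_length_cm)
    d_i = 1.0 / (1.0 / f - 1.0 / object_dist_cm)
    return d_i, -d_i / object_dist_cm

for d_o in (100.0, 20.0, 5.0):        # object approaching the mirror
    d_i, m = convex_mirror_image(d_o, 20.0)
    print(f"object {d_o:5.1f} cm -> image {d_i:6.2f} cm, m = {m:+.2f}")
# Every image is virtual (d_i < 0), upright (m > 0), and diminished
# (0 < m < 1), growing as the object approaches, as described above.
```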

Additional Information:

Introduction

A mirror is a smooth surface that shows images of the objects near it. Most mirrors are a sheet of glass with a shiny metallic coating on the back.

Reflection

The appearance of an image in a mirror is called a reflection. Reflection happens when light hits a surface. If the light cannot pass through the surface, it bounces off, or reflects. Most surfaces absorb some light and reflect some light. Mirrors, however, reflect almost all the light that hits them. The metallic coating on the back causes the reflection.

When you stand in front of a mirror, your body reflects patterns of light to the mirror. Those patterns of light bounce off the mirror and go back to your eyes. Your brain then interprets, or reads, the patterns of light as an image of yourself in the mirror.

Types of Mirrors

Most mirrors are flat. They are called plane mirrors. Images in a plane mirror are reversed. For example, if you raise your right hand while looking in a mirror, you will appear to raise your left hand. People use plane mirrors to check their appearance.

Other mirrors are curved. Convex mirrors curve outward, like a dome. They make objects appear reversed and smaller than their actual size. Concave mirrors curve inward, like a bowl. At a distance, they make objects appear upside down. Nearby, however, objects appear right side up and larger than their actual size.

How Mirrors Are Made

Mirrors are made in factories with special machinery. First, a sheet of glass is polished smooth and cleaned. Next, the back of the glass is covered with a thin layer of silver, aluminum, or another metal. Then the metal is covered with copper, varnish, or paint to protect it from scratches.

11-3-convex-mirror-parallel.png?revision=1&size=bestfit&width=651&height=473

#24 Dark Discussions at Cafe Infinity » Come Quotes - VII » 2026-02-17 16:08:24

Jai Ganesh
Replies: 0

Come Quotes - VII

1. Many people take no care of their money till they come nearly to the end of it, and others do just the same with their time. - Johann Wolfgang von Goethe

2. The vegetable life does not content itself with casting from the flower or the tree a single seed, but it fills the air and earth with a prodigality of seeds, that, if thousands perish, thousands may plant themselves, that hundreds may come up, that tens may live to maturity; that, at least one may replace the parent. - Ralph Waldo Emerson

3. If you come to fame not understanding who you are, it will define who you are. - Oprah Winfrey

4. Belief is a wise wager. Granted that faith cannot be proved, what harm will come to you if you gamble on its truth and it proves false? If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation, that He exists. - Blaise Pascal

5. Our greatness has always come from people who expect nothing and take nothing for granted - folks who work hard for what they have, then reach back and help others after them. - Michelle Obama

6. Peace is not a relationship of nations. It is a condition of mind brought about by a serenity of soul. Peace is not merely the absence of war. It is also a state of mind. Lasting peace can come only to peaceful people. - Jawaharlal Nehru

7. Trust has to be earned, and should come only after the passage of time. - Arthur Ashe

8. Living Life Tomorrow's fate, though thou be wise, Thou canst not tell nor yet surmise; Pass, therefore, not today in vain, For it will never come again. - Omar Khayyam

#25 Jokes » Grapefruit Jokes - II » 2026-02-17 15:49:27

Jai Ganesh
Replies: 0

Q: Why did the grapefruit fail his driving test?
A: It kept peeling out.
* * *
Q: Why did the grapefruit go to the doctor?
A: It wasn't peeling well.
* * *
Q: Did you hear about the spring training games that used fruits instead of baseballs?
A: They called it the "Grapefruit League".
* * *
Q: Why did the fruit bat eat the orange?
A: Because it had appeal.
* * *
Q: Why did the man lose his job at the grapefruit juice factory?
A: He couldn't concentrate!
* * *
