Math Is Fun Forum


#1 This is Cool » Transistor » Today 18:28:38

Jai Ganesh
Replies: 0

Transistor

Gist

A transistor is a semiconductor device that amplifies or switches electronic signals and power, acting as a fundamental building block of modern electronics, found in everything from smartphones to computers. It uses a small current or voltage at one terminal to control a much larger current flow between the other two, functioning like a tiny, fast electronic switch or amplifier.

A transistor is a semiconductor device that acts as either an electronic switch (turning current on/off) or an amplifier (boosting signals), controlling a larger current with a smaller one, forming the fundamental building block of modern electronics like computers, radios, and smartphones. 

Summary

A transistor is a semiconductor device used to amplify or switch electrical signals and power. It is one of the basic building blocks of modern electronics. It is composed of semiconductor material, usually with at least three terminals for connection to an electronic circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Some transistors are packaged individually, but many more in miniature form are found embedded in integrated circuits. Because transistors are the key active components in practically all modern electronics, many people consider them one of the 20th century's greatest inventions.

Physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor (FET) in 1925, but it was not possible to construct a working device at that time. The first working device was a point-contact transistor invented in 1947 by physicists John Bardeen, Walter Brattain, and William Shockley at Bell Labs who shared the 1956 Nobel Prize in Physics for their achievement. The most widely used type of transistor, the metal–oxide–semiconductor field-effect transistor (MOSFET), was invented at Bell Labs between 1955 and 1960. Transistors revolutionized the field of electronics and paved the way for smaller and cheaper radios, calculators, computers, and other electronic devices.

Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A field-effect transistor uses only one kind of charge carrier, whereas a bipolar junction transistor uses two kinds. Compared with the vacuum tube, transistors are generally smaller and require less power to operate. Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages, such as traveling-wave tubes and gyrotrons. Many types of transistors are made to standardized specifications by multiple manufacturers.

Details

A transistor is a semiconductor device for amplifying, controlling, and generating electrical signals. Transistors are the active components of integrated circuits, or “microchips,” which often contain billions of these minuscule devices etched into their shiny surfaces. Deeply embedded in almost everything electronic, transistors have become the nerve cells of the Information Age.

There are typically three electrical leads in a transistor, called the emitter, the collector, and the base—or, in modern switching applications, the source, the drain, and the gate. An electrical signal applied to the base (or gate) influences the semiconductor material’s ability to conduct electrical current, which flows between the emitter (or source) and collector (or drain) in most applications. A voltage source such as a battery drives the current, while the rate of current flow through the transistor at any given moment is governed by an input signal at the gate—much as a faucet valve is used to regulate the flow of water through a garden hose.

The first commercial applications for transistors were for hearing aids and “pocket” radios during the 1950s. With their small size and low power consumption, transistors were desirable substitutes for the vacuum tubes (known as “valves” in Great Britain) then used to amplify weak electrical signals and produce audible sounds. Transistors also began to replace vacuum tubes in the oscillator circuits used to generate radio signals, especially after specialized structures were developed to handle the higher frequencies and power levels involved. Low-frequency, high-power applications, such as power-supply inverters that convert alternating current (AC) into direct current (DC), have also been transistorized. Some power transistors can now handle currents of hundreds of amperes at electric potentials over a thousand volts.

By far the most common application of transistors today is for computer memory chips—including solid-state multimedia storage devices for electronic games, cameras, and MP3 players—and microprocessors, where millions of components are embedded in a single integrated circuit. Here the voltage applied to the gate electrode, generally a few volts or less, determines whether current can flow from the transistor’s source to its drain. In this case the transistor operates as a switch: if a current flows, the circuit involved is on, and if not, it is off. These two distinct states, the only possibilities in such a circuit, correspond respectively to the binary 1s and 0s employed in digital computers. Similar applications of transistors occur in the complex switching circuits used throughout modern telecommunications systems. The potential switching speeds of these transistors now are hundreds of gigahertz, or more than 100 billion on-and-off cycles per second.

Development of transistors

The transistor was invented in 1947–48 by three American physicists, John Bardeen, Walter H. Brattain, and William B. Shockley, at the American Telephone and Telegraph Company’s Bell Laboratories. The transistor proved to be a viable alternative to the electron tube and, by the late 1950s, supplanted the latter in many applications. Its small size, low heat generation, high reliability, and low power consumption made possible a breakthrough in the miniaturization of complex circuitry. During the 1960s and ’70s, transistors were incorporated into integrated circuits, in which a multitude of components (e.g., diodes, resistors, and capacitors) are formed on a single “chip” of semiconductor material.

Motivation and early radar research

Electron tubes are bulky and fragile, and they consume large amounts of power to heat their cathode filaments and generate streams of electrons; also, they often burn out after several thousand hours of operation. Electromechanical switches, or relays, are slow and can become stuck in the on or off position. For applications requiring thousands of tubes or switches, such as the nationwide telephone systems developing around the world in the 1940s and the first electronic digital computers, this meant constant vigilance was needed to minimize the inevitable breakdowns.

An alternative was found in semiconductors, materials such as silicon or germanium whose electrical conductivity lies midway between that of insulators such as glass and conductors such as aluminum. The conductive properties of semiconductors can be controlled by “doping” them with select impurities, and a few visionaries had seen the potential of such devices for telecommunications and computers. However, it was military funding for radar development in the 1940s that opened the door to their realization. The “superheterodyne” electronic circuits used to detect radar waves required a diode rectifier—a device that allows current to flow in just one direction—that could operate successfully at ultrahigh frequencies over one gigahertz. Electron tubes just did not suffice, and solid-state diodes based on existing copper-oxide semiconductors were also much too slow for this purpose.

Crystal rectifiers based on silicon and germanium came to the rescue. In these devices a tungsten wire was jabbed into the surface of the semiconductor material, which was doped with tiny amounts of impurities, such as boron or phosphorus. The impurity atoms assumed positions in the material’s crystal lattice, displacing silicon (or germanium) atoms and thereby generating tiny populations of charge carriers (such as electrons) capable of conducting usable electrical current. Depending on the nature of the charge carriers and the applied voltage, a current could flow from the wire into the surface or vice-versa, but not in both directions. Thus, these devices served as the much-needed rectifiers operating at the gigahertz frequencies required for detecting rebounding microwave radiation in military radar systems. By the end of World War II, millions of crystal rectifiers were being produced annually by such American manufacturers as Sylvania and Western Electric.

Innovation at Bell Labs

Executives at Bell Labs had recognized that semiconductors might lead to solid-state alternatives to the electron-tube amplifiers and electromechanical switches employed throughout the nationwide Bell telephone system. In 1936 the new director of research at Bell Labs, Mervin Kelly, began recruiting solid-state physicists. Among his first recruits was William B. Shockley, who proposed a few amplifier designs based on copper-oxide semiconductor materials then used to make diodes. With the help of Walter H. Brattain, an experimental physicist already working at Bell Labs, he even tried to fabricate a prototype device in 1939, but it failed completely. Semiconductor theory could not yet explain exactly what was happening to electrons inside these devices, especially at the interface between copper and its oxide. Compounding the difficulty of any theoretical understanding was the problem of controlling the exact composition of these early semiconductor materials, which were binary combinations of different chemical elements (such as copper and oxygen).

With the close of World War II, Kelly reorganized Bell Labs and created a new solid-state research group headed by Shockley. The postwar search for a solid-state amplifier began in April 1945 with Shockley’s suggestion that silicon and germanium semiconductors could be used to make a field-effect amplifier (see integrated circuit: Field-effect transistors). He reasoned that an electric field from a third electrode could increase the conductivity of a sliver of semiconductor material just beneath it and thereby allow usable current to flow through the sliver. But attempts to fabricate such a device by Brattain and others in Shockley’s group again failed. The following March, John Bardeen, a theoretical physicist whom Shockley had hired for his group, offered a possible explanation. Perhaps electrons drawn to the semiconductor surface by the electric field were blocking the penetration of this field into the bulk material, thereby preventing it from influencing the conductivity.

Bardeen’s conjecture spurred a basic research program at Bell Labs into the behaviour of these “surface-state” electrons. While studying this phenomenon in November 1947, Brattain stumbled upon a way to neutralize their blocking effect and permit the applied field to penetrate deep into the semiconductor material. Working closely together over the next month, Bardeen and Brattain invented the first successful semiconductor amplifier, called the point-contact transistor, on December 16, 1947. Similar to the World War II crystal rectifiers, this weird-looking device had not one but two closely spaced metal wires jabbing into the surface of a semiconductor—in this case, germanium. The input signal on one of these wires (the emitter) boosted the conductivity of the germanium beneath both of them, thus modulating the output signal on the other wire (the collector). Observers present at a demonstration of this device the following week could hear amplified voices in the earphones that it powered. Shockley later called this invention a “magnificent Christmas present” for the farsighted company, which had supported the research program that made this breakthrough.

Not to be outdone by members of his own group, Shockley conceived yet another way to fabricate a semiconductor amplifier the very next month, on January 23, 1948. His junction transistor was basically a three-layer sandwich of germanium or silicon in which the adjacent layers would be doped with different impurities to induce distinct electrical characteristics. An input signal entering the middle layer—the “meat” of the semiconductor sandwich—determined how much current flowed from one end of the device to the other under the influence of an applied voltage. Shockley’s device is often called the bipolar junction transistor because its operation requires that the negatively charged electrons and their positively charged counterparts (the holes corresponding to an absence of electrons in the crystal lattice) coexist briefly in the presence of one another.

The name transistor, a combination of transfer and resistor, was coined for these devices in May 1948 by Bell Labs electrical engineer John Robinson Pierce, who was also a science-fiction author in his spare time. A month later Bell Labs announced the revolutionary invention in a press conference held at its New York City headquarters, heralding Bardeen, Brattain, and Shockley as the three coinventors of the transistor. The three were eventually awarded the Nobel Prize for Physics for their invention.

Although the point-contact transistor was the first transistor invented, it faced a difficult gestation period and was eventually used only in a switch made for the Bell telephone system. Manufacturing them reliably and with uniform operating characteristics proved a daunting problem, largely because of hard-to-control variations in the metal-to-semiconductor point contacts.

Shockley had foreseen these difficulties in the process of conceiving the junction transistor, which he figured would be much easier to manufacture. But it still required more than three years, until mid-1951, to resolve its own development problems. Bell Labs scientists, engineers, and technicians first had to find ways to make ultrapure germanium and silicon, form large crystals of these elements, dope them with narrow layers of the required impurities, and attach delicate wires to these layers to serve as electrodes. In July 1951 Bell Labs announced the successful invention and development of the junction transistor, this time with only Shockley in the spotlight.

Commercialization

Commercial transistors began to roll off production lines during the 1950s, after Bell Labs licensed the technology of their production to other companies, including General Electric, Raytheon, RCA, Sylvania, and Transitron Electronics. Transistors found ready applications in lightweight devices such as hearing aids and portable radios. Texas Instruments Inc., working with the Regency Division of Industrial Development Engineering Associates, manufactured the first transistor radio in late 1954. Selling for $49.95, the Regency TR-1 employed four germanium junction transistors in a multistage amplifier of radio signals. The very next year a new Japanese company, Sony, introduced its own transistor radio and began to corner the market for this and other transistorized consumer electronics.

Transistors also began replacing vacuum tubes in the digital computers manufactured by IBM, Control Data, and other companies. “It seems to me that in these robot brains the transistor is the ideal nerve cell,” Shockley had observed in a 1949 radio interview. “The advantage of the transistor is that it is inherently a small-size and low-power device,” noted Bell Labs circuit engineer Robert Wallace early in the 1950s. “This means you can pack a large number of them in a small space without excessive heat generation and achieve low propagation delays. And that’s what you need for logic applications. The significance of the transistor is not that it can replace the tube but that it can do things the vacuum tube could never do!” After 1955 IBM started purchasing germanium transistors from Texas Instruments to employ in its computer circuits. By the end of the 1950s, bipolar junction transistors had almost completely replaced electron tubes in computer applications.

Silicon transistors

During the 1950s, meanwhile, scientists and engineers at Bell Labs and Texas Instruments were developing advanced technologies needed to produce silicon transistors. Because of its higher melting temperature and greater reactivity, silicon was much more difficult to work with than germanium, but it offered major prospects for better performance, especially in switching applications. Germanium transistors make leaky switches; substantial leakage currents can flow when these devices are supposedly in their off state. Silicon transistors have far less leakage. In 1954 Texas Instruments produced the first commercially available silicon junction transistors and quickly dominated this new market—especially for military applications, in which their high cost was of little concern.

In the mid-1950s Bell Labs focused its transistor-development efforts around new diffusion technologies, in which very narrow semiconductor layers—with thicknesses measured in microns, or millionths of a metre—are prepared by diffusing impurity atoms into the semiconductor surface from a hot gas. Inside a diffusion furnace the impurity atoms penetrate more readily into the silicon or germanium surface; their penetration depth is controlled by varying the density, temperature, and pressure of the gas as well as the processing time. (See integrated circuit: Fabricating ICs.) For the first time, diodes and transistors produced by these diffusion implantation processes functioned at frequencies above 100 megahertz (100 million cycles per second). These diffused-base transistors could be used in receivers and transmitters for FM radio and television, which operate at such high frequencies.

Another important breakthrough occurred at Bell Labs in 1955, when Carl Frosch and Link Derick developed a means of producing a glassy silicon dioxide outer layer on the silicon surface during the diffusion process. This layer offered transistor producers a promising way to protect the silicon underneath from further impurities once the diffusion process was finished and the desired electrical properties had been established.

Texas Instruments, Fairchild Semiconductor Corporation, and other companies took the lead in applying these diffusion technologies to the large-scale manufacture of transistors. At Fairchild, physicist Jean Hoerni developed the planar manufacturing process, whereby the various semiconductor layers and their sensitive interfaces are embedded beneath a protective silicon dioxide outer layer. The company was soon making and selling planar silicon transistors, largely for military applications. Led by Robert Noyce and Gordon E. Moore, Fairchild’s scientists and engineers extended this revolutionary technique to the manufacture of integrated circuits.

In the late 1950s Bell Labs researchers developed ways to use the new diffusion technologies to realize Shockley’s original 1945 idea of a field-effect transistor (FET). To do so, they had to overcome the problem of surface-state electrons, which would otherwise have blocked external electric fields from penetrating into the semiconductor. They succeeded by carefully cleaning the silicon surface and growing a very pure silicon dioxide layer on it. This approach reduced the number of surface-state electrons at the interface between the silicon and oxide layers, permitting fabrication of the first successful field-effect transistor in 1960 at Bell Labs—which, however, did not pursue its development any further.

Refinements of the FET design by other companies, especially RCA and Fairchild, resulted in the metal-oxide-semiconductor field-effect transistor (MOSFET) during the early 1960s. The key problems to be solved were the stability and reliability of these MOS transistors, which relied upon interactions occurring at or near the sensitive silicon surface rather than deep inside. The two firms began to make MOS transistors commercially available in late 1964.

In early 1963 Frank Wanlass at Fairchild developed the complementary MOS (CMOS) transistor circuit, based on a pair of MOS transistors. This approach eventually proved ideal for use in integrated circuits because of its simplicity of production and very low power dissipation during standby operation. Stability problems continued to plague MOS transistors, however, until researchers at Fairchild developed solutions in the mid-1960s. By the end of the decade, MOS transistors were beginning to displace bipolar junction transistors in microchip manufacturing. Since the late 1980s CMOS has been the technology of choice for digital applications, while bipolar transistors are now used primarily for analog and microwave devices.

Transistor principles

The operation of junction transistors, as well as most other semiconductor devices, depends heavily on the behaviour of electrons and holes at the interface between two dissimilar layers, known as a p-n junction. Discovered in 1940 by Bell Labs electrochemist Russell Ohl, p-n junctions are formed by adding two different impurity elements to adjacent regions of germanium or silicon. The addition of these impurity elements is called doping. Atoms of elements from Group 15 of the periodic table (which possess five valence electrons), such as phosphorus or arsenic, contribute an electron that has no natural resting place within the crystal lattice. These excess electrons are therefore loosely bound and relatively free to roam about, acting as charge carriers that can conduct electrical current. Atoms of elements from Group 13 (which have three valence electrons), such as boron or aluminum, induce a deficit of electrons when added as impurities, effectively creating “holes” in the lattice. These positively charged quantum mechanical entities are also fairly free to roam around and conduct electricity. Under the influence of an electric field, the electrons and holes move in opposite directions. During and immediately after World War II, chemists and metallurgists at Bell Labs perfected techniques of adding impurities to high-purity silicon and germanium to induce the desired electron-rich layer (known as the n-layer) and the electron-poor layer (known as the p-layer) in these semiconductors, as described in the section Development of transistors.

A p-n junction acts as a rectifier, similar to the old point-contact crystal rectifiers, permitting easy flow of current in only a single direction. If no voltage is applied across the junction, electrons and holes will gather on opposite sides of the interface to form a depletion layer that will act as an insulator between the two sides. A negative voltage applied to the n-layer will drive the excess electrons within it toward the interface, where they will combine with the positively charged holes attracted there by the electric field. Current will then flow easily. If instead a positive voltage is applied to the n-layer, the resulting electric field will draw electrons away from the interface, so combinations of them with holes will occur much less often. In this case current will not flow (other than tiny leakage currents). Thus, electricity will flow in only one direction through a p-n junction.
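
To make this one-way behaviour concrete, here is a minimal numerical sketch in Python based on the ideal (Shockley) diode equation, I = Is*(exp(V/VT) - 1). The saturation current used below is an assumed illustrative value, not a measured one.

import math

# Ideal (Shockley) diode equation for a p-n junction: I = Is * (exp(V/VT) - 1).
# Is is an assumed, illustrative leakage current; VT is the thermal voltage
# kT/q at roughly room temperature.
V_T = 0.0259      # thermal voltage in volts at ~300 K
I_SAT = 1e-12     # assumed reverse saturation (leakage) current, amperes

def junction_current(voltage_v):
    """Current through an idealized p-n junction at a given applied voltage."""
    return I_SAT * (math.exp(voltage_v / V_T) - 1.0)

for v in (-1.0, -0.5, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {junction_current(v):+.3e} A")

# Under reverse bias the current is pinned near -1e-12 A (tiny leakage), while
# a forward bias of about 0.6-0.7 V lets milliamperes and more flow: the
# junction conducts easily in only one direction.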

Junction transistors

Shortly after his colleagues John Bardeen and Walter H. Brattain invented their point-contact device, Bell Labs physicist William B. Shockley recognized that these rectifying characteristics might also be used in making a junction transistor. In a 1949 paper Shockley explained the physical principles behind the operation of these junctions and showed how to use them in a three-layer—n-p-n or p-n-p—device that could act as a solid-state amplifier or switch. Electric current would flow from one end to the other, with the voltage applied to the inner layer governing how much current rushed by at any given moment. In the n-p-n junction transistor, for example, electrons would flow from one n-layer through the inner p-layer to the other n-layer. Thus, a weak electrical signal applied to the inner, base layer would modulate the current flowing through the entire device. For this current to flow, some of the electrons would have to survive briefly in the presence of holes; in order to reach the second n-layer, they could not all combine with holes in the p-layer. Such bipolar operation was not at all obvious when Shockley first conceived his junction transistor. Experiments with increasingly pure crystals of silicon and germanium showed that it indeed occurred, making bipolar junction transistors possible.

To achieve bipolar operation, it also helps that the base layer be narrow, so that electrons (in n-p-n transistors) and holes (in p-n-p) do not have to travel very far in the presence of their opposite numbers. Narrow base layers also promote high-frequency operation of junction transistors: the narrower the base, the higher the operating frequency. That is a major reason why there was so much interest in developing diffused-base transistors during the 1950s, as described in the section Silicon transistors. Their microns-thick bases permitted transistors to operate above 100 megahertz (100 million cycles per second) for the first time.
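
As a rough illustration of why a thinner base means a faster transistor, the sketch below estimates the transit-time-limited frequency of an idealized junction transistor. The diffusion constant is an assumed textbook-style value for electrons in silicon, and real devices are limited by several other effects, so the numbers are order-of-magnitude only.

import math

D_N = 35.0  # assumed electron diffusion constant in silicon, cm^2/s

def transit_limited_frequency(base_width_cm):
    """Crude frequency limit set by minority-carrier transit across the base."""
    tau = base_width_cm ** 2 / (2.0 * D_N)   # base transit time, seconds
    return 1.0 / (2.0 * math.pi * tau)       # corresponding frequency, hertz

for width_um in (10.0, 3.0, 1.0):
    f = transit_limited_frequency(width_um * 1e-4)   # micrometres -> centimetres
    print(f"base width {width_um:4.1f} um  ->  roughly {f / 1e6:7.0f} MHz")

# Narrowing the base from ~10 um to ~1 um raises the limit a hundredfold, which
# is why micron-scale diffused bases pushed operation past 100 megahertz.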

MOS-type transistors

A similar principle applies to metal-oxide-semiconductor (MOS) transistors, but here it is the distance between source and drain that largely determines the operating frequency. In an n-channel MOS (NMOS) transistor, for example, the source and the drain are two n-type regions that have been established in a piece of p-type semiconductor, usually silicon. Except for the two points at which metal leads contact these regions, the entire semiconductor surface is covered by an insulating oxide layer. The metal gate, usually aluminum, is deposited atop the oxide layer just above the gap between source and drain. If there is no voltage (or a negative voltage) upon the gate, the semiconductor material beneath it will contain excess holes, and very few electrons will be able to cross the gap, because one of the two p-n junctions will block their path. Therefore, no current will flow in this configuration—other than unavoidable leakage currents. If the gate voltage is instead positive, an electric field will penetrate through the oxide layer and attract electrons into the silicon layer (often called the inversion layer) directly beneath the gate. Once this voltage exceeds a specific threshold value, electrons will begin flowing easily between source and drain. The transistor turns on.
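
Here is a minimal sketch of this threshold behaviour, using the textbook square-law model of an idealized n-channel MOSFET in saturation; the threshold voltage and transconductance parameter below are assumed purely for illustration.

V_TH = 0.7    # assumed threshold voltage, volts
K_N = 2e-3    # assumed transconductance parameter mu_n * C_ox * (W/L), A/V^2

def nmos_drain_current(v_gs):
    """Saturation-region drain current of an idealized NMOS transistor."""
    if v_gs <= V_TH:
        return 0.0                            # below threshold: switch is off
    return 0.5 * K_N * (v_gs - V_TH) ** 2     # above threshold: electrons flow

for v_gate in (0.0, 0.5, 1.0, 2.0, 3.0):
    i_d = nmos_drain_current(v_gate)
    print(f"gate voltage {v_gate:.1f} V  ->  drain current {i_d * 1e3:.3f} mA")

# Below the ~0.7 V threshold no current flows (a real device passes only
# leakage); above it the transistor turns on and conducts between source and
# drain, which is exactly the switch behaviour used for binary logic.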

Analogous behaviour occurs in a p-channel MOS transistor, in which the source and the drain are p-type regions formed in n-type semiconductor material. Here a negative voltage above a threshold induces a layer of holes (instead of electrons) beneath the gate and permits a current of them to flow from source to drain. For both n-channel and p-channel MOS (also called NMOS and PMOS) transistors, the operating frequency is largely governed by the speed at which the electrons or holes can drift through the semiconductor material divided by the distance from source to drain. Because electrons have mobilities through silicon that are about three times higher than holes, NMOS transistors can operate at substantially higher frequencies than PMOS transistors. Small separations between source and drain also promote high-frequency operation, and extensive efforts have been devoted to reducing this distance.
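
The frequency argument can be put into rough numbers with a simple drift-transit estimate, sketched below. The mobilities and field are assumed low-field textbook figures for silicon; real transistors are further limited by velocity saturation and parasitic capacitance, so these are order-of-magnitude values only.

MU_N = 1350.0     # assumed electron mobility in silicon, cm^2/(V*s)
MU_P = 480.0      # assumed hole mobility in silicon, cm^2/(V*s)
E_FIELD = 1e3     # assumed lateral electric field along the channel, V/cm
L_CHANNEL = 1e-4  # a 1-micrometre source-to-drain separation, in centimetres

def transit_frequency(mobility_cm2_per_vs):
    """Rough frequency limit: carrier drift speed divided by channel length."""
    drift_speed = mobility_cm2_per_vs * E_FIELD   # cm/s
    return drift_speed / L_CHANNEL                # Hz

print(f"NMOS (electrons): ~{transit_frequency(MU_N) / 1e9:.1f} GHz")
print(f"PMOS (holes):     ~{transit_frequency(MU_P) / 1e9:.1f} GHz")

# The roughly threefold higher electron mobility gives the NMOS device a
# roughly threefold higher frequency limit than the PMOS device for the same
# source-to-drain distance.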

In the 1960s Frank Wanlass of Fairchild Semiconductor recognized that combinations of an NMOS and a PMOS transistor would draw extremely little current in standby operation—just the tiny, unavoidable leakage currents. These CMOS, or complementary metal-oxide-semiconductor, transistor circuits consume significant power only when the gate voltage exceeds some threshold and a current flows from source to drain. Thus, they can serve as very low-power devices, often a million times lower than the equivalent bipolar junction transistors. Together with their inherent simplicity of fabrication, this feature of CMOS transistors has made them the natural choice for manufacturing microchips, which today cram millions of transistors into a surface area smaller than a fingernail. In such cases the waste heat generated by the component’s power consumption must be kept to an absolute minimum, or the chips will simply melt.
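
The near-zero standby current can be seen in a switch-level sketch of a CMOS inverter: for either input level, exactly one of the two complementary transistors is off, so there is no static path from the supply to ground. This is an idealized logic-level model, not a circuit simulation.

def cmos_inverter(input_high):
    """Idealized CMOS inverter: returns (output_high, static_current_path)."""
    pmos_on = not input_high           # the PMOS transistor conducts when the gate is low
    nmos_on = input_high               # the NMOS transistor conducts when the gate is high
    output_high = pmos_on              # output is pulled to the supply only via the PMOS
    static_path = pmos_on and nmos_on  # both on at once would short supply to ground
    return output_high, static_path

for level in (False, True):
    out, path = cmos_inverter(level)
    print(f"input={int(level)}  output={int(out)}  static current path: {path}")

# In both steady states static_path is False, so only leakage current flows;
# significant power is dissipated only during the brief moments of switching.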

Field-effect transistors

Another kind of unipolar transistor, called the metal-semiconductor field-effect transistor (MESFET), is particularly well suited for microwave and other high-frequency applications because it can be manufactured from semiconductor materials with high electron mobilities that do not support an insulating oxide surface layer. These include compound semiconductors such as germanium-silicon and gallium arsenide. A MESFET is built much like a MOS transistor but with no oxide layer between the gate and the underlying conduction channel. Instead, the gate makes a direct, rectifying contact with the channel, which is generally a thin layer of n-type semiconductor supported underneath by an insulating substrate. A negative voltage on the gate induces a depletion layer just beneath it that restricts the flow of electrons between source and drain. The device acts like a voltage-controlled resistor; if the gate voltage is large enough, it can block this flow almost completely. By contrast, a positive voltage on the gate encourages electrons to traverse the channel.

To improve MESFET performance even further, advanced devices known as heterojunction field-effect transistors have been developed, in which p-n junctions are established between two slightly dissimilar semiconductor materials, such as gallium arsenide and aluminum gallium arsenide. By properly controlling the impurities in the two substances, a high-conductivity channel can be formed at their interface, promoting the flow of electrons through it. If one semiconductor is a high-purity material, its electron mobility can be large, resulting in a high operating frequency for this kind of transistor. (The electron mobility of gallium arsenide, for example, is five times that of silicon.) Heterojunction MESFETs are increasingly used for microwave applications such as cellular telephone systems.

Transistors and Moore’s law

In 1965, four years after Fairchild Semiconductor Corporation and Texas Instruments Inc. marketed their first integrated circuits, Fairchild research director Gordon E. Moore made a prediction in a special issue of Electronics magazine. Observing that the total number of components in these circuits had roughly doubled each year, he blithely extrapolated this annual doubling to the next decade, estimating that microcircuits of 1975 would contain an astounding 65,000 components per chip.

History proved Moore correct. His bold extrapolation has since become enshrined as Moore’s law—though its doubling period was lengthened to 18 months in the mid-1970s. What has made this dramatic explosion in circuit complexity possible is the steadily shrinking size of transistors over the decades. Measured in millimetres in the late 1940s, the dimensions of a typical transistor today are about 10 nanometres, a reduction factor of over 100,000. Submicron transistor features were attained during the 1980s, when dynamic random-access memory (DRAM) chips began offering megabit storage capacities. At the dawn of the 21st century, these features approached 0.1 micron across, which allowed the manufacture of gigabit memory chips and microprocessors that operate at gigahertz frequencies. Moore’s law continued into the second decade of the 21st century with the introduction of three-dimensional transistors that were tens of nanometres in size.
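
The arithmetic behind the prediction is simple exponential doubling, sketched below. The 64-component starting chip is an assumed round number chosen so that ten annual doublings land near Moore's 65,000-component estimate; the 18-month period shows how much more slowly the revised law grows.

def component_count(start_count, years, doubling_period_years):
    """Components per chip after exponential doubling for a given period."""
    return start_count * 2 ** (years / doubling_period_years)

print(f"12-month doubling, 10 years: ~{component_count(64, 10, 1.0):,.0f} components")
print(f"18-month doubling, 10 years: ~{component_count(64, 10, 1.5):,.0f} components")

# 64 * 2**10 = 65,536, the order of magnitude Moore projected for 1975; the
# slower 18-month cadence yields only about 6,500 over the same decade.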

As the size of transistors has shrunk, their cost has plummeted correspondingly from tens of dollars apiece to thousandths of a penny. As Moore was fond of saying, every year more transistors are produced than raindrops over California, and it costs less to make one than to print a single character on the page of a book. They are by far the most common human artifact on the planet. Deeply embedded in everything electronic, transistors permeate modern life almost as thoroughly as molecules permeate matter. Cheap, portable, and reliable equipment based on this remarkable device can be found in almost any village and hamlet in the world. This tiny invention, by making possible the Information Age, has transformed the world into a truly global society, making it a far more intimately connected place than ever before.

Additional Information:

What is a transistor?

Transistors are the key building blocks of integrated circuits and microchips. They’re basically microscopic electronic switches or amplifiers. As such, they control the flow of electrical signals, enabling the chip to process and store information.

A transistor is usually made from silicon or another semiconductor material. The properties of these types of material are in between those of an electric conductive material (like a metal) and an insulator (like rubber).

Depending on the temperature, for example, or on the presence of impurities, they can either conduct or block electricity. This makes them perfectly suited to control electrical signals.

A transistor consists of three terminals: the base, collector, and emitter. Through these terminals, the transistor can control the flow of current in a circuit.

How does a transistor work?

When a small electrical current is applied to the transistor base, it allows a larger current to flow between the collector and the emitter. This is like a valve: a little pressure on the base controls a much bigger flow of electricity.

* If there is no current at the base, the transistor acts like a closed switch. No current flows between the collector and emitter.
* If there is a current at the base, the transistor opens up. Current flows through.

This ability to control electrical current allows transistors to work as a switch (switching things on and off) or as an amplifier (making signals stronger).

* As a switch: Transistors can rapidly turn on and off, representing binary states (0 and 1) that form the foundation of digital computing. When a small voltage is applied to the base terminal, it allows a larger current to flow between the collector and emitter, switching the transistor ‘on.’ When the voltage is removed, the current stops, and the transistor turns ‘off.’
* As an amplifier: Transistors can also be used to boost weak electrical signals. A small input signal applied to the base can control a larger output signal between the collector and emitter, amplifying it. This is essential in devices like radios, televisions, and audio systems, where signal amplification is necessary for proper operation.
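
The two roles just listed can be captured in a deliberately simple model, sketched below. The current gain value is assumed purely for illustration, and real transistors also have saturation limits and leakage that this toy model ignores.

CURRENT_GAIN = 100.0   # assumed ratio of collector current to base current (beta)

def collector_current(base_current_amps):
    """Collector-emitter current controlled by a much smaller base current."""
    if base_current_amps <= 0.0:
        return 0.0                              # no base current: switch is off
    return CURRENT_GAIN * base_current_amps     # small input controls a larger flow

print(collector_current(0.0))      # 0.0    -> off, representing a binary 0
print(collector_current(50e-6))    # 0.005  -> a 50 microampere base signal controls
                                   #           5 milliamperes: switch on / amplified
                                   #           output, representing a binary 1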

In digital electronics, transistors are used in large numbers to build logic gates. These form the foundation of computer processors. By switching on and off very quickly, transistors help process the binary code that computers use to operate.

The first transistors

The history of transistors starts in the 1940s. Scientists were looking for better ways to control electrical signals in devices like radios and televisions. At the time, these devices used vacuum tubes, which were big, used a lot of power, and often broke.

In 1947, three scientists at Bell Labs—John Bardeen, Walter Brattain, and William Shockley—created the first working transistor. This compact device could do the same job as a vacuum tube. But it was much smaller, used less power, and was more reliable.

By the 1950s, transistors were used in radios and early computers, making these devices smaller and easier to carry around. In 1956, MIT built one of the first computers that used transistors instead of vacuum tubes, showing just how useful they were for building faster, more efficient machines. By the 1960s, scientists figured out how to put many transistors onto a single chip. This led to the creation of increasingly powerful computer chips.

Why are transistors so important?

Transistors are the core components in integrated circuits and chips. Today’s microchips can contain billions of transistors, allowing devices to perform complex tasks at high speeds while using less power. Their invention and continuous miniaturization revolutionized electronics, making it possible to build smaller, faster, and more efficient devices.

Today, transistors are everywhere—in smartphones, laptops, and virtually all electronic devices. They are the key to the digital world we live in, and keep getting smaller and more powerful as technology improves.

[Image: NPN and PNP transistor circuit symbols]

#2 Re: Dark Discussions at Cafe Infinity » crème de la crème » Today 17:39:58

2428) Igor Tamm

Gist:

Work

In certain media the speed of light is lower than in a vacuum and particles can travel faster than light. One result of this was discovered in 1934 by Pavel Cherenkov, when he saw a bluish light around a radioactive preparation placed in water. Igor Tamm and Ilya Frank explained the phenomenon in 1937. On their way through a medium, charged particles disturb electrons in the medium. When these resume their position, they emit light. Normally this does not produce any light that can be observed, but if the particle moves faster than light, a kind of backwash of light appears.
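
The threshold and the cone of light can be made quantitative with the standard Cherenkov relation cos(theta) = 1/(n*beta), where beta is the particle speed as a fraction of c and n is the refractive index (about 1.33 for water). The short Python sketch below simply evaluates that formula for a few speeds.

import math

N_WATER = 1.33   # refractive index of water

threshold_beta = 1.0 / N_WATER
print(f"Cherenkov light in water requires v > {threshold_beta:.2f} c")

for beta in (0.80, 0.90, 0.99):
    angle_deg = math.degrees(math.acos(1.0 / (N_WATER * beta)))
    print(f"beta = {beta:.2f}  ->  light emitted at about {angle_deg:.0f} degrees")

# Below about 0.75 c no light is emitted; faster particles radiate on a cone
# whose opening angle grows with speed, which is the bluish glow Cherenkov saw.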

Summary

Igor Yevgenyevich Tamm (born July 8 [June 26, Old Style], 1895, Vladivostok, Siberia, Russia—died April 12, 1971, Moscow, Russia, Soviet Union) was a Soviet physicist who shared the 1958 Nobel Prize for Physics with Pavel A. Cherenkov and Ilya M. Frank for his efforts in explaining Cherenkov radiation. Tamm was one of the theoretical physicists who contributed to the construction of the first Soviet thermonuclear bomb.

Tamm’s father was an engineer in the city of Yelizavetgrad (now Kirovohrad, Ukr.), where he was responsible for building and managing electric power stations and water systems. Tamm graduated from the gymnasium there in 1913 and went abroad to study at the University of Edinburgh. The following year he returned to Moscow State University, and he graduated in 1918. In 1924 he became a lecturer in the physics department, and in 1930 he succeeded his mentor, Leonid I. Mandelstam, to the chair of theoretical physics. In 1933 Tamm was elected a corresponding member of the Soviet Academy of Sciences. The following year, he joined the P.N. Lebedev Physics Institute of the Soviet Academy of Sciences (FIAN), where he organized and headed the theoretical division, a position he occupied until his death.

Tamm’s early studies of unique forms of electron bonding (“Tamm surface levels”) on the surfaces of crystalline solids had important applications in the later development of solid-state semiconductor devices. In 1934 Cherenkov had discovered that light is emitted when gamma rays pass through a liquid medium. In 1937 Tamm and Frank explained this phenomenon as the emission of light waves by electrically charged particles moving faster than the speed of light in a medium. Tamm developed this theory more fully in a paper published in 1939. For these discoveries Tamm, Frank, and Cherenkov received the 1958 Nobel Prize for Physics.

Immediately after World War II, Tamm, though a major theoretician, was not assigned to work on the atomic bomb project, possibly for political reasons. In particular, he was branded a “bourgeois idealist” and his brother an “enemy of the state.” Nevertheless, in June 1948, when physicist Igor V. Kurchatov needed a strong team to investigate the feasibility of creating a thermonuclear bomb, Tamm was recruited to organize the theoretical division of FIAN in Moscow. The Tamm group came to include physicists Yakov B. Zeldovich, Vitaly L. Ginzburg, Semyon Z. Belenky, Andrey D. Sakharov, Efim S. Fradkin, Yuri A. Romanov, and Vladimir Y. Fainberg. Between March and April 1950, Tamm and several members of his group were sent to the secret installation known as Arzamas-16 (near the present-day village of Sarov) to work under physicist Yuly Khariton’s direction on a thermonuclear bomb project. One bomb design, known as the Sloika (“Layer Cake”), was successfully tested on Aug. 12, 1953. Tamm was elected a full member of the Academy of Sciences in October 1953 and the same year was awarded the title Hero of Socialist Labour. On Nov. 22, 1955, the Soviet Union successfully tested a more modern thermonuclear bomb that was analogous to the design of the American physicists Edward Teller and Stanislaw Ulam.

Tamm spent the latter decades of his career at the Lebedev Institute, where he worked on building a reactor for controlled fusion, in which a powerful magnetic field confines hot plasma in a donut-shaped device known as a tokamak.

Details

Igor Yevgenyevich Tamm (8 July 1895 – 12 April 1971) was a Soviet physicist who received the 1958 Nobel Prize in Physics, jointly with Pavel Alekseyevich Cherenkov and Ilya Mikhailovich Frank, for the discovery and theoretical interpretation of Cherenkov radiation. He also predicted the quasi-particle of sound, the phonon, and in 1951, together with Andrei Sakharov, proposed the Tokamak system.

Biography

Igor Tamm was born in 1895 in Vladivostok into the family of Eugene Tamm, a civil engineer, and his wife Olga Davydova. According to Russian sources, Tamm had German noble descent on his father's side through his grandfather Theodor Tamm, who emigrated from Thuringia. Although his surname "Tamm" is rather common in Estonia, other sources state he was Jewish or had Jewish ancestry.

He studied at a gymnasium in Elisavetgrad (now Kropyvnytskyi, Ukraine). In 1913–1914 he studied at the University of Edinburgh together with his school-friend Boris Hessen.

At the outbreak of World War I in 1914 he joined the army as a volunteer field medic. In 1917 he joined the Revolutionary movement and became an active anti-war campaigner, serving on revolutionary committees after the March Revolution. He returned to the Moscow State University from which he graduated in 1918.

Tamm married Nataliya Shuyskaya (1894–1980) in September 1917. She belonged to a noble Rurikid Shuysky family. They eventually had two children, Irina (1921–2009, chemist) and Evgeny (1926–2008, experimental physicist and famous mountain climber, leader of the Soviet Everest expedition in 1982).

On 1 May 1923, Tamm began teaching physics at the Second Moscow State University. The same year, he finished his first scientific paper, Electrodynamics of the Anisotropic Medium in the Special Theory of Relativity. In 1928, he spent a few months with Paul Ehrenfest at the University of Leiden and made a life-long friendship with Paul Dirac. From 1934 until his death in 1971 Tamm was the head of the theoretical department at Lebedev Physical Institute in Moscow.

In 1932, Tamm published a paper with his proposal of the concept of surface states. This concept is important for metal–oxide–semiconductor field-effect transistor (MOSFET) physics.

In 1934, Tamm and Semen Altshuller suggested that the neutron has a non-zero magnetic moment. The idea was met with scepticism at the time, as the neutron was supposed to be an elementary particle with zero charge and thus could not have a magnetic moment. The same year, Tamm proposed that proton-neutron interactions could be described as an exchange force transmitted by a yet unknown massive particle, an idea later developed by Hideki Yukawa into a theory of meson forces.

In 1945 he developed an approximation method for many-body physics. Because Sidney Dancoff developed it independently in 1950, it is now called the Tamm-Dancoff approximation.

He shared the 1958 Nobel Prize in Physics with Pavel Cherenkov and Ilya Frank for the discovery and interpretation of the Cherenkov-Vavilov effect.

From the late 1940s to the early 1950s, Tamm was involved in the Soviet thermonuclear bomb project; in 1949–1953 he spent most of his time in the "secret city" of Sarov, working as head of the theoretical group developing the hydrogen bomb. He retired from the project and returned to the Moscow Lebedev Physical Institute after the first successful test of a hydrogen bomb in 1953.

In 1951, together with Andrei Sakharov, Tamm proposed a tokamak system for realizing controlled thermonuclear fusion on the basis of a toroidal magnetic thermonuclear reactor, and soon afterwards the first such devices were built by the INF. In 1968 the Soviet T-3 magnetic confinement device obtained plasma parameters unique for that time, with temperatures more than an order of magnitude higher than the rest of the community had expected. Western scientists visited the experiment and verified the high temperatures and confinement, sparking a wave of optimism about the tokamak and prompting the construction of new experiments; the tokamak remains the dominant magnetic confinement device today.

In 1964 he was elected a Member of the German Academy of Sciences Leopoldina.

Tamm was a student of Leonid Isaakovich Mandelshtam in science and life.

Tamm was an atheist.

Tamm died in Moscow, Soviet Union, on 12 April 1971 and is buried at Novodevichy Cemetery. The lunar crater Tamm is named after him.

[Image: portrait of Igor Tamm]

#3 Re: This is Cool » Miscellany » Today 17:23:15

2490) Refractory

Gist

Refractory materials are specialized, non-metallic substances engineered to withstand extreme temperatures (often above 1,000 °F, or 538 °C), high pressure, and corrosion without softening or deforming. They are critical for lining furnaces, kilns, and reactors. Key properties include high thermal stability, low thermal expansion, and resistance to molten slag and thermal shock.

Refractory materials are inorganic (not from living material), non-metal substances that can withstand extremely high temperatures without any loss of strength or shape. They are used in devices, such as furnaces, that heat substances and in tanks and other storage devices that hold hot materials.

Summary:

What Are Refractories?

Refractories are ceramic materials designed to withstand the very high temperatures (in excess of 1,000°F [538°C]) encountered in modern manufacturing. More heat-resistant than metals, they are used to line the hot surfaces found inside many industrial processes.

In addition to being resistant to thermal stress and other physical phenomena induced by heat, refractories can withstand physical wear and corrosion caused by chemical agents. Thus, they are essential to the manufacture of petrochemical products and the refining of gasoline.

Refractory products generally fall into one of two broad categories: preformed shapes or unformed compositions, often called specialty or monolithic refractories. Then, there are refractory ceramic fibers, which resemble residential insulation, but insulate at much higher temperatures. Bricks and shapes are the more traditional form of refractories and historically have accounted for the majority of refractory production.

Refractories come in all shapes and sizes. They can be pressed or molded for use in floors and walls, produced in interlocking shapes and wedges, or curved to fit the insides of boilers and ladles. Some refractory parts are small and possess a complex and delicate geometry; others, in the form of precast or fusion-cast blocks, are massive and may weigh several tons.

What Are Refractories Made Of?

Refractories are produced from natural and synthetic materials, usually nonmetallic, or combinations of compounds and minerals such as alumina, fireclays, bauxite, chromite, dolomite, magnesite, silicon carbide, and zirconia.

What Are Refractories Used For?

From the simple (e.g., fireplace brick linings) to the sophisticated (e.g., reentry heat shields for the space shuttle), refractories are used to contain heat and protect processing equipment from intense temperatures. In industry, they are used to line boilers and furnaces of all types (reactors, ladles, stills, kilns, etc.).

It is a tribute to refractory engineers, scientists, technicians, and plant personnel that more than 5,000 brand name products in the United States are listed in the latest “Product Directory of the Refractories Industry.”

Details

In materials science, a refractory (or refractory material) is a material that is resistant to decomposition by heat or chemical attack and that retains its strength and rigidity at high temperatures. They are inorganic, non-metallic compounds that may be porous or non-porous, and their crystallinity varies widely: they may be crystalline, polycrystalline, amorphous, or composite. They are typically composed of oxides, carbides or nitrides of the following elements: silicon, aluminium, magnesium, calcium, boron, chromium and zirconium. Many refractories are ceramics, but some such as graphite are not, and some ceramics such as clay pottery are not considered refractory. Refractories are distinguished from the refractory metals, which are elemental metals and their alloys that have high melting temperatures.

Refractories are defined by ASTM C71 as "non-metallic materials having those chemical and physical properties that make them applicable for structures, or as components of systems, that are exposed to environments above 1,000 °F (811 K; 538 °C)". Refractory materials are used in furnaces, kilns, incinerators, and reactors. Refractories are also used to make crucibles and molds for casting glass and metals. The iron and steel industry and metal casting sectors use approximately 70% of all refractories produced.

Refractory materials

Refractory materials must be chemically and physically stable at high temperatures. Depending on the operating environment, they must be resistant to thermal shock, be chemically inert, and/or have specific ranges of thermal conductivity and of the coefficient of thermal expansion.

The oxides of aluminium (alumina), silicon (silica) and magnesium (magnesia) are the most important materials used in the manufacturing of refractories. Another oxide usually found in refractories is the oxide of calcium (lime). Fire clays are also widely used in the manufacture of refractories.

Refractories must be chosen according to the conditions they face. Some applications require special refractory materials. Zirconia is used when the material must withstand extremely high temperatures. Silicon carbide and carbon (graphite) are two other refractory materials used in some very severe temperature conditions, but they cannot be used in contact with oxygen, as they would oxidize and burn.

Binary compounds such as tungsten carbide or boron nitride can be very refractory. Hafnium carbide is the most refractory binary compound known, with a melting point of 3890 °C. The ternary compound tantalum hafnium carbide has one of the highest melting points of all known compounds (4215 °C).

Molybdenum disilicide has a high melting point of 2030 °C and is often used as a heating element.

Uses

Refractory materials are useful for the following functions:

* Serving as a thermal barrier between a hot medium and the wall of a containing vessel
* Withstanding physical stresses and preventing erosion of vessel walls due to the hot medium
* Protecting against corrosion
* Providing thermal insulation

Refractories have multiple useful applications. In the metallurgy industry, refractories are used for lining furnaces, kilns, reactors, and other vessels which hold and transport hot media such as metal and slag. Refractories have other high temperature applications such as fired heaters, hydrogen reformers, ammonia primary and secondary reformers, cracking furnaces, utility boilers, catalytic cracking units, air heaters, and sulfur furnaces. They are used for surfacing flame deflectors in rocket launch structures.

Additional Information

A refractory is any material that has an unusually high melting point and that maintains its structural properties at very high temperatures. Composed principally of ceramics, refractories are employed in great quantities in the metallurgical, glassmaking, and ceramics industries, where they are formed into a variety of shapes to line the interiors of furnaces, kilns, and other devices that process materials at high temperatures.

In this article the essential properties of ceramic refractories are reviewed, as are the principal refractory materials and their applications. At certain points in the article reference is made to the processing techniques employed in the manufacture of ceramic refractories; more detailed description of these processes can be found in the articles traditional ceramics and advanced ceramics. The connection between the properties of ceramic refractories and their chemistry and microstructure is explained in ceramic composition and properties.

Properties

Because of the high strengths exhibited by their primary chemical bonds, many ceramics possess unusually good combinations of high melting point and chemical inertness. This makes them useful as refractories. (The word refractory comes from the French réfractaire, meaning “high-melting.”) The property of chemical inertness is of special importance in metallurgy and glassmaking, where the furnaces are exposed to extremely corrosive molten materials and gases. In addition to temperature and corrosion resistance, refractories must possess superior physical wear or abrasion resistance, and they also must be resistant to thermal shock. Thermal shock occurs when an object is rapidly cooled from high temperature. The surface layers contract against the inner layers, leading to the development of tensile stress and the propagation of cracks. Ceramics, in spite of their well-known brittleness, can be made resistant to thermal shock by adjusting their microstructure during processing. The microstructure of ceramic refractories is quite coarse when compared with whitewares such as porcelain or even with less finely textured structural clay products such as brick. The size of filler grains can be on the scale of millimetres, instead of the micrometre scale seen in whiteware ceramics. In addition, most ceramic refractory products are quite porous, with large amounts of air spaces of varying size incorporated into the material. The presence of large grains and pores can reduce the load-bearing strength of the product, but it also can blunt cracks and thereby reduce susceptibility to thermal shock. However, in cases where a refractory will come into contact with corrosive substances (for example, in glass-melting furnaces), a porous structure is undesirable. The ceramic material can then be made with a higher density, incorporating smaller amounts of pores.

Composition and processing

The composition and processing of ceramic refractories vary widely according to the application and the type of refractory. Most refractories can be classified on the basis of composition as either clay-based or nonclay-based. In addition, they can be classified as either acidic (containing silica [SiO2] or zirconia [ZrO2]) or basic (containing alumina [Al2O3] or alkaline-earth oxides such as lime [CaO] or magnesia [MgO]). Among the clay-based refractories are fireclay, high-alumina, and mullite ceramics. There is a wide range of nonclay refractories, including basic, extra-high alumina, silica, silicon carbide, and zircon materials. Most clay-based products are processed in a manner similar to other traditional ceramics such as structural clay products; e.g., stiff-mud processes such as press forming or extrusion are employed to form the ware, which is subsequently dried and passed through long tunnel kilns for firing. Firing, as described in the article traditional ceramics, induces partial vitrification, or glass formation, which is a liquid-sintering process that binds particles together. Nonclay-based refractories, on the other hand, are bonded using techniques reserved for advanced ceramic materials. For instance, extra-high alumina and zircon ceramics are bonded by transient-liquid or solid-state sintering, basic bricks are bonded by chemical reactions between constituents, and silicon carbide is reaction-bonded from silica sand and coke. These processes are described in the article advanced ceramics.

Clay-based refractories

In this section the composition and properties of the clay-based refractories are described. Most are produced as preformed brick. Most of the remaining production consists of so-called monolithics, materials that can be formed and solidified on-site. This category includes mortars for cementing bricks and mixes for ramming or gunning (spraying from a pressure gun) into place. In addition, lightweight refractory insulation can be made in the form of fibreboards, blankets, and vacuum-cast shapes.

Fireclay

The workhorses of the clay-based refractories are the so-called fireclay materials. These are made from clays containing the aluminosilicate mineral kaolinite (Al2[Si2O5][OH]4) plus impurities such as alkalis and iron oxides. The alumina content ranges from 25 to 45 percent. Depending upon the impurity content and the alumina-to-silica ratio, fireclays are classified as low-duty, medium-duty, high-duty, and super-duty, with use temperature rising as alumina content increases. Fireclay bricks, or firebricks, exhibit relatively low expansion upon heating and are therefore moderately resistant to thermal shock. They are fairly inert in acidic environments but are quite reactive in basic environments. Fireclay bricks are used to line portions of the interiors of blast furnaces, blast-furnace stoves, and coke ovens.

High alumina

High-alumina refractories are made from bauxite, a naturally occurring material containing aluminum hydroxide (Al[OH]3) and kaolinitic clays. These raw materials are roasted to produce a mixture of synthetic alumina and mullite (an aluminosilicate mineral with the chemical formula 3Al2O3 · 2SiO2). By definition high-alumina refractories contain between 50 and 87.5 percent alumina. They are much more robust than fireclay refractories at high temperatures and in basic environments. In addition, they exhibit better volume stability and abrasion resistance. High-alumina bricks are used in blast furnaces, blast-furnace stoves, and liquid-steel ladles.

Mullite

Mullite is an aluminosilicate compound with the specific formula 3Al2O3 · 2SiO2 and an alumina content of approximately 70 percent. It has a melting point of 1,850° C (3,360° F). Various clays are mixed with bauxite in order to achieve this composition. Mullite refractories are solidified by sintering in electric furnaces at high temperatures. They are the most stable of the aluminosilicate refractories and have excellent resistance to high-temperature loading. Mullite bricks are used in blast-furnace stoves and in the forehearth roofs of glass-melting furnaces.
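
As a quick check of that alumina figure, the mass fraction of Al2O3 in stoichiometric mullite (3Al2O3 · 2SiO2) can be worked out from standard atomic masses. The short Python sketch below (rounded values, for illustration only; the variable names are mine) gives roughly 72 percent, consistent with the "approximately 70 percent" quoted above.

# Mass fraction of alumina in mullite, 3Al2O3 . 2SiO2 (illustrative sketch)
M_Al, M_Si, M_O = 26.98, 28.09, 16.00         # standard atomic masses, g/mol
M_Al2O3 = 2 * M_Al + 3 * M_O                  # 101.96 g/mol
M_SiO2  = M_Si + 2 * M_O                      # 60.09 g/mol
mullite = 3 * M_Al2O3 + 2 * M_SiO2            # formula mass of 3Al2O3 . 2SiO2
print(round(100 * 3 * M_Al2O3 / mullite, 1))  # about 71.8 percent alumina by mass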

Non-clay-based refractories

Nonclay refractories such as those described below are produced almost exclusively as bricks and pressed shapes, though some magnesite-chrome and alumina materials are fuse-cast into molds. The usual starting materials for these products are carbonates or oxides of metals such as magnesium, aluminum, and zirconium.

Basic

Basic refractories include magnesia, dolomite, chrome, and combinations of these materials. Magnesia brick is made from periclase, the mineral form of magnesia (MgO). Periclase is produced from magnesite (a magnesium carbonate, MgCO3), or it is produced from magnesium hydroxide (Mg[OH]2), which in turn is derived from seawater or underground brine solutions. Magnesia bricks can be chemically bonded, pitch-bonded, burned, or burned and then pitch-impregnated.

Dolomite refractories take their name from the dolomite ore, a combination of calcium and magnesium carbonates (CaCO3 · MgCO3), from which they are produced. After burning they must be impregnated with tar or pitch to prevent rehydration of lime (CaO). Chrome brick is made from chromium ores, which are complex solid solutions of the spinel type (a series of oxide minerals including chromite and magnetite) plus silicate gangue, or impurity phases.

All the basic refractories exhibit outstanding resistance to iron oxides and the basic slags associated with steelmaking—especially when they incorporate carbon additions either as flakes or as residual carbon from pitch-bonding or tar-impregnation. For this reason they find wide employment in the linings of basic oxygen furnaces, electric furnaces, and open-hearth furnaces. They also are used to line the insides of copper converters.

Extra-high alumina

Extra-high alumina refractories are classified as having between 87.5 and 100 percent Al2O3 content. The alumina grains are fused or densely sintered together to obtain high density. Extra-high alumina refractories exhibit excellent volume stability to over 1,800° C (3,275° F).

Silica

Silica refractories are made from quartzites and silica gravel deposits with low alumina and alkali contents. They are chemically bonded with 3–3.5 percent lime. Silica refractories have good load resistance at high temperatures, are abrasion-resistant, and are particularly suited to containing acidic slags. Of the various grades—coke-oven quality, conventional, and super-duty—the super-duty, which has particularly low impurity contents, is used in the superstructures of glass-melting furnaces.

Zircon

Refractories made of zircon (a zirconium silicate, ZrSiO4) also are used in glass tanks because of their good resistance to the corrosive action of molten glasses. They possess good volume stability for extended periods at elevated temperatures, and they also show good creep resistance (i.e., low deformation under hot loading).

Silicon carbide

Silicon carbide (SiC) ceramics are made by a process referred to as reaction bonding, invented by the American Edward G. Acheson in 1891. In the Acheson process, pure silica sand and finely divided carbon (coke) are reacted in an electric furnace at temperatures in the range of 2,200°–2,480° C (4,000°–4,500° F). SiC ceramics have outstanding high-temperature load-bearing strength and dimensional stability. They also exhibit great thermal shock resistance because of their high thermal conductivity. (In this case, high thermal conductivity prevents the formation of extreme temperature differences between inner and outer layers of a material, which frequently are a source of thermal expansion stresses.) Therefore, SiC makes good kiln furniture for supporting other ceramics during their firing.

Other non-clay-based refractories

Other refractories produced in smaller quantities for special applications include graphite (a layered, multicrystalline form of carbon), zirconia (ZrO2), forsterite (Mg2SiO4), and combinations such as magnesia-alumina, magnesite-chrome, chrome-alumina, and alumina-zirconia-silica. Alumina-zirconia-silica (AZS), which is melted and cast into molds or directly into the melting tanks of glass furnaces, is an excellent corrosion-resistant refractory that does not release impurities into the glass melt. AZS is also poured to make tank blocks (also called soldier blocks or sidewall blocks) used in the construction and repair of glass furnaces.

#4 Science HQ » Reflection » Today 16:44:00

Jai Ganesh
Replies: 0

Reflection

Gist

Reflection is the phenomenon where waves, like light or sound, bounce off a surface and return to their original medium, creating an image or echo, with the angle of incidence (incoming angle) equaling the angle of reflection (bouncing-off angle) for smooth surfaces, as seen in mirrors or water. In mathematics it also describes a "flip" that creates a mirror image, and in everyday language it can refer to careful thought or to something that is an indication of something else.

Reflection of light is the process where a light ray strikes a surface and bounces back into the same medium, enabling vision and image formation. It follows two key laws: the angle of incidence equals the angle of reflection, and the incident ray, reflected ray, and normal all lie in the same plane. 

Reflection of light is the process where light rays strike a surface—typically smooth and shiny like a mirror—and bounce back into the original medium. This phenomenon, which enables vision and image formation, follows two fundamental laws: the angle of incidence equals the angle of reflection, and the incident ray, normal, and reflected ray all lie in the same plane.

Summary

Reflection is an abrupt change in the direction of propagation of a wave that strikes the boundary between different mediums. At least part of the oncoming wave disturbance remains in the same medium. Regular reflection, which follows a simple law, occurs at plane boundaries. The angle between the direction of motion of the oncoming wave and a perpendicular to the reflecting surface (angle of incidence) is equal to the angle between the direction of motion of the reflected wave and a perpendicular (angle of reflection). Reflection at rough, or irregular, boundaries is diffuse. The reflectivity of a surface material is the fraction of energy of the oncoming wave that is reflected by it.

Details

Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected.

In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

Reflection of Light

A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass, although the reflection is generally less effective compared with mirrors.

In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle.
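
As a small, hedged illustration of those two ideas, the sketch below uses the well-known normal-incidence special case of the Fresnel equations, R = ((n1 - n2)/(n1 + n2))^2, together with the critical angle for total internal reflection, arcsin(n2/n1). The refractive indices for air, water, and glass are typical textbook values supplied by me, not taken from this article.

import math

def reflectance_normal(n1, n2):
    """Fraction of light reflected at normal incidence (Fresnel, special case)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection, going from denser n1 into n2."""
    return math.degrees(math.asin(n2 / n1))   # only defined when n1 > n2

print(reflectance_normal(1.0, 1.5))    # air -> glass: about 0.04 (4% reflected)
print(critical_angle_deg(1.33, 1.0))   # water -> air: about 48.8 degrees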

Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.

When light reflects off a material with higher refractive index than the medium in which it is traveling, it undergoes a 180° phase shift. In contrast, when light reflects off a material with lower refractive index, the reflected light is in phase with the incident light. This is an important principle in the field of thin-film optics.

Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.

Laws of reflection

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:

1) The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.
2) The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal.
3) The reflected ray and the incident ray are on the opposite sides of the normal.

These three laws can all be derived from the Fresnel equations.
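
For completeness, these laws can also be written as a single vector relation: the reflected direction is r = d - 2(d·n)n, where d is the incident direction and n the unit surface normal. The short sketch below is my own illustration of that relation, not something stated in the text above.

def reflect(d, n):
    """Reflect incident direction d off a surface with unit normal n (3D vectors)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray travelling down and to the right hits a horizontal surface (normal = +y):
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))   # -> (1.0, 1.0, 0.0)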

Mechanism

In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.

In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons.

In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π radians (180°), so the forward radiation cancels the incident light, and backward radiation is just the reflected light.

Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.

Diffuse reflection

When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law.
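
As a small numerical illustration of Lambert's cosine law (a sketch using my own symbols, not quantities defined in this article), the radiant intensity leaving an ideal diffuse surface falls off with the cosine of the angle measured from the surface normal:

import math

def lambert_intensity(i0, angle_deg):
    """Radiant intensity from an ideal (Lambertian) diffuse surface at a given
    angle from the surface normal, per Lambert's cosine law."""
    return i0 * math.cos(math.radians(angle_deg))

for angle in (0, 30, 60, 90):
    print(angle, round(lambert_intensity(1.0, angle), 3))  # 1.0, 0.866, 0.5, 0.0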

The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.

Retroreflection

Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came.

When flying over clouds illuminated by sunlight the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet's surface and reflective properties at the backside of the droplet.

Some animals' retinas act as retroreflectors (see tapetum lucidum for more detail), as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.

A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.

Multiple reflections

When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie on a circle. The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembled as a pyramid, in which each pair of mirrors sits at an angle to the others, lie on a sphere. If the base of the pyramid is rectangular, the images spread over a section of a torus.

Note that these are theoretical ideals, requiring perfect alignment of perfectly smooth, perfectly flat perfect reflectors that absorb none of the light. In practice, these situations can only be approached but not achieved because the effects of any surface imperfections in the reflectors propagate and magnify, absorption gradually extinguishes the image, and any observing equipment (biological or technological) will interfere.

Complex conjugate reflection

In this process (which is also known as phase conjugation), light bounces exactly back in the direction from which it came due to a nonlinear optical process. Not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. If one were to look into a complex conjugating mirror, it would be black because only the photons which left the pupil would reach the pupil.

Other types of reflection:

Neutron reflection

Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off atoms within a material is commonly used to determine the material's internal structure.

Sound reflection

When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction. Sound reflection can affect the acoustic space.
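
Those wavelength figures follow directly from λ = v/f, taking the speed of sound in air as roughly 343 m/s (an assumed textbook value; the article itself does not state it). A minimal sketch:

SPEED_OF_SOUND = 343.0                    # m/s in air at about 20 C (assumed value)

def wavelength(frequency_hz):
    """Wavelength in metres for a sound wave of the given frequency in air."""
    return SPEED_OF_SOUND / frequency_hz

print(wavelength(20))      # about 17 m (lowest audible frequencies)
print(wavelength(17000))   # about 0.02 m, i.e. roughly 20 mm (highest audible frequencies)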

Seismic reflection

Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.

Time reflections

Scientists have speculated that there could be time reflections. Scientists from the Advanced Science Research Center at the CUNY Graduate Center report that they observed time reflections by sending broadband signals into a strip of metamaterial filled with electronic switches. The "time reflections" in electromagnetic waves are discussed in a 2023 paper published in the journal Nature Physics.

Additional Information:

Introduction to the Reflection of Light

Light reflection occurs when a ray of light bounces off a surface and changes direction. From a detailed definition of ‘reflection of light’ to the different types of reflection and example images, our introductory article tells you everything you need to know about the reflection of light.

What is Reflection of Light?

Reflection of light (and other forms of electromagnetic radiation) occurs when the waves encounter a surface or other boundary that does not absorb the energy of the radiation and bounces the waves away from the surface.

Reflection of Light Example

The simplest example of visible light reflection is the surface of a smooth pool of water, where incident light is reflected in an orderly manner to produce a clear image of the scenery surrounding the pool. Throw a rock into the pool, and the water is perturbed to form waves, which disrupt the reflection by scattering the reflected light rays in all directions.

Who Discovered the Reflection of Light?

Some of the earliest accounts of light reflection originate from the ancient Greek mathematician Euclid, who conducted a series of experiments around 300 BC, and appears to have had a good understanding of how light is reflected. However, it wasn’t until more than a millennium later that the Arab scientist Alhazen proposed a law describing exactly what happens to a light ray when it strikes a smooth surface and then bounces off into space.

The incoming light wave is referred to as an incident wave, and the wave that is bounced away from the surface is termed the reflected wave. Visible white light that is directed onto the surface of a mirror at an angle (incident) is reflected back into space by the mirror surface at another angle (reflected) that is equal to the incident angle, as when a beam of light from a flashlight strikes a smooth, flat mirror. Thus, the angle of incidence is equal to the angle of reflection for visible light as well as for all other wavelengths of the electromagnetic radiation spectrum. This concept is often termed the Law of Reflection. It is important to note that the light is not separated into its component colors because it is not being “bent” or refracted, and all wavelengths are being reflected at equal angles. The best surfaces for reflecting light are very smooth, such as a glass mirror or polished metal, although almost all surfaces will reflect light to some degree.

Because light behaves in some ways as a wave and in other ways as if it were composed of particles, several independent theories of light reflection have emerged. According to wave-based theories, the light waves spread out from the source in all directions, and upon striking a mirror, are reflected at an angle determined by the angle at which the light arrives. The reflection process inverts each wave back-to-front, which is why a reverse image is observed. The shape of light waves depends upon the size of the light source and how far the waves have traveled to reach the mirror. Wavefronts that originate from a source near the mirror will be highly curved, while those emitted by distant light sources will be almost linear, a factor that will affect the angle of reflection.

According to particle theory, which differs in some important details from the wave concept, light arrives at the mirror in the form of a stream of tiny particles, termed photons, which bounce away from the surface upon impact. Because the particles are so small, they travel very close together (virtually side by side) and bounce from different points, so their order is reversed by the reflection process, producing a mirror image. Regardless of whether light is acting as particles or waves, the result of reflection is the same. The reflected light produces a mirror image.

The amount of light reflected by an object, and how it is reflected, is highly dependent upon the degree of smoothness or texture of the surface. When surface imperfections are smaller than the wavelength of the incident light (as in the case of a mirror), virtually all of the light is reflected equally. However, in the real world most objects have convoluted surfaces that exhibit a diffuse reflection, with the incident light being reflected in all directions. Many of the objects that we casually view every day (people, cars, houses, animals, trees, etc.) do not themselves emit visible light but reflect incident natural sunlight and artificial light. For instance, an apple appears a shiny red color because it has a relatively smooth surface that reflects red light and absorbs other non-red (such as green, blue, and yellow) wavelengths of light.

How Many Types of Reflection of Light Are There?

The reflection of light can be roughly categorized into two types of reflection. Specular reflection is defined as light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that tend to reflect light in all directions. There are far more occurrences of diffuse reflection than specular reflection in our everyday environment.

To visualize the differences between specular and diffuse reflection, consider two very different surfaces: a smooth mirror and a rough reddish surface. The mirror reflects all of the components of white light (such as red, green, and blue wavelengths) almost equally and the reflected specular light follows a trajectory having the same angle from the normal as the incident light. The rough reddish surface, however, does not reflect all wavelengths because it absorbs most of the blue and green components, and reflects the red light. Also, the diffuse light that is reflected from the rough surface is scattered in all directions.

How Do Mirrors Reflect Light?

Perhaps the best example of specular reflection, which we encounter on a daily basis, is the mirror image produced by a household mirror that people might use many times a day to view their appearance. The mirror’s smooth reflective glass surface renders a virtual image of the observer from the light that is reflected directly back into the eyes. This image is referred to as “virtual” because it does not actually exist (no light is produced) and appears to be behind the plane of the mirror due to an assumption that the brain naturally makes. The way in which this occurs is easiest to visualize when looking at the reflection of an object placed on one side of the observer, so that the light from the object strikes the mirror at an angle and is reflected toward the observer’s eyes.

The type of reflection that is seen in a mirror depends upon the mirror’s shape and, in some cases, how far away from the mirror the object being reflected is positioned. Mirrors are not always flat and can be produced in a variety of configurations that provide interesting and useful reflection characteristics. Concave mirrors, commonly found in the largest optical telescopes, are used to collect the faint light emitted from very distant stars. The curved surface concentrates parallel rays from a great distance into a single point for enhanced intensity. This mirror design is also commonly found in shaving or cosmetic mirrors where the reflected light produces a magnified image of the face. The inside of a shiny spoon is a common example of a concave mirror surface, and can be used to demonstrate some properties of this mirror type. If the inside of the spoon is held close to the eye, a magnified upright view of the eye will be seen (in this case the eye is closer than the focal point of the mirror). If the spoon is moved farther away, a demagnified upside-down view of the whole face will be seen. Here the image is inverted because it is formed after the reflected rays have crossed the focal point of the mirror surface.

Another common mirror having a curved surface, the convex mirror, is often used in automobile rear-view reflector applications where the outward mirror curvature produces a smaller, more panoramic view of events occurring behind the vehicle. When parallel rays strike the surface of a convex mirror, the light waves are reflected outward so that they diverge. When the brain retraces the rays, they appear to come from behind the mirror where they would converge, producing a smaller upright image (the image is upright since the virtual image is formed before the rays have crossed the focal point). Convex mirrors are also used as wide-angle mirrors in hallways and businesses for security and safety. The most amusing applications for curved mirrors are the novelty mirrors found at state fairs, carnivals, and fun houses. These mirrors often incorporate a mixture of concave and convex surfaces, or surfaces that gently change curvature, to produce bizarre, distorted reflections when people observe themselves.

The concave mirror has a reflecting surface that curves inward, resembling a portion of the interior of a sphere. When light rays that are parallel to the principal or optical axis reflect from the surface of a concave mirror, they converge on the focal point in front of the mirror. The distance from the reflecting surface to the focal point is known as the mirror's focal length. The size of the image depends upon the distance of the object from the mirror and its position with respect to the mirror surface. When the object is placed beyond the center of curvature, the reflected image is upside down and positioned between the mirror's center of curvature and its focal point.

The convex mirror has a reflecting surface that curves outward, resembling a portion of the exterior of a sphere. Light rays parallel to the optical axis are reflected from the surface in a direction that diverges from the focal point, which is behind the mirror. Images formed with convex mirrors are always right side up and reduced in size. These images are also termed virtual images, because they occur where reflected rays appear to diverge from a focal point behind the mirror.
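
The image positions and sizes described for these curved mirrors follow from the standard mirror equation, 1/f = 1/do + 1/di, with magnification m = -di/do. The sketch below is my own illustration; the sign convention and the numbers used are assumptions, not values from the text.

def mirror_image(focal_length, object_distance):
    """Image distance and magnification for a spherical mirror.
    Positive focal length = concave mirror; negative = convex mirror.
    Positive image distance = real image in front of the mirror."""
    image_distance = 1.0 / (1.0 / focal_length - 1.0 / object_distance)
    magnification = -image_distance / object_distance
    return image_distance, magnification

print(mirror_image(10.0, 30.0))   # concave, object beyond f: real, inverted, smaller image
print(mirror_image(-10.0, 30.0))  # convex: virtual, upright, reduced image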

Total Internal Reflection of Light

The principle of total internal reflection is the basis for fiber optic light transmission that makes possible medical procedures such as endoscopy, telephone voice transmissions encoded as light pulses, and devices such as fiber optic illuminators that are widely used in microscopy and other tasks requiring precision lighting effects. The prisms employed in binoculars and in single-lens reflex cameras also utilize total internal reflection to direct images through several 90-degree angles and into the user’s eye. In the case of fiber optic transmission, light entering one end of the fiber is reflected internally numerous times from the wall of the fiber as it zigzags toward the other end, with none of the light escaping through the thin fiber walls. This method of “piping” light can be maintained for long distances and with numerous turns along the path of the fiber.

Total internal reflection is only possible under certain conditions. The light is required to travel in a medium that has relatively high refractive index, and this value must be higher than that of the surrounding medium. Water, glass, and many plastics are therefore suitable for use when they are surrounded by air. If the materials are chosen appropriately, reflections of the light inside the fiber or light pipe will occur at a shallow angle to the inner surface, and all light will be totally contained within the pipe until it exits at the far end. At the entrance to the optic fiber, however, the light must strike the end at a high incidence angle in order to travel across the boundary and into the fiber.

The principles of reflection are exploited to great benefit in many optical instruments and devices, and this often includes the application of various mechanisms to reduce reflections from surfaces that take part in image formation. The concept behind antireflection technology is to control the light used in an optical device in such a manner that the light rays reflect from surfaces where it is intended and beneficial, and do not reflect away from surfaces where this would have a deleterious effect on the image being observed. One of the most significant advances made in modern lens design, whether for microscopes, cameras, or other optical devices, is the improvement in antireflection coating technology.

#5 Dark Discussions at Cafe Infinity » Combined Quotes - I » Today 15:52:46

Jai Ganesh
Replies: 0

Combined Quotes - I

1. Your positive action combined with positive thinking results in success. - Shiv Khera

2. All the armies of Europe, Asia and Africa combined, with all the treasure of the earth (our own excepted) in their military chest; with a Buonaparte for a commander, could not by force, take a drink from the Ohio, or make a track on the Blue Ridge, in a trial of a thousand years. - Abraham Lincoln

3. A multitude of causes unknown to former times are now acting with a combined force to blunt the discriminating powers of the mind, and unfitting it for all voluntary exertion to reduce it to a state of almost savage torpor. - William Wordsworth

4. I always tell people that religious institutions and political institutions should be separate. So while I'm telling people this, I myself continue with them combined. Hypocrisy! - Dalai Lama

5. I love playing ego and insecurity combined. - Jim Carrey

6. Talent and effort, combined with our various backgrounds and life experiences, has always been the lifeblood of our singular American genius. - Michelle Obama

7. Rapid population growth and technological innovation, combined with our lack of understanding about how the natural systems of which we are a part work, have created a mess. - David Suzuki

8. A life of stasis would be population control, combined with energy rationing. That is the stasis world that you live in if you stay. And even with improvements in efficiency, you'll still have to ration energy. That, to me, doesn't sound like a very exciting civilization for our grandchildren's grandchildren to live in. - Jeff Bezos.

#6 Jokes » Corn Jokes - I » Today 15:30:08

Jai Ganesh
Replies: 0

Q: Why didn't anyone laugh at the gardener's jokes?
A: Because they were too corny!
* * *
Q: How did the tomato court the corn?
A: He whispered sweet nothings into her ear.
* * *
Q: What did the corn say when he got complimented?
A: Aww, shucks!
* * *
Q: What does Chuck Norris do when he wants popcorn?
A: He breathes on Nebraska!
* * *
Q: What do you tell a vegetable after it graduates from College?
A: Corn-gratulations.
* * *

#7 Re: Jai Ganesh's Puzzles » General Quiz » Today 15:17:55

Hi,

#10737. What does the term in Geography Cryosphere mean?

#10738. What does the term in Geography Cryoturbation mean?

#8 Re: Jai Ganesh's Puzzles » English language puzzles » Today 15:06:30

Hi,

#5933. What does the adjective inflexible mean?

#5934. What does the verb (used with object) inflict mean?

#9 Re: Jai Ganesh's Puzzles » Doc, Doc! » Today 14:56:29

Hi,

#2563. What does the medical term Hallux rigidus mean?

#13 This is Cool » X-ray » Yesterday 20:05:12

Jai Ganesh
Replies: 0

X-ray

Gist

An X-ray is a quick, painless test that captures images of the structures inside the body — particularly the bones. X-ray beams pass through the body. These beams are absorbed in different amounts depending on the density of the material they pass through.

The full name for "X-ray" is X-radiation, referring to its nature as a form of high-energy electromagnetic radiation, with the 'X' signifying its unknown nature when first discovered by Wilhelm Conrad Röntgen in 1895.

In many languages, it's also called Röntgen radiation, honoring its discoverer. Through experimentation, he found that the mysterious light would pass through most substances but leave shadows of solid objects. Because he did not know what the rays were, he called them 'X' rays, with 'X' meaning 'unknown.'

Summary

An X-ray is a form of high-energy electromagnetic radiation with a wavelength shorter than those of ultraviolet rays and longer than those of gamma rays. Roughly, X-rays have a wavelength ranging from 10 nanometers to 10 picometers, corresponding to frequencies in the range of 30 petahertz to 30 exahertz (3×10^16 Hz to 3×10^19 Hz) and photon energies in the range of 100 eV to 100 keV, respectively.
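
Those ranges are tied together by f = c/λ and E = hc/λ. The sketch below checks them using standard physical constants (supplied by me, not quoted from the article):

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon(wavelength_m):
    """Frequency (Hz) and photon energy (eV) for a given wavelength."""
    freq = C / wavelength_m
    energy_ev = H * freq / EV
    return freq, energy_ev

print(photon(10e-9))    # 10 nm -> about 3e16 Hz and 124 eV (soft X-ray end)
print(photon(10e-12))   # 10 pm -> about 3e19 Hz and 124 keV (hard X-ray end)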

X-rays were discovered in 1895 by the German scientist Wilhelm Conrad Röntgen, who named it X-radiation to signify an unknown type of radiation.

X-rays can penetrate many solid substances such as construction materials and living tissue, so X-ray radiography is widely used in medical diagnostics (e.g., checking for broken bones) and materials science (e.g., identification of some chemical elements and detecting weak points in construction materials). However X-rays are ionizing radiation and exposure can be hazardous to health, causing DNA damage, cancer and, at higher intensities, burns and radiation sickness. Their generation and use is strictly controlled by public health authorities.

Details

X-rays are a way for healthcare providers to get pictures of the inside of your body. X-rays use radiation to create black-and-white images that a radiologist reads. Then, they send a report to your provider. X-rays are mostly known for looking at bones and joints. But providers can use them to diagnose other conditions, too.

Overview:

What is an X-ray?

An X-ray is a type of medical imaging that uses radiation to take pictures of the inside of your body. We often think of an X-ray as something that checks for broken bones. But X-ray images can help providers diagnose other injuries and diseases, too.

Many people think of X-rays as black-and-white, two-dimensional images. But modern X-ray technology is often combined with other technologies to make more advanced types of images.

Types of X-rays

Some specific imaging tests that use X-rays are:

* Bone density (DXA) scan: This test captures X-ray images while also checking the strength and mineral content of your bones.
* CT scan (computed tomography): CT scans use X-rays and computers to create 3D images of the inside of your body.
* Dental X-ray: A dental provider takes X-rays of your mouth to look for cavities or issues with your gums.
* Fluoroscopy: This test uses a series of X-rays to show the inside of your body in real time. Providers use it to help diagnose issues with specific body parts. They also use it to help guide certain procedures, like an angiogram.
* Mammogram: This is a special X-ray of your breasts that shows irregularities that could lead to breast cancer.

X-rays can help healthcare providers diagnose various conditions in your body. Some of the most common areas on your body to get an X-ray are:

* Abdominal X-ray: This X-ray helps providers evaluate parts of your digestive system and diagnose conditions like kidney stones and bladder stones.
* Bone X-ray: You might get a bone X-ray if your provider suspects you have a broken bone, dislocated joint or arthritis. Images from bone X-rays can also show signs of bone cancer or infection.
* Chest X-ray: Your provider might order a chest X-ray if you have symptoms like chest pain, shortness of breath or a cough. It can look for signs of infection in your lungs or congestive heart failure.
* Head X-ray: These can help providers see skull fractures from head injuries or conditions that affect how the bones in your skull form, like craniosynostosis.
* Spine X-ray: A provider can use a spine X-ray to look for arthritis and scoliosis.

Test Details:

How do X-rays work?

X-rays work by sending beams of radiation through your body to create images on an X-ray detector nearby. Radiation beams are invisible, and you can’t feel them.

As the beams go through your body, bones, soft tissues and other structures absorb radiation in different ways:

* Solid or dense tissues (like bones and tumors) absorb radiation easily, so they appear bright white on the image.
* Soft tissues (like organs, muscle and fat) don’t absorb radiation as easily, so they appear in shades of gray on the X-ray.
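
One simple way to model this differential absorption is the Beer-Lambert law, I = I0 · exp(-μx), where μ is an attenuation coefficient that is larger for dense material such as bone. The sketch below uses made-up illustrative coefficients, not clinical values.

import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of the incoming X-ray beam passing straight through a slab,
    using the Beer-Lambert law I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Purely illustrative attenuation coefficients (not clinical data):
print(transmitted_fraction(0.2, 5.0))   # soft tissue: most of the beam gets through (grayish)
print(transmitted_fraction(0.5, 5.0))   # bone: far less gets through, so it appears white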

A radiologist interprets the image and writes a report for the physician who ordered the X-ray. They make note of anything that’s abnormal or concerning. Then, your healthcare provider shares the results with you.

How do I prepare?

Preparation for an X-ray depends on the type of X-ray you’re getting. Your provider may ask you to:

* Remove metal objects like jewelry, hairpins or hearing aids (metal can interfere with X-rays and make the results inaccurate)
* Wear comfortable clothing or change into a gown before the X-ray

Tell your healthcare provider about your health history, allergies and any medications you’re taking. Let them know if you’re pregnant or think you could be.

What can I expect during an X-ray?

The exact steps of an X-ray depend on the kind you’re getting. In general, your provider will follow these steps during an X-ray:

* They’ll ask you to sit, stand or lie down on a table. In the past, your provider may have covered you with a lead shield (apron), but new evidence suggests that they aren’t necessary.
* Your provider will position the camera near the body part that they’re getting a picture of.
* Then, they’ll move your body or limbs in different positions and ask you to hold still. They may also ask you to hold your breath for a few seconds so the images aren’t blurry.

Sometimes, children can’t stay still long enough to produce clear images. Your child’s provider may recommend using a restraint during an X-ray. The restraint helps your child stay still and reduces the need for retakes. The restraints don’t hurt and won’t harm your child.

What happens after?

Most of the time, there aren’t any restrictions on what you can do after an X-ray. You can go back to your typical activities.

What are the risks or side effects of X-rays?

X-rays are safe and low risk.

X-rays use a safe and small amount of radiation — not much more than the naturally occurring radiation you get in your daily life. For instance, a dental X-ray exposes you to about the same amount of background radiation you’d get in one day.

X-ray radiation is usually safe for adults. But it can be harmful to a fetus. If you’re pregnant, your provider may choose another imaging test, like ultrasound.

Additional Information:

Overview

An X-ray is a quick, painless test that captures images of the structures inside the body — particularly the bones.

X-ray beams pass through the body. These beams are absorbed in different amounts depending on the density of the material they pass through. Dense materials, such as bone and metal, show up as white on X-rays. The air in the lungs shows up as black. Fat and muscle appear as shades of gray.

For some types of X-ray tests, a contrast medium — such as iodine or barium — is put into the body to get greater detail on the images.

Why it's done:

X-ray technology is used to examine many parts of the body.

Bones and teeth

* Fractures and infections. In most cases, fractures and infections in bones and teeth show up clearly on X-rays.
* Arthritis. X-rays of the joints can show evidence of arthritis. X-rays taken over the years can help your healthcare team tell if your arthritis is worsening.
* Dental decay. Dentists use X-rays to check for cavities in the teeth.
* Osteoporosis. Special types of X-ray tests can measure bone density.
* Bone cancer. X-rays can reveal bone tumors.

Chest

* Lung infections or conditions. Evidence of pneumonia, tuberculosis or lung cancer can show up on chest X-rays.
* Breast cancer. Mammography is a special type of X-ray test used to examine breast tissue.
* Enlarged heart. This sign of congestive heart failure shows up clearly on X-rays.
* Blocked blood vessels. Injecting a contrast material that contains iodine can help highlight sections of the circulatory system so they can be seen easily on X-rays.

Abdomen

* Digestive tract issues. Barium, a contrast medium delivered in a drink or an enema, can help show problems in the digestive system.
* Swallowed items. If a child has swallowed something such as a key or a coin, an X-ray can show the location of that object.

Risks:

Radiation exposure

Some people worry that X-rays aren't safe. This is because radiation exposure can cause cell changes that may lead to cancer. The amount of radiation you're exposed to during an X-ray depends on the tissue or organ being examined. Sensitivity to the radiation depends on your age, with children being more sensitive than adults.

Generally, however, radiation exposure from an X-ray is low, and the benefits from these tests far outweigh the risks.

However, if you are pregnant or suspect that you may be pregnant, tell your healthcare team before having an X-ray. Though most diagnostic X-rays pose only small risk to an unborn baby, your care team may decide to use another imaging test, such as ultrasound.

Contrast medium

In some people, the injection of a contrast medium can cause side effects such as:

* A feeling of warmth or flushing.
* A metallic taste.
* Lightheadedness.
* Nausea.
* Itching.
* Hives.

Rarely, severe reactions to a contrast medium occur, including:

* Very low blood pressure.
* Difficulty breathing.
* Swelling of the throat or other parts of the body.

How you prepare:

Different types of X-rays require different preparations. Ask your healthcare team to provide you with specific instructions.

What to wear

In general, you undress whatever part of your body needs examination. You may wear a gown during the exam depending on which area is being X-rayed. You also may be asked to remove jewelry, eyeglasses and any metal objects because they can show up on an X-ray.

Contrast material

Before having some types of X-rays, you're given a liquid called contrast medium. Contrast mediums, such as barium and iodine, help outline a specific area of your body on the X-ray image. You may swallow the contrast medium or receive it as an injection or an enema.

What you can expect:

During the X-ray

X-rays are performed at medical offices, dentists' offices, emergency rooms and hospitals — wherever an X-ray machine is available. The machine produces a safe level of radiation that passes through the body and records an image on a specialized plate. You can't feel an X-ray.

A technologist positions your body to get the necessary views. Pillows or sandbags may be used to help you hold the position. During the X-ray exposure, you remain still and sometimes hold your breath to avoid moving so that the image doesn't blur.

An X-ray procedure may take just a few minutes for a simple X-ray or longer for more-involved procedures, such as those using a contrast medium.

Your child's X-ray

If a young child is having an X-ray, restraints or other tools may be used to keep the child still. These won't harm the child and they prevent the need for a repeat procedure, which may be necessary if the child moves during the X-ray exposure.

You may be allowed to remain with your child during the test. If you remain in the room during the X-ray exposure, you'll likely be asked to wear a lead apron to shield you from unnecessary X-ray exposure.

After the X-ray

After an X-ray, you generally can resume usual activities. Routine X-rays usually have no side effects. However, if you're given contrast medium before your X-ray, drink plenty of fluids to help rid your body of the contrast. Call your healthcare team if you have pain, swelling or redness at the injection site. Ask your team about other symptoms to watch for.

Results:

X-rays are saved digitally on computers and can be viewed on-screen within minutes. A radiologist typically views and interprets the results and sends a report to a member of your healthcare team, who then explains the results to you. In an emergency, your X-ray results can be made available in minutes.

#14 Re: Dark Discussions at Cafe Infinity » crème de la crème » Yesterday 18:35:44

2427) Ilya Frank

Gist:

Work

In certain media the speed of light is lower than in a vacuum and particles can travel faster than light. One result of this was discovered in 1934 by Pavel Cherenkov, when he saw a bluish light around a radioactive preparation placed in water. Ilya Frank and Igor Tamm explained the phenomenon in 1937. On their way through a medium, charged particles disturb electrons in the medium. When these resume their position, they emit light. Normally this does not produce any light that can be observed, but if the particle moves faster than light, a kind of backwash of light appears.

Summary

Ilya Mikhaylovich Frank (born October 10 [October 23, New Style], 1908, St. Petersburg, Russia—died June 22, 1990, Moscow, Russia, U.S.S.R.) was a Soviet winner of the Nobel Prize for Physics in 1958 jointly with Pavel A. Cherenkov and Igor Y. Tamm, also of the Soviet Union. He received the award for explaining the phenomenon of Cherenkov radiation.

After graduating from Moscow State University in 1930, Frank worked at the Leningrad Optical Institute. He returned to Moscow to work at the P.N. Lebedev Physical Institute (1934–70) and from 1940 was a professor at Moscow State University.

In 1937 Frank and Tamm provided the theoretical explanation of Cherenkov radiation, an effect discovered by Cherenkov in 1934 in which light is emitted when charged particles travel through an optically transparent medium at speeds greater than the speed of light in that medium. The effect led to the development of Cherenkov counters for detecting and measuring the velocity of high-speed particles, allowing discoveries of new elementary particles such as the antiproton.

Frank later worked on theoretical and experimental nuclear physics and the design of reactors, and from 1957 he headed the neutron laboratory at the Joint Institute for Nuclear Research in Dubna. In 1946 Frank was elected a corresponding member, and in 1968 a full member, of the U.S.S.R. Academy of Sciences.

Details

Ilya Mikhailovich Frank (23 October 1908 – 22 June 1990) was a Soviet physicist who received the 1958 Nobel Prize in Physics, jointly with Pavel Alekseyevich Cherenkov and Igor Y. Tamm, also of the Soviet Union. He received the award for his work in explaining the phenomenon of Cherenkov radiation. He received the Stalin Prize in 1946 and 1953 and the USSR State Prize in 1971.

Life and career

Ilya Frank was born on 23 October 1908 in St. Petersburg. His father, Mikhail Lyudvigovich Frank, was a talented mathematician descended from a Jewish family, while his mother, Yelizaveta Mikhailovna Gratsianova, was a Russian Orthodox physician. His father participated in the student revolutionary movement, and as a result was expelled from Moscow University. After the October Revolution, he was reinstated and appointed professor. Ilya's uncle, Semyon Frank, a philosopher, was expelled from Soviet Russia in 1922 together with 160 other intellectuals. Ilya had one elder brother, Gleb Mikhailovich Frank, who became an eminent biophysicist and member of the Academy of Sciences of the USSR.

Ilya Frank studied mathematics and theoretical physics at Moscow State University. From his second year he worked in the laboratory of Sergey Ivanovich Vavilov, whom he regarded as his mentor. After graduating in 1930, on the recommendation of Vavilov, he started working at the State Optical Institute in Leningrad. There he wrote his first publication, about luminescence, with Vavilov. The work he did there would form the basis of his doctoral dissertation in 1935.

In 1934, Frank moved to the Institute of Physics and Mathematics of the USSR Academy of Sciences (which shortly would be moved to Moscow, where it was transformed into the Institute of Physics). Here he started working on nuclear physics, a new field for him. He became interested in the effect discovered by Pavel Cherenkov, that charged particles moving through water at high speeds emit light. Together with Igor Tamm, he developed a theoretical explanation: the effect occurs when charged particles travel through an optically transparent medium at speeds greater than the speed of light in that medium, causing a shock wave in the electromagnetic field. The amount of energy radiated in this process is given by the Frank–Tamm formula.
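
A hedged numerical sketch of that condition: Cherenkov light is emitted only when the particle speed v exceeds c/n, and the light then comes off at an angle θ with cos θ = c/(v·n) = 1/(βn). The Frank–Tamm formula itself gives the radiated energy per unit path length and frequency; the snippet below only checks the threshold and the emission angle, and the refractive index n = 1.33 for water is a value I am assuming, not one quoted above.

import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle (degrees) for a particle with speed beta = v/c
    in a medium of refractive index n; returns None below the threshold beta = 1/n."""
    if beta * n <= 1.0:
        return None                          # too slow: no Cherenkov light
    return math.degrees(math.acos(1.0 / (beta * n)))

N_WATER = 1.33                               # assumed refractive index of water
print(cherenkov_angle_deg(0.70, N_WATER))    # below threshold (1/1.33 ~ 0.75): None
print(cherenkov_angle_deg(0.99, N_WATER))    # about 40.6 degrees, the characteristic blue cone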

The discovery and explanation of the effect resulted in the development of new methods for detecting and measuring the velocity of high-speed nuclear particles and became of great importance for research in nuclear physics. Cherenkov radiation is also widely used in biomedical research for detection of radioactive isotopes. In 1946, Cherenkov, Vavilov, Tamm, and Frank were awarded a Stalin Prize for their discovery, and in 1958 Cherenkov, Tamm, and Frank received the Nobel Prize in Physics.

In 1944, Frank was appointed professor and became head of a department at the Institute of Physics and of the Nuclear Physics Laboratory (which was later transferred to the Institute of Nuclear Research). Frank's laboratory was involved in the (then secret) study of nuclear reactors. In particular, they studied the diffusion and thermalization of neutrons.

In 1957, Frank also became director of the Laboratory of Neutron Physics at the Joint Institute for Nuclear Research. The laboratory was based on the neutron fast-pulse reactor (IBR) then under construction at the site. Under Frank's supervision the reactor was used in the development of neutron spectroscopy techniques.

Personal life and death

Frank married the noted historian Ella Abramovna Beilikhis in 1937. Their son, Alexander, was born in the same year and would go on to continue much of his father's work as a physicist.

Frank died on 22 June 1990 in Moscow at the age of 81.

#15 Re: This is Cool » Miscellany » Yesterday 18:20:25

2489) Specific Gravity

Gist

Specific gravity (SG) is the ratio of a substance's density to the density of a reference material, usually water at 4°C for liquids/solids and air for gases, telling you how much denser or lighter a substance is compared to the reference. It's a dimensionless number (no units), where SG > 1 means the substance sinks (denser than water) and SG < 1 means it floats (lighter). It's used in industries to check purity, identify materials, and determine buoyancy. 

Specific gravity tells you whether something is floating or sinking in water. A specific gravity below 1 means that the sample is less dense (lighter) than water and will, therefore, float. For example, an oil with a specific gravity of 0.825 will float on water.

Summary

Relative density, also called specific gravity, is a dimensionless quantity defined as the ratio of the density (mass divided by volume) of a substance to the density of a given reference material. Specific gravity for solids and liquids is nearly always measured with respect to water at its densest (at 4 °C or 39.2 °F); for gases, the reference is air at room temperature (20 °C or 68 °F). The term "relative density" (abbreviated r.d. or RD) is preferred in SI, whereas the term "specific gravity" is gradually being abandoned.

If a substance's relative density is less than 1 then it is less dense than the reference; if greater than 1 then it is denser than the reference. If the relative density is exactly 1 then the densities are equal; that is, equal volumes of the two substances have the same mass. If the reference material is water, then a substance with a relative density (or specific gravity) less than 1 will float in water. For example, an ice cube, with a relative density of about 0.91, will float. A substance with a relative density greater than 1 will sink.

Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm (101.325 kPa). Where it is not, it is more usual to specify the density directly. For specific gravity, the reference temperature for water is often 4 °C (39.2 °F) because it's the point where water is densest (1 g/cc), but 15 °C (59 °F), 15.6 °C (60 °F), or 20 °C (68 °F) are also common standards, depending on the industry (like brewing or petroleum). In British brewing practice, the specific gravity, as specified above, is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, must weight (syrups, juices, honeys, brewers wort, must, etc.) and acids.

Details

Specific gravity, also known as relative density, measures the density of a substance compared to the density of water.

Specific gravity is related to density, as both are physical properties used to determine how "dense" a particular material is. This material can be a gas, liquid, or solid.

Specific gravity and density are used to identify a material, determine the concentration of a liquid solution (e.g., alcohol or sugar in a drink), or test whether a product is within specifications.

1. What Are Density and Specific Gravity?

The density of a sample is defined as its mass divided by its volume.

Specific gravity, also known as relative density, is used to describe the density of a substance compared to the density of water. To calculate specific gravity, divide the sample's density by the density of water.

2. What Is the Difference Between Specific Gravity and Density?

The main difference between specific gravity and density is that specific gravity is dimensionless (it has no units), while density is expressed in units such as g/cc or kg/m³.

Specific gravity tells you whether something is floating or sinking in water. A specific gravity below 1 means that the sample is less dense (lighter) than water and will, therefore, float. For example, an oil with a specific gravity of 0.825 will float on water.

3. How to Convert Specific Gravity to Density?

Density of the substance = Specific gravity of the substance × Density of water.
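
As a quick illustration of this conversion, here is a minimal Python sketch (not from the original text; the function names and the water-density constant are chosen here only for convenience):

```python
# Minimal sketch of the specific gravity <-> density conversion described above.
# Water at 4 °C is taken as the reference, with a density of 1.000 g/cc.

WATER_DENSITY_G_PER_CC = 1.000  # reference density of water at 4 °C

def specific_gravity(density_g_per_cc):
    """Specific gravity = density of the substance / density of water (dimensionless)."""
    return density_g_per_cc / WATER_DENSITY_G_PER_CC

def density_from_specific_gravity(sg):
    """Density of the substance = specific gravity x density of water."""
    return sg * WATER_DENSITY_G_PER_CC

# Example: the oil with specific gravity 0.825 mentioned above.
oil_density = density_from_specific_gravity(0.825)   # 0.825 g/cc
print(f"Oil density: {oil_density:.3f} g/cc")
print("The oil floats on water." if specific_gravity(oil_density) < 1 else "The oil sinks.")
```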

4. How Does the Temperature of Samples Affect the Specific Gravity and Density?

The temperature of a sample affects both its density and its specific gravity. As the temperature increases, the volume increases and the density therefore decreases; the mass of the substance, however, does not change with temperature.

The most notable exception to this rule is liquid water, which reaches its maximum density at 3.98 °C. Above this temperature, warming expands the water and lowers its density; below it, cooling toward the freezing point also causes the water to expand slightly and become less dense.

Since specific gravity is the density of the sample divided by the density of water, both densities will decrease with increasing temperature, but not by the same magnitude. The effect of the temperature will usually be slightly less important for specific gravity than for density.

5. Which Instruments Are Used to Measure Specific Gravity and Density?

There are different types of instruments to measure both specific gravity and density:

Hydrometer

A hydrometer is a cost-effective instrument used to determine the specific gravity/density of liquids. Made of blown glass, it consists of a bulbous bottom weighted with lead or steel shot and a long, narrow stem with a scale. The hydrometer is immersed into the sample liquid until it floats. The density reading is taken by looking at the scale, where the level of the sample liquid aligns with a marking on the hydrometer scale. Most hydrometers measure the specific gravity of samples: in simple terms, a hydrometer tells the user if a liquid is denser or less dense than water. It will float higher in a liquid with a greater specific gravity, such as water with sugar dissolved, compared to one with a lower specific gravity, such as pure water or alcohol.

When using a hydrometer, the user has two options:

a) Use the hydrometer at its calibration temperature (usually 16 °C or 20 °C). Depending on the sample volume, it can take some time for the sample to reach this temperature.
b) Simply record the measurement value at the surrounding temperature. Both measurement and temperature values must be recorded. If needed, a correction factor can be applied later to obtain the temperature-corrected measurement value.

Pycnometer

Typically made of glass, a pycnometer is a flask of a pre-defined volume used to measure the specific gravity/density of a liquid. It can also determine the specific gravity/density of dispersions, solids, and even gases.

A thermometer is also required to measure the temperature. User training is required to guarantee accurate measurements with the pycnometer.

Portable digital density meter

Portable digital density meters are used to quickly and accurately determine the specific gravity/density of liquids. Determination of density or specific gravity using digital meters is based on two factors:

a) The oscillation, or vibration, of a U-shaped glass tube (U-tube).
b) The relationship between the mass of the liquid sample and the frequency of oscillation of the U-tube. Filling the U-tube with sample liquid changes its frequency of oscillation; thanks to factory adjustment with samples of known densities, this frequency can be directly correlated with the density of any liquid sample to an accuracy of 0.001 g/cm³, or a specific gravity to an accuracy of 0.001 (a simplified numerical sketch of this relationship follows below).

Handheld digital density meters measure the sample at ambient temperature. If a result is needed at a certain temperature, the meter can apply a correction factor to compensate the measured value to that defined temperature. Each measurement takes only a few seconds, allowing users to move on to the next sample quickly. The measured density can be automatically converted into other units and concentrations for specific applications, such as specific gravity, API gravity, alcohol %, °Brix, etc.
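
The sketch below illustrates the U-tube idea numerically. It is only an assumption-laden illustration: a commonly used simplified model relates density to the square of the oscillation period, ρ = A·τ² + B, and the calibration constants A and B here are invented for the example (real instruments determine them during factory adjustment with samples of known density).

```python
# Illustrative only: simplified oscillating U-tube model, rho = A * tau**2 + B.
# The calibration constants A and B below are hypothetical; real density meters
# determine them at the factory using samples of known density (e.g., air and water).

A = 0.52e6   # g/cm^3 per s^2 (hypothetical calibration constant)
B = -1.30    # g/cm^3         (hypothetical calibration constant)

def density_from_period(tau_seconds):
    """Convert the measured oscillation period of the U-tube to density in g/cm^3."""
    return A * tau_seconds**2 + B

tau_measured = 2.10e-3            # hypothetical measured period, in seconds
rho = density_from_period(tau_measured)
print(f"Estimated density:  {rho:.3f} g/cm^3")
print(f"Specific gravity:   {rho / 1.000:.3f}")  # relative to water at 1.000 g/cm^3
```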

Benchtop digital density meter

Benchtop digital density meters use the same technology as portable digital density meters: the oscillation of a U-shaped glass tube (U-tube). In addition, they feature built-in Peltier temperature control, which brings the sample to the selected temperature (e.g., 20 °C). The temperature control can range from 0 °C to 95 °C. These density meters can reach an accuracy of 0.000005 g/cm³ for density, or 0.000005 for specific gravity.

Some benchtop digital density meters can be connected to sample automation solutions for single or multiple samples, which offer automated sampling, rinsing, and drying. These density meters can often be upgraded into a dedicated automated multi-parameter system combining density, refractive index, pH, color, conductivity, and more to save time, increase data quality, and prevent any alteration of samples between individual analyses.

One of the benefits of digital density meters using the U-shaped glass tube is the small volume of sample required (typically 1.5 mL), which allows for a faster temperature equilibrium of the sample.

Additional Information

Relative density is the ratio of the density (mass per unit volume) of a substance to the density of a given reference material (usually water). It is usually measured at room temperature (20 °C) and standard atmospheric pressure (101.325 kPa). It is unitless. You can often find it in Section 9 of a safety data sheet (SDS).

Regulatory Implications of Relative Density

Relative density is often used to calculate the volume or weight of a sample needed to prepare a solution of a specified concentration. It also helps us understand the environmental distribution of insoluble substances (e.g., an oil spill) in aquatic ecosystems (on the water surface or in bottom sediment) if the substance is released into water.

A relative density test is not required for every chemical. Under REACH, the study does not need to be conducted if:

* the substance is only stable in solution in a particular solvent and the solution density is similar to that of the solvent. In such cases, an indication of whether the solution density is higher or lower than the solvent density is sufficient, or
* the substance is a gas. In this case, an estimation based on calculation shall be made from its molecular weight and the Ideal Gas Law.

(REACH: REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals, a comprehensive European Union regulation that governs the production and use of chemical substances to protect human health and the environment. It requires companies to register chemical substances, assess their risks, and manage them safely, placing the responsibility on industry to ensure chemical safety throughout the supply chain.)


#16 Dark Discussions at Cafe Infinity » Combine Quotes - II » Yesterday 17:34:19

Jai Ganesh
Replies: 0

Combine Quotes - II

1. A walk in nature is a perfect backdrop to combine exercise, prayer, and meditation while enhancing the benefit of these activities. - Chuck Norris

2. When bad men combine, the good must associate; else they will fall one by one, an unpitied sacrifice in a contemptible struggle. - Edmund Burke

3. I think no matter what you do you go through stages when you play. There was a number of times when I didn't do very well or was tired. It was too much to combine school and tennis altogether. Parents need to step in and say, take a little time off, do something fun. - Jana Novotna

4. I try to combine in my paintings cinematic feeling, emotional feeling, and sometimes actually writing on the page to combine all the different elements of communication. - Sylvester Stallone

5. When two elements combine and form more than one compound, the masses of one element that react with a fixed mass of the other are in the ratio of small whole numbers. - Humphry Davy

6. I've always loved to combine different scents to come up with my own unique thing. - Jennifer Aniston

7. In PhD, my topic was Stage Techniques in Sanskrit Drama - theory and practice. I wanted to combine my drama training with Sanskrit drama, which has a very rich history in literature. - Neena Gupta

8. I like making different recipes of my own, I love making food, I love learning about what food to combine with what other food. - Pooja Batra.

#17 Science HQ » Dispersion » Yesterday 17:06:57

Jai Ganesh
Replies: 0

Dispersion

Gist

Dispersion of light is the phenomenon where white light splits into its seven constituent colors (VIBGYOR: Violet, Indigo, Blue, Green, Yellow, Orange, Red) as it passes through a transparent medium, like a glass prism or water droplets. This occurs because different colors (wavelengths) of light travel at different speeds in the medium, causing them to bend, or refract, at slightly different angles, creating a spectrum.

In physics, dispersion is the phenomenon where a wave (like light, sound, or water waves) splits into its constituent frequencies or wavelengths, causing them to travel at different speeds, most famously seen as white light separating into a rainbow spectrum (VIBGYOR) when passing through a prism or water droplet. This happens because the refractive index or phase velocity of the medium changes with the wave's frequency, meaning different colors bend or travel at different rates, separating from the original beam. 

Summary

Dispersion, in wave motion, is any phenomenon associated with the propagation of individual waves at velocities that depend on their wavelengths.

Ocean waves in deep water, for example, move at speeds proportional to the square root of their wavelengths; these speeds vary from a few meters per second for ripples to hundreds of kilometers per hour for tsunamis. (When ocean waves come closer to land in shallow water, the waves are nondispersive and move at a constant speed equal to the square root of the acceleration due to gravity times the depth of the water.)
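
As a rough numerical illustration of these two regimes (a sketch, not part of the original text): the deep-water phase speed is c = √(gλ/2π), proportional to the square root of the wavelength λ, while the shallow-water speed is the nondispersive c = √(gd) for depth d.

```python
import math

g = 9.81  # acceleration due to gravity, m/s^2

def deep_water_speed(wavelength_m):
    """Phase speed of a deep-water wave: c = sqrt(g * wavelength / (2*pi)),
    proportional to the square root of the wavelength."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def shallow_water_speed(depth_m):
    """Nondispersive shallow-water speed: c = sqrt(g * depth)."""
    return math.sqrt(g * depth_m)

print(f"Deep water, 1 m wavelength:    {deep_water_speed(1.0):.1f} m/s")    # ~1.2 m/s
print(f"Deep water, 100 m wavelength:  {deep_water_speed(100.0):.1f} m/s")  # ~12.5 m/s
print(f"Shallow water, 10 m depth:     {shallow_water_speed(10.0):.1f} m/s")
```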

In a vacuum, a wave of light has a defined speed, but in a transparent medium that speed varies inversely with the index of refraction (a measure of the angle by which the direction of a wave is changed as it moves from one medium into another). Any transparent medium—e.g., a glass prism—will cause an incident parallel beam of light to fan out according to the refractive index of the glass for each of the component wavelengths, or colors. This effect also causes rainbows, in which sunlight entering raindrops is spread out into its different wavelengths before it is reflected. This separation of light into colors is called angular dispersion or sometimes chromatic dispersion.

Chromatic dispersion is the change of index of refraction with wavelength. Generally the index decreases as wavelength increases, so blue light travels more slowly in the material than red light. Dispersion is the phenomenon that gives rise to the separation of colors in a prism. It also produces the generally undesirable chromatic aberration in lenses. Usually the dispersion of a material is characterized by measuring the index at the blue F line of hydrogen (486.1 nm), the yellow sodium D lines (589.3 nm), and the red hydrogen C line (656.3 nm).

Details

Dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency. Sometimes the term chromatic dispersion is used to refer to optics specifically, as opposed to wave propagation in general. A medium having this common property may be termed a dispersive medium.

Although the term is used in the field of optics to describe light and other electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as acoustic dispersion in the case of sound and seismic waves, and in gravity waves (ocean waves). Within optics, dispersion is a property of telecommunication signals along transmission lines (such as microwaves in coaxial cable) or the pulses of light in optical fiber.

In optics, one important and familiar consequence of dispersion is the change in the angle of refraction of different colors of light, as seen in the spectrum produced by a dispersive prism and in chromatic aberration of lenses. Design of compound achromatic lenses, in which chromatic aberration is largely cancelled, uses a quantification of a glass's dispersion given by its Abbe number V, where lower Abbe numbers correspond to greater dispersion over the visible spectrum. In some applications such as telecommunications, the absolute phase of a wave is often not important but only the propagation of wave packets or "pulses"; in that case one is interested only in variations of group velocity with frequency, so-called group-velocity dispersion.
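
For example (an illustrative sketch, not from the article), the Abbe number can be computed from refractive indices measured at the F, d (or D), and C lines mentioned earlier; the index values below are approximate figures for a typical crown glass and are assumptions used only to show the calculation.

```python
def abbe_number(n_d, n_F, n_C):
    """Abbe number V_d = (n_d - 1) / (n_F - n_C).
    A lower Abbe number means greater dispersion over the visible spectrum."""
    return (n_d - 1.0) / (n_F - n_C)

# Approximate indices for a typical crown glass (illustrative values):
#   F line (486.1 nm), d/D line (~589 nm), C line (656.3 nm)
n_F, n_d, n_C = 1.5224, 1.5168, 1.5143
print(f"Abbe number V_d = {abbe_number(n_d, n_F, n_C):.1f}")  # roughly 64, i.e. low dispersion
```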

All common transmission media also vary in attenuation (normalized to transmission length) as a function of frequency, leading to attenuation distortion; this is not dispersion, although sometimes reflections at closely spaced impedance boundaries (e.g. crimped segments in a cable) can produce signal distortion which further aggravates inconsistent transit time as observed across signal bandwidth.

Examples

Dispersion causes a rainbow's spatial separation of a white light into components of different wavelengths (different colors). However, dispersion also has an effect in many other circumstances: for example, group-velocity dispersion causes pulses to spread in optical fibers, degrading signals over long distances; also, a cancellation between group-velocity dispersion and nonlinear effects leads to soliton waves.

Material and waveguide dispersion

Most often, chromatic dispersion refers to bulk material dispersion, that is, the change in refractive index with optical frequency. However, in a waveguide there is also the phenomenon of waveguide dispersion, in which case a wave's phase velocity in a structure depends on its frequency simply due to the structure's geometry. More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure (e.g., a photonic crystal), whether or not the waves are confined to some region. In a waveguide, both types of dispersion will generally be present, although they are not strictly additive. For example, in fiber optics the material and waveguide dispersion can effectively cancel each other out to produce a zero-dispersion wavelength, important for fast fiber-optic communication.

Additional Information

A rainbow shining against a gloomy, stormy sky is a sight that everyone loves. How does sunshine shining through pure raindrops produce the rainbow of colors observed? A transparent glass prism or a diamond breaks white light into colors by the same mechanism. There are six main colors in a rainbow—red, orange, yellow, green, blue, and violet; indigo is often identified as well.

Specific wavelengths of light are correlated with certain colors. When we view pure-wavelength light, we expect to see only one of these colors, depending on the wavelength. The thousands of other colors we can detect under other conditions are produced by our eye's response to combinations of various wavelengths. White light, in fact, is a fairly uniform mixture of all visible wavelengths.

Because of its particular mix of wavelengths, sunlight, though nominally white, tends to look a little yellow, but it does include all visible wavelengths. The colors in a rainbow appear in the same order as the colors plotted against wavelength, which means the white light is being spread out according to wavelength. This spreading of white light is known as dispersion. More precisely, dispersion happens whenever a mechanism changes the direction of light in a wavelength-dependent way. Dispersion can occur with any form of wave and is often associated with wavelength-dependent processes.

What is a White Light?

You may have noticed that the light you see when you look toward the sun and the surrounding sky appears white, but it is not really a single color: it is a mixture of several colors with different wavelengths and frequencies arriving at the same spot. The complete blend of all the wavelengths of the visible spectrum is known as white light.

Natural sources of white light include the Sun and other stars; within the solar system, the Sun is the source of white light. Artificial white light can be created with LED and fluorescent light bulbs.

What is the Visible Light Spectrum?

Visible light waves are one of the significant forms of electromagnetic waves, just like X-rays, infrared radiation, UV rays, and microwaves. These waves can be visualized as the colors of the rainbow, with each color possessing a different wavelength. The wavelength of red is the longest, while that of violet is the shortest.

White light is formed when all these waves are seen together. As white light passes through a prism or lens, it splits into the colors of the visible light spectrum. The visible light spectrum is the portion of the electromagnetic spectrum that we can see with the naked eye. The human eye can only see light whose wavelength lies between about 380 and 740 nm; in terms of frequency, this corresponds to roughly 405 to 790 THz.

Dispersion

The phenomenon of splitting of visible light into its component colors is called dispersion. Dispersion occurs because light of each wavelength travels at a different speed in the medium and is therefore deviated (refracted) by a different amount.

When white light is incident on a glass prism, the emergent light appears multicolored (violet, indigo, blue, green, yellow, orange, and red). Red light bends the least, while violet light bends the most. Dispersion is the process of light breaking into its constituent colors, and the resulting band of color components is called the spectrum.

When white light falls on a dispersive surface, it is dispersed into several colors according to the wavelength (or, equivalently, the frequency, since frequency and wavelength are inversely proportional to each other). Each color has its own wavelength and frequency, so we see different colors emerge from the same white light.

Causes of the Dispersion of Light

* The various degrees of refraction produced by different colors of light cause dispersion. In a vacuum, all colors of light travel at the same speed, but in a refracting medium they travel at different speeds.
* Violet light travels more slowly in the medium than red light, so violet light has the highest refractive index in the visible spectrum and red light the lowest. Consequently, violet light refracts (bends) the most, while red light refracts the least.
* The dispersion of white light into its constituent colors as it emerges from a prism is caused by this disparity in the degree of bending of the various colors of light (a numerical sketch follows below).
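
A minimal numerical sketch of the bending difference described in this list, using Snell's law at a single air-to-glass surface. The refractive indices for red and violet light are approximate values for a typical crown glass and are assumptions used only for illustration.

```python
import math

# Snell's law at an air-to-glass surface: sin(theta_i) = n * sin(theta_t).
# Approximate crown-glass indices (illustrative): red bends less, violet more.
n_red, n_violet = 1.513, 1.532
incidence_deg = 45.0

def refraction_angle(n, incidence_deg):
    """Angle of the refracted ray inside the glass, measured from the normal."""
    theta_i = math.radians(incidence_deg)
    return math.degrees(math.asin(math.sin(theta_i) / n))

print(f"Red light refracts to    {refraction_angle(n_red, incidence_deg):.2f} deg from the normal")
print(f"Violet light refracts to {refraction_angle(n_violet, incidence_deg):.2f} deg (bent more)")
```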

Examples of Dispersion of Light

* Dispersion of white light through a prism: When white light falls on a prism, a band of seven colors emerges from the prism due to dispersion.
* Dispersion due to Oil on Road: Small amounts of oil, e.g. lubricating oil from automobiles, are usually present on the road surface and give rise to bands of beautiful colors when it rains.
* Formation of Rainbow: A rainbow is considered to be one of the most amazing light displays seen on the planet. A rainbow is a multicolored arc formed by light striking water droplets. Rainbows are formed during rain by the refraction, internal reflection, and dispersion of light in water droplets. Together, these phenomena produce a spectrum of light in the sky, which is known as a rainbow.
* Dispersion in a Diamond: Diamond dispersion is where white light enters a diamond (or any dense object), separates into all the spectral colors of the rainbow, and bounces back to the viewer’s eyes in a wonderful display of colored light, also known as diamond fire.

Rainbow Formation

A rainbow is formed of seven colors (VIBGYOR): Violet, Indigo, Blue, Green, Yellow, Orange, Red. During rain, falling water droplets act like tiny prisms: when sunlight enters a droplet, the light is refracted, dispersed into its component colors, and reflected back, forming a rainbow (and sometimes multiple rainbows are visible). Each droplet is essentially a small sphere of water with the refractive index of water (about 1.333), which causes the white light to be dispersed into the band of colors we call a rainbow.

Therefore, the necessary conditions for the formation of a rainbow are the presence of water droplets or raindrops in the air and the Sun being behind the observer.


#18 Re: Jai Ganesh's Puzzles » General Quiz » Yesterday 15:55:54

Hi,

#10735. What does the term in Geography Crossroads mean?

#10736. What does the term in Geography Crust (geology) mean?

#19 Re: Jai Ganesh's Puzzles » English language puzzles » Yesterday 15:36:36

Hi,

#5931. What does the verb (used with object) infringe mean?

#5932. What does the adjective ingenious mean?

#20 Re: Jai Ganesh's Puzzles » Doc, Doc! » Yesterday 15:23:33

Hi,

#2562. What does the medical term Gustatory cortex mean?

#21 Jokes » Cookie Jokes - II » Yesterday 15:14:41

Jai Ganesh
Replies: 0

Q: Why did the Oreo go to the dentist?
A: Because it lost its filling!
* * *
Q: What does the ginger bread man put on his bed?
A: A cookie sheet.
* * *
Q: What kind of keys do kids like to carry?
A: Cookies!
* * *
Q: What is a monkey's favorite cookie?
A: Chocolate chimp!
* * *
Q: What word backwards can predict the future?
A: Cookies ("seikooc" sounds like "psychic" if you say it).
* * *

#25 This is Cool » Intelligence Quotient » 2026-02-06 22:26:37

Jai Ganesh
Replies: 0

Intelligence Quotient

Gist

An Intelligence Quotient (IQ) is a standardized, numerical score derived from tests designed to measure human cognitive abilities—such as reasoning, logic, memory, and problem-solving—relative to a peer group. Modern IQ scores are calculated using a normal distribution (bell curve) with a mean of 100 and a standard deviation of 15, meaning ~68% of the population scores between 85 and 115.

IQ (Intelligence Quotient) is a score from standardized tests measuring cognitive abilities, originally calculated by dividing a person's mental age (MA) by their chronological age (CA) and multiplying by 100: IQ = (MA / CA) × 100; modern tests instead use statistical norms with a mean of 100. It assesses logic, memory, problem-solving, and pattern recognition, with average scores around 100, while scores below 70 suggest very low measured intelligence and scores above 129 indicate giftedness.

Summary

IQ (from “intelligence quotient”) is a number used to express the relative intelligence of a person, derived from any of a number of standardized intelligence tests.

IQ was originally computed by taking the ratio of mental age to chronological (physical) age and multiplying by 100. Thus, if a 10-year-old child had a mental age of 12 (that is, performed on the test at the level of an average 12-year-old), the child was assigned an IQ of 12/10 × 100, or 120. If the 10-year-old had a mental age of 8, the child’s IQ would be 8/10 × 100, or 80. Based on this calculation, a score of 100—where the mental age equals the chronological age—would be average. Few tests continue to involve the computation of mental ages.

Details

An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence. Originally, IQ was a score obtained by dividing a person's estimated mental age, obtained by administering an intelligence test, by the person's chronological age. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score. For modern IQ tests, the raw score is transformed to a normal distribution with mean 100 and standard deviation 15. This results in approximately two-thirds of the population scoring between IQ 85 and IQ 115 and about 2 percent each above 130 and below 70.
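
A small Python sketch (an illustration, not part of the article) of the two calculations just described: the historical ratio IQ, and the share of a normal distribution with mean 100 and standard deviation 15 that falls between 85 and 115.

```python
import math

def ratio_iq(mental_age, chronological_age):
    """Historical ratio IQ: (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100.0

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

print(ratio_iq(12, 10))                                  # 120.0, the example in the summary
share = normal_cdf(115) - normal_cdf(85)
print(f"Share scoring between 85 and 115: {share:.1%}")  # about 68%
print(f"Share scoring above 130:          {1 - normal_cdf(130):.1%}")  # about 2%
```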

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence". IQ scores have been shown to be associated with factors such as nutrition, parental socioeconomic status, morbidity and mortality, parental social status, and perinatal environment. While the heritability of IQ has been studied for nearly a century, there is still debate over the significance of heritability estimates and the mechanisms of inheritance. The best estimates for heritability range from 40 to 60% of the variance between individuals in IQ being explained by genetics.

IQ scores are used for educational placement, assessment of intellectual ability, and evaluating job applicants. In research contexts, they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform research on human intelligence.

Historically, many proponents of IQ testing have been eugenicists who used pseudoscience to push later debunked views of racial hierarchy in order to justify segregation and oppose immigration. Such views have been rejected by a strong consensus of mainstream science, though fringe figures continue to promote them in pseudo-scholarship and popular culture.

Additional Information

Earlier this year, 11-year-old Kashmea Wahi of London, England scored 162 on an IQ test. That’s a perfect score. The results were published by Mensa, a group for highly intelligent people. Wahi is the youngest person ever to get a perfect score on that particular test.           

Does her high score mean she will go on to do great things — like Stephen Hawking or Albert Einstein, two of the world’s greatest scientists? Maybe. But maybe not.

IQ, short for intelligence quotient, is a measure of a person’s reasoning ability. In short, it is supposed to gauge how well someone can use information and logic to answer questions or make predictions. IQ tests begin to assess this by measuring short- and long-term memory. They also measure how well people can solve puzzles and recall information they’ve heard — and how quickly.

Every student can learn, no matter how intelligent. But some students struggle in school because of a weakness in one specific area of intelligence. These students often benefit from special education programs. There, they get extra help in the areas where they’re struggling. IQ tests can help teachers figure out which students would benefit from such extra help.

IQ tests also can help identify students who would do well in fast-paced “gifted education” programs. Many colleges and universities also use exams similar to IQ tests to select students. And the U.S. government — including its military — uses IQ tests when choosing who to hire. These tests help predict which people would make good leaders, or be better at certain specific skills.

It’s tempting to read a lot into someone’s IQ score. Most non-experts think intelligence is the reason successful people do so well. Psychologists who study intelligence find this is only partly true. IQ tests can predict how well people will do in particular situations, such as thinking abstractly in science, engineering or art. Or leading teams of people. But there’s more to the story. Extraordinary achievement depends on many things. And those extra categories include ambition, persistence, opportunity, the ability to think clearly — even luck.

Intelligence matters. But not as much as you might think.

Measuring IQ

IQ tests have been around for more than a century. They were originally created in France to help identify students who needed extra help in school.

The U.S. government later used modified versions of these tests during World War I. Leaders in the armed forces knew that letting unqualified people into battle could be dangerous. So they used the tests to help find qualified candidates. The military continues to do that today. The Armed Forces Qualification Test is one of many different IQ tests in use.

IQ tests have many different purposes, notes Joel Schneider. He is a psychologist at Illinois State University in Normal. Some IQ tests have been designed to assess children at specific ages. Some are for adults. And some have been designed for people with particular disabilities.

But any of these tests will tend to work well only for people who share a similar cultural or social upbringing. “In the United States,” for instance, “a person who has no idea who George Washington was probably has lower-than-average intelligence,” Schneider says. “In Japan, not knowing who Washington was reveals very little about the person’s intelligence.”

Questions about important historical figures fall into the “knowledge” category of IQ tests. Knowledge-based questions test what a person knows about the world. For example, they might ask whether people know why it’s important to wash their hands before they eat.

IQ tests also ask harder questions to measure someone’s knowledge. What is abstract art? What does it mean to default on a loan? What is the difference between weather and climate? These types of questions test whether someone knows about things that are valued in their culture, Schneider explains.

Such knowledge-based questions measure what scientists call crystallized intelligence. But some categories of IQ tests don’t deal with knowledge at all.

Some deal with memory. Others measure what’s called fluid intelligence. That’s a person’s ability to use logic and reason to solve a problem. For example, test-takers might have to figure out what a shape would look like if it were rotated. Fluid intelligence is behind “aha” moments — times when you suddenly connect the dots to see the bigger picture.

Aki Nikolaidis is a neuroscientist, someone who studies structures in the brain. He works at the University of Illinois at Urbana-Champaign. And he wanted to know what parts of the brain are active during those “aha” episodes.

In a study published earlier this year, he and his team studied 71 adults. The researchers tested the volunteers’ fluid intelligence with a standard IQ test that had been designed for adults. At the same time, they mapped out which areas of test takers’ brains were working hardest. They did this using a brain scan called magnetic resonance spectroscopy, or MRS. It uses magnets to hunt for particular molecules of interest in the brain.

As brain cells work, they gobble up glucose, a simple sugar, and spit out the leftovers. MRS scans let researchers spy those leftovers. That told them which specific areas of people’s brains were working hard and breaking down more glucose.

People who scored higher on fluid intelligence tended to have more glucose leftovers in certain parts of their brains. These areas are on the left side of the brain and toward the front. They’re involved with planning movements, with spatial visualization and with reasoning. All are key aspects of problem solving.

“It’s important to understand how intelligence is related to brain structure and function,” says Nikolaidis. That, he adds, could help scientists develop better ways to boost fluid intelligence.

Personal intelligence

IQ tests “measure a set of skills that are important to society,” notes Scott Barry Kaufman. He’s a psychologist at the University of Pennsylvania in Philadelphia. But, he adds, such tests don’t tell the full story about someone’s potential. One reason: IQ tests favor people who can think on the spot. It’s a skill plenty of capable people lack.

It’s also something Kaufman appreciates as well as anyone.

As a boy, he needed extra time to process the words he heard. That slowed his learning. His school put him into special education classes, where he stayed until high school. Eventually, an observant teacher suggested he might do well in regular classes. He made the switch and, with hard work, indeed did well.

Kaufman now studies what he calls “personal intelligence.” It’s how people’s interests and natural abilities combine to help them work toward their goals. IQ is one such ability. Self-control is another. Both help people focus their attention when they need to, such as at school.

Psychologists lump together a person’s focused attention, self-control and problem-solving into a skill they call executive function. The brain cells behind executive function are known as the executive control network. This network turns on when someone is taking an IQ test. Many of the same brain areas are involved in fluid intelligence.

But personal intelligence is more than just executive function. It’s tied to personal goals. If people are working toward some goal, they’ll be interested and focused on what they are doing. They might daydream about a project even while not actively working on it. Although daydreaming may seem like a waste of time to outsiders, it can have major benefits for the person doing it.

When engaged in some task, such as learning, people want to keep at it, Kaufman explains. That means they will push forward, long after they might otherwise have been expected to give up. Engagement also lets a person switch between focused attention and mind wandering.

That daydreaming state can be an important part of intelligence. It is often while the mind is “wandering” that sudden insights or hunches emerge about how something works.

While daydreaming, a so-called default mode network within the brain kicks into action. Its nerve cells are active when the brain is at rest. For a long time, psychologists thought the default mode network was active only when the executive control network rested. In other words, you could not focus on an activity and daydream at the same time.

To see if that was really true, last year Kaufman teamed up with researchers at the University of North Carolina in Greensboro and at the University of Graz in Austria. They scanned the brains of volunteers using functional magnetic resonance imaging, or fMRI. This tool uses a strong magnetic field to record brain activity.

As they scanned the brains of 25 college students, the researchers asked the students to think of as many creative uses as they could for everyday objects. And as students were being as creative as possible, parts of both the default mode network and the executive control network lit up. The two systems weren’t at odds with each other. Rather, Kaufman suspects, the two networks work together to make creativity possible.

“Creativity seems to be a unique state of consciousness,” Kaufman now says. And he thinks it is essential for problem-solving.

Turning potential into achievement

Just being intelligent doesn’t mean someone will be successful. And just because someone is less intelligent doesn’t mean that person will fail. That’s one take-home message from the work of people like Angela Duckworth.

She works at the University of Pennsylvania in Philadelphia. Like many other psychologists, Duckworth wondered what makes one person more successful than another. In 2007, she interviewed people from all walks of life. She asked each what they thought made someone successful. Most people believed intelligence and talent were important. But smart people don’t always live up to their potential.

When Duckworth dug deeper, she found that the people who performed best — those who were promoted over and over, or made a lot of money — shared a trait independent of intelligence. They had what she now calls grit. Grit has two parts: passion and perseverance. Passion points to a lasting interest in something. People who persevere work through challenges to finish a project.

Duckworth developed a set of questions to assess passion and perseverance. She calls it her “grit scale.”

In one study of people 25 and older, she found that as people age, they become more likely to stick with a project. She also found that grit increases with education. People who had finished college scored higher on the grit scale than did people who quit before graduation. People who went to graduate school after college scored even higher.

She then did another study with college students. Duckworth wanted to see how intelligence and grit affected performance in school. So she compared scores on college-entrance exams (like the SAT), which estimate IQ, to school grades and someone’s score on the grit scale. Students with higher grades tended to have more grit. That’s not surprising. Getting good grades takes both smarts and hard work. But Duckworth also found that intelligence and grit don’t always go hand in hand. On average, students with higher exam scores tended to be less gritty than those who scored lower.

Students who perform best in the National Spelling Bee are those with grit. Their passion, drive, and persistence pay off and help them succeed against less “gritty” competitors.

But some people counter that this grit may not be all it’s cracked up to be. Among those people is Marcus Credé. He’s a psychologist at Iowa State University in Ames. He recently pooled the results of 88 studies on grit. Together, those studies involved nearly 67,000 people. And grit did not predict success, Credé found.

However, he thinks grit is very similar to conscientiousness. That's someone's ability to set goals, work toward them, and think things through before acting. It's a basic personality trait, Credé notes, not something that can be changed.

“Study habits and skills, test anxiety and class attendance are far more strongly related to performance than grit,” Credé concludes. “We can teach [students] how to study effectively. We can help them with their test anxiety,” he adds. “I’m not sure we can do that with grit.”

In the end, hard work can be just as important to success as IQ. “It’s okay to struggle and go through setbacks,” Kaufman says. It might not be easy. But over the long haul, toughing it out can lead to great accomplishments.

