Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °

You are not logged in.

#1 Re: This is Cool » Miscellany » Today 18:32:41

2495) Adhesive

Gist

An adhesive is a substance—such as glue, cement, or paste—that binds materials together through surface attachment, resisting separation. Primarily divided into natural (e.g., starch) and synthetic (e.g., epoxy, acrylic) types, they are used for bonding, sealing, and coating in industrial and consumer applications.

Adhesives bond materials together in virtually every industry, from construction, automotive, and aerospace manufacturing to everyday household tasks. They create strong, durable connections that distribute stress evenly, join dissimilar materials (such as metal to plastic), and offer aesthetic advantages over mechanical fasteners like nails or screws by avoiding holes and leaving smoother finishes. They are essential for product assembly, packaging (such as sealing boxes), labeling (stickers, bottle labels), and repairs, enabling lighter, stronger, and more complex designs in everything from smartphones to spacecraft.

Summary

Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation.

The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, and welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion and then by whether they are reactive or non-reactive, a distinction that refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or by whether their raw stock is of natural or synthetic origin.

Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present.

Details

An adhesive is any substance that is capable of holding materials together in a functional manner by surface attachment that resists separation. “Adhesive” as a general term includes cement, mucilage, glue, and paste—terms that are often used interchangeably for any organic material that forms an adhesive bond. Inorganic substances such as portland cement also can be considered adhesives, in the sense that they hold objects such as bricks and beams together through surface attachment, but this article is limited to a discussion of organic adhesives, both natural and synthetic.

Natural adhesives have been known since antiquity. Egyptian carvings dating back 3,300 years depict the gluing of a thin piece of veneer to what appears to be a plank of sycamore. Papyrus, an early nonwoven fabric, contained fibres of reedlike plants bonded together with flour paste. Bitumen, tree pitches, and beeswax were used as sealants (protective coatings) and adhesives in ancient and medieval times. The gold leaf of illuminated manuscripts was bonded to paper by egg white, and wooden objects were bonded with glues from fish, horn, and cheese. The technology of animal and fish glues advanced during the 18th century, and in the 19th century rubber- and nitrocellulose-based cements were introduced. Decisive advances in adhesives technology, however, awaited the 20th century, during which time natural adhesives were improved and many synthetics came out of the laboratory to replace natural adhesives in the marketplace. The rapid growth of the aircraft and aerospace industries during the second half of the 20th century had a profound impact on adhesives technology. The demand for adhesives that had a high degree of structural strength and were resistant to both fatigue and severe environmental conditions led to the development of high-performance materials, which eventually found their way into many industrial and domestic applications.

This article begins with a brief explanation of the principles of adhesion and then proceeds to a review of the major classes of natural and synthetic adhesives.

Adhesion

In the performance of adhesive joints, the physical and chemical properties of the adhesive are the most important factors. Also important in determining whether the adhesive joint will perform adequately are the types of adherend (that is, the components being joined—e.g., metal alloy, plastic, composite material) and the nature of the surface pretreatment or primer. These three factors—adhesive, adherend, and surface—have an impact on the service life of the bonded structure. The mechanical behaviour of the bonded structure in turn is influenced by the details of the joint design and by the way in which the applied loads are transferred from one adherend to the other.

Implicit in the formation of an acceptable adhesive bond is the ability of the adhesive to wet and spread on the adherends being joined. Attainment of such interfacial molecular contact is a necessary first step in the formation of strong and stable adhesive joints. Once wetting is achieved, intrinsic adhesive forces are generated across the interface through a number of mechanisms. The precise nature of these mechanisms has been the object of physical and chemical study since at least the 1960s, with the result that a number of theories of adhesion exist. The main mechanism of adhesion is explained by the adsorption theory, which states that substances stick primarily because of intimate intermolecular contact. In adhesive joints this contact is attained by intermolecular or valence forces exerted by molecules in the surface layers of the adhesive and adherend.
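As a rough quantitative sketch of the wetting step, the Young–Dupré relation estimates the thermodynamic work of adhesion from the adhesive's surface tension and its equilibrium contact angle on the adherend. The numbers in the example are illustrative assumptions, not measured values for any particular adhesive:

```python
import math

def work_of_adhesion(gamma_lv, contact_angle_deg):
    """Young-Dupre equation: W_a = gamma_lv * (1 + cos(theta)).

    gamma_lv: surface tension of the liquid adhesive (mJ/m^2)
    contact_angle_deg: equilibrium contact angle of the adhesive
    on the adherend, in degrees. Lower angles mean better wetting
    and therefore a higher work of adhesion.
    """
    theta = math.radians(contact_angle_deg)
    return gamma_lv * (1.0 + math.cos(theta))

# A hypothetical epoxy-like liquid (~45 mJ/m^2) on two surfaces:
print(work_of_adhesion(45.0, 30.0))   # low angle: good wetting, high W_a
print(work_of_adhesion(45.0, 120.0))  # high angle: poor wetting, low W_a
```

This captures only the thermodynamic contribution; practical joint strength also depends on the mechanical and chemical mechanisms discussed below.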

In addition to adsorption, four other mechanisms of adhesion have been proposed. The first, mechanical interlocking, occurs when adhesive flows into pores in the adherend surface or around projections on the surface. The second, interdiffusion, results when liquid adhesive dissolves and diffuses into adherend materials. In the third mechanism, adsorption and surface reaction, bonding occurs when adhesive molecules adsorb onto a solid surface and chemically react with it. Because of the chemical reaction, this process differs in some degree from simple adsorption, described above, although some researchers consider chemical reaction to be part of a total adsorption process and not a separate adhesion mechanism. Finally, the electronic, or electrostatic, attraction theory suggests that electrostatic forces develop at an interface between materials with differing electronic band structures. In general, more than one of these mechanisms plays a role in achieving the desired level of adhesion for various types of adhesive and adherend.

In the formation of an adhesive bond, a transitional zone arises in the interface between adherend and adhesive. In this zone, called the interphase, the chemical and physical properties of the adhesive may be considerably different from those in the noncontact portions. It is generally believed that the interphase composition controls the durability and strength of an adhesive joint and is primarily responsible for the transference of stress from one adherend to another. The interphase region is frequently the site of environmental attack, leading to joint failure.

The strength of adhesive bonds is usually determined by destructive tests, which measure the stresses set up at the point or line of fracture of the test piece. Various test methods are employed, including peel, tensile lap shear, cleavage, and fatigue tests. These tests are carried out over a wide range of temperatures and under various environmental conditions. An alternate method of characterizing an adhesive joint is by determining the energy expended in cleaving apart a unit area of the interphase. The conclusions derived from such energy calculations are, in principle, completely equivalent to those derived from stress analysis.
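As a minimal illustration of how one of these destructive tests is reduced to a strength value, the nominal lap-shear strength is simply the failure load divided by the bonded overlap area. The specimen dimensions and failure load below are hypothetical:

```python
def lap_shear_strength(failure_load_n, overlap_width_mm, overlap_length_mm):
    """Average lap-shear strength in MPa: nominal stress = load / bonded area.

    This is the single number reported from a lap-shear test; the true
    stress in the joint is non-uniform, peaking at the ends of the overlap.
    """
    area_mm2 = overlap_width_mm * overlap_length_mm
    return failure_load_n / area_mm2  # N/mm^2 is numerically equal to MPa

# Hypothetical specimen: 25 mm x 12.5 mm overlap failing at 6,250 N:
print(lap_shear_strength(6250.0, 25.0, 12.5))  # 20.0 MPa
```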

Adhesive materials

Virtually all synthetic adhesives and certain natural adhesives are composed of polymers, which are giant molecules, or macromolecules, formed by the linking of thousands of simpler molecules known as monomers. The formation of the polymer (a chemical reaction known as polymerization) can occur during a “cure” step, in which polymerization takes place simultaneously with adhesive-bond formation (as is the case with epoxy resins and cyanoacrylates), or the polymer may be formed before the material is applied as an adhesive, as with thermoplastic elastomers such as styrene-isoprene-styrene block copolymers. Polymers impart strength, flexibility, and the ability to spread and interact on an adherend surface—properties that are required for the formation of acceptable adhesion levels.

Natural adhesives

Natural adhesives are primarily of animal or vegetable origin. Though the demand for natural products has declined since the mid-20th century, certain of them continue to be used with wood and paper products, particularly in corrugated board, envelopes, bottle labels, book bindings, cartons, furniture, and laminated film and foils. In addition, owing to various environmental regulations, natural adhesives derived from renewable resources are receiving renewed attention. The most important natural products are described below.

Animal glue

The term animal glue usually is confined to glues prepared from mammalian collagen, the principal protein constituent of skin, bone, and muscle. When treated with acids, alkalies, or hot water, the normally insoluble collagen slowly becomes soluble. If the original protein is pure and the conversion process is mild, the high-molecular-weight product is called gelatin and may be used for food or photographic products. The lower-molecular-weight material produced by more vigorous processing is normally less pure and darker in colour and is called animal glue.

Animal glue traditionally has been used in wood joining, book bindery, sandpaper manufacture, heavy gummed tapes, and similar applications. In spite of its advantage of high initial tack (stickiness), much animal glue has been modified or entirely replaced by synthetic adhesives.

Casein glue

This product is made by dissolving casein, a protein obtained from milk, in an aqueous alkaline solvent. The degree and type of alkali influences product behaviour. In wood bonding, casein glues generally are superior to true animal glues in moisture resistance and aging characteristics. Casein also is used to improve the adhering characteristics of paints and coatings.

Blood albumen glue

Glue of this type is made from serum albumen, a blood component obtainable from either fresh animal blood or dried soluble blood powder to which water has been added. Addition of alkali to albumen-water mixtures improves adhesive properties. A considerable quantity of glue products from blood is used in the plywood industry.

Starch and dextrin

Starch and dextrin are extracted from corn, wheat, potatoes, or rice. They constitute the principal types of vegetable adhesives, which are soluble or dispersible in water and are obtained from plant sources throughout the world. Starch and dextrin glues are used in corrugated board and packaging and as a wallpaper adhesive.

Natural gums

Substances known as natural gums, which are extracted from their natural sources, also are used as adhesives. Agar, a marine-plant colloid (suspension of extremely minute particles), is extracted by hot water and subsequently frozen for purification. Algin is obtained by digesting seaweed in alkali and precipitating either the calcium salt or alginic acid. Gum arabic is harvested from acacia trees that are artificially wounded to cause the gum to exude. Another exudate is natural rubber latex, which is harvested from Hevea trees. Most gums are used chiefly in water-remoistenable products.

Synthetic adhesives

Although natural adhesives are less expensive to produce, most important adhesives are synthetic. Adhesives based on synthetic resins and rubbers excel in versatility and performance. Synthetics can be produced in a constant supply and at constantly uniform properties. In addition, they can be modified in many ways and are often combined to obtain the best characteristics for a particular application.

The polymers used in synthetic adhesives fall into two general categories—thermoplastics and thermosets. Thermoplastics provide strong, durable adhesion at normal temperatures, and they can be softened for application by heating without undergoing degradation. Thermoplastic resins employed in adhesives include nitrocellulose, polyvinyl acetate, vinyl acetate-ethylene copolymer, polyethylene, polypropylene, polyamides, polyesters, acrylics, and cyanoacrylics.

Thermosetting systems, unlike thermoplastics, form permanent, heat-resistant, insoluble bonds that cannot be modified without degradation. Adhesives based on thermosetting polymers are widely used in the aerospace industry. Thermosets include phenol formaldehyde, urea formaldehyde, unsaturated polyesters, epoxies, and polyurethanes. Elastomer-based adhesives can function as either thermoplastic or thermosetting types, depending on whether cross-linking is necessary for the adhesive to perform its function. The characteristics of elastomeric adhesives include quick assembly, flexibility, variety of type, economy, high peel strength, ease of modification, and versatility. The major elastomers employed as adhesives are natural rubber, butyl rubber, butadiene rubber, styrene-butadiene rubber, nitrile rubber, silicone, and neoprene.

An important challenge facing adhesive manufacturers and users is the replacement of adhesive systems based on organic solvents with systems based on water. This trend has been driven by restrictions on the use of volatile organic compounds (VOCs), which include solvents that are released into the atmosphere and contribute to the formation of ground-level ozone (photochemical smog). In response to environmental regulation, adhesives based on aqueous emulsions and dispersions are being developed, and solvent-based adhesives are being phased out.

The polymer types noted above are employed in a number of functional types of adhesives. These functional types are described below.

Contact cements

Contact adhesives or cements are usually based on solvent solutions of neoprene. They are so named because they are usually applied to both surfaces to be bonded. Following evaporation of the solvent, the two surfaces may be joined to form a strong bond with high resistance to shearing forces. Contact cements are used extensively in the assembly of automotive parts, furniture, leather goods, and decorative laminates. They are effective in the bonding of plastics.

Structural adhesives

Structural adhesives are adhesives that generally exhibit good load-carrying capability, long-term durability, and resistance to heat, solvents, and fatigue. Ninety-five percent of all structural adhesives employed in original equipment manufacture fall into six structural-adhesive families: (1) epoxies, which exhibit high strength and good temperature and solvent resistance, (2) polyurethanes, which are flexible, have good peeling characteristics, and are resistant to shock and fatigue, (3) acrylics, a versatile adhesive family that bonds to oily parts, cures quickly, and has good overall properties, (4) anaerobics, or surface-activated acrylics, which are good for bonding threaded metal parts and cylindrical shapes, (5) cyanoacrylates, which bond quickly to plastic and rubber but have limited temperature and moisture resistance, and (6) silicones, which are flexible, weather well out-of-doors, and provide good sealing properties. Each of these families can be modified to provide adhesives that have a range of physical and mechanical properties, cure systems, and application techniques.

Polyesters, polyvinyls, and phenolic resins are also used in industrial applications but have processing or performance limitations. High-temperature adhesives, such as polyimides, have a limited market.

Hot-melt adhesives

Hot-melt adhesives are employed in many nonstructural applications. Based on thermoplastic resins, which melt at elevated temperatures without degrading, these adhesives are applied as hot liquids to the adherend. Commonly used polymers include polyamides, polyesters, ethylene-vinyl acetate, polyurethanes, and a variety of block copolymers and elastomers such as butyl rubber, ethylene-propylene copolymer, and styrene-butadiene rubber.

Hot-melts find wide application in the automotive and home-appliance fields. Their utility, however, is limited by their lack of high-temperature strength, the upper use temperature for most hot-melts being in the range of 40–65 °C (approximately 100–150 °F). In order to improve performance at higher temperatures, so-called structural hot-melts—thermoplastics modified with reactive urethanes, moisture-curable urethanes, or silane-modified polyethylene—have been developed. Such modifications can lead to enhanced peel adhesion, higher heat capability (in the range of 70–95 °C [160–200 °F]), and improved resistance to ultraviolet radiation.

Pressure-sensitive adhesives

Pressure-sensitive adhesives, or PSAs, represent a large industrial and commercial market in the form of adhesive tapes and films directed toward packaging, mounting and fastening, masking, and electrical and surgical applications. PSAs are capable of holding adherends together when the surfaces are mated under briefly applied pressure at room temperature. (The difference between these adhesives and contact cements is that the latter require no pressure to bond.)

Materials used to formulate PSA systems include natural and synthetic rubbers, thermoplastic elastomers, polyacrylates, polyvinylalkyl ethers, and silicones. These polymers, in both solvent-based and hot-melt formulations, are applied as a coating onto a substrate of paper, cellophane, plastic film, fabric, or metal foil. As solvent-based adhesive formulations are phased out in response to environmental regulations, water-based PSAs will find greater use.

Ultraviolet-cured adhesives

Ultraviolet-cured adhesives became available in the early 1960s but developed rapidly with advances in chemical and equipment technology during the 1980s. These types of adhesive normally consist of a monomer (which also can serve as the solvent) and a low-molecular-weight prepolymer combined with a photoinitiator. Photoinitiators are compounds that break down into free radicals upon exposure to ultraviolet radiation. The radicals induce polymerization of the monomer and prepolymer, thus completing the chain extension and cross-linking required for the adhesive to form. Because of the low process temperatures and very rapid polymerization (from 2 to 60 seconds), ultraviolet-cured adhesives are making rapid advances in the electronic, automotive, and medical areas. They consist mainly of acrylated formulations of silicones, urethanes, and methacrylates. Combined ultraviolet–heat-curing formulations also exist.

Additional Information

An adhesive is a substance that holds two or more materials together through surface attachment in a practical way. Adhesives can be made from a variety of substances, such as tree sap, beeswax, cement, and epoxy. Broadly, there are two adhesive categories: natural and synthetic. Most commercially available adhesives are synthetic, as they provide better consistency, bond strength, and adaptability than natural adhesives. Synthetic adhesives are further classified as consumer adhesives and industrial adhesives based on their application.

An adhesive’s chemical composition determines its application methods, usage, and bonding strength. Therefore, adhesive manufacturers need to custom-engineer modern synthetic adhesives based on the needs of different industries and applications.

Adhesive Applications In Various Industries

1. Bonding:
Bonding is a process in which two surfaces are practically joined together with the help of a suitable adhesive, such as epoxy adhesives. Adhesives are used for bonding materials in various industries, such as electronics, medical, food, optical, chemical and oil and gas industries to bond a range of metals, ceramics, glass, plastics, rubbers and composites.

2. Sealing:
Unlike bonding, which joins two surfaces together, sealing closes gaps and cavities to block fluids, dust, and dirt from entering or escaping. Sealants are widely used in the aerospace, oil and gas, chemical, electronic, optical, automotive, and specialty OEM industries.

3. Coating:
Coatings are used predominantly in aerospace and electronic conformal coating, along with other applications in the OEM, oil, and chemical industries. Industrial adhesive coatings can provide superior protection against chemicals, dust, and moisture; reduce friction; improve abrasion resistance; and provide EMI/RFI shielding.

4. Potting:
Potting is an encapsulation method used in the electronics industry: electrical components placed inside a housing are covered with a suitable potting material that withstands high temperatures and protects the circuits from moisture, dirt, dust, and other harsh conditions. Potting and encapsulation are used for electronic and microelectronic components such as sensors, motors, coils, transformers, capacitors, switches, connectors, power supplies, and cable harnesses.

5. Impregnation:
Impregnation is a method used to wet various fibres, such as glass, carbon, and aramid (e.g., Kevlar). Once the fibres are completely saturated with the resin, the resin is allowed to cure fully in place, forming a composite substrate. Such impregnated composites are widely used in the aerospace, wind-turbine, and electronics and electrical industries.

How Do Adhesives Work?

How an adhesive works depends on the type of bonding process used to attach the surfaces to each other. Mechanical adhesion and chemical adhesion are the two types of bonding that can be used to stick one surface to another with adhesives.

Surfaces that need to be attached with adhesives usually have many micropores. When filled with adhesive, these pores act as grips that keep the other surface attached. This is called mechanical adhesion. With mechanical adhesion, the adhesive is applied in liquid form and gradually penetrates the pores during the drying and curing process. Keep in mind that mechanical bonding depends on the surface roughness and surface energy of the substrates to be bonded: the higher the surface energy and roughness of a substrate, the stronger the bond.
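The surface-energy dependence described above can be sketched as a simple rule of thumb (a Zisman-style wetting criterion): a liquid adhesive tends to spread spontaneously when its surface tension does not exceed the solid's critical surface tension. The numeric values below are illustrative, literature-style figures chosen for the example, not measurements:

```python
def wets(surface_critical_tension, adhesive_surface_tension):
    """Zisman-style rule of thumb: a liquid adhesive spontaneously
    spreads on a solid when its surface tension is at or below the
    solid's critical surface tension (both values in mJ/m^2).
    """
    return adhesive_surface_tension <= surface_critical_tension

# Illustrative values (assumptions for the sketch):
print(wets(46.0, 43.0))  # epoxy (~43) on a steel oxide surface (~46): True
print(wets(18.5, 43.0))  # epoxy on low-energy PTFE (~18.5): False
```

This is why low-energy plastics such as PTFE and polyethylene are notoriously hard to glue without a surface pretreatment that raises their surface energy.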

Chemical bonding, on the other hand, is completely different: the surface of one material bonds with another material at the molecular level. It is a complex but very effective process. Chemical bonding is further categorised into two types, adsorption and chemisorption, depending on the type of bond between the adhesive’s molecules and the surface. Although chemical adhesives are readily available, they are not a common form of adhesive used in industry.

Types of Adhesives

1. Hot Melt:
Hot melt is a type of thermoplastic polymer adhesive. Thermoplastic polymer adhesives are solid at room temperature; during application they are liquefied by heating. Hot melt adhesives are used for manufacturing and packaging purposes in a wide array of industries due to their superior bonding strength, versatility, and fast setting time. They are also eco-friendly, safe, and have a long shelf life.

Different hot melt adhesives may have different softening points and hardening times depending on their applications. Some common types of hot melt adhesives are polyurethane, metallocene, EVA, and polyethylene hot melt adhesives.

Reactive hot melt (RHM) adhesives differ from ordinary hot melt adhesives: once applied to a surface and cured, they cannot melt again, because they form additional chemical bonds during the curing process. This gives reactive hot melts stronger adherence than simple hot melts, along with heightened resistance to moisture and other chemicals and higher thermal stability.

2. Thermosetting:
Thermosetting adhesives are materials that cannot be re-melted after they have cured. They are usually made of two parts, namely, the resin and the hardener, although one-part forms can also be found.

There are various types of thermosetting grades such as:

* Phenolics
* Epoxies
* Polyesters
* Polyurethanes
* Silicones

Out of these, epoxy thermosetting resins are the most commonly used in various industries such as electronics and electrical, oil and chemical, automotive, aerospace, optical etc. This is due to their excellent resistance to heat and harsh chemicals and superior mechanical bonding properties.

3. Pressure Sensitive:
Pressure-sensitive adhesives are low-modulus elastomers, which means they can be peeled apart easily; they are best suited for light-duty use. Pressure-sensitive adhesives are found in tapes, bandages, sticky notes, and the like. They are non-structural adhesives and are not suitable for demanding industrial applications, but they work well for lighter and thinner materials for which strong adhesives are unsuitable. Pressure-sensitive adhesives are also cheaper than other adhesive materials and more widely available.

4. Contact Adhesive:
Contact adhesives create strong mechanical bonds and are applied to both surfaces that are to be bonded together. They are also elastomeric, meaning the polymers used in the adhesive have rubber-like properties that help the bond hold its shape. This gives contact adhesives excellent flexibility and mechanical strength. They are commonly used in the automotive, construction, aerospace, and OEM industries for sealing and coating, and they can also be found in rubber cement and countertop laminates. Contact adhesives are ideal for applications that require stability and durability.

Adhesive Application Methods

1. Manual:
As the name suggests, in this method the applicator uses handheld devices and tools to apply adhesive to the surfaces. Manual application methods include spraying, web coating, brushing and rolling, curtain coating, and so on. Manual application is cost-effective and is recommended for smaller jobs.

2. Glue Applicator:
Glue applicators are handheld devices that help you apply adhesives uniformly and faster than by hand. These applicators consist of a gun fitted with a cartridge containing the adhesive; a mixing tip attached to the front of the cartridge eliminates the need for manual mixing. These semi-automatic devices enable higher speed, precision, and efficiency. Glue applicators are ideal for medium- to large-scale applications and are commonly used in the aerospace, electronics, and optical industries to join small, detailed pieces of equipment.

3. Automatic Dispensing:
Automatic dispensing is ideal for fast-paced, high-volume environments where consistency and finish quality are crucial. This method is more costly than the two above; however, automatic dispensing can increase efficiency, reduce waste, and handle tasks at large scale. Meter-mix-dispense systems are used for two-component adhesives, and robotic dispensing is used for single-component adhesives.

Conclusion

Adhesives are used in almost every manufacturing and packaging industry and are an important part of their process. As we have seen, there are different types of adhesives with varying properties suitable for numerous industries.


#2 Dark Discussions at Cafe Infinity » Come Quotes - III » Today 18:01:30

Jai Ganesh
Replies: 0

Come Quotes - III

1. Put two ships in the open sea, without wind or tide, and, at last, they will come together. Throw two planets into space, and they will fall one on the other. Place two enemies in the midst of a crowd, and they will inevitably meet; it is a fatality, a question of time; that is all. - Jules Verne

2. On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. - Charles Babbage

3. I've always believed that if you put in the work, the results will come. - Michael Jordan

4. From the deepest desires often come the deadliest hate. - Socrates

5. The most important thing about Spaceship Earth - an instruction book didn't come with it. - R. Buckminster Fuller

6. Hope smiles from the threshold of the year to come, whispering, 'It will be happier.' - Alfred Lord Tennyson

7. Tears come from the heart and not from the brain. - Leonardo da Vinci

8. What goes up must come down. - Isaac Newton.

#3 Jokes » Doughnet Jokes - II » Today 17:43:27

Jai Ganesh
Replies: 0

Q: What kind of donuts can fly?
A: A plain one.
* * *
Q: What do you call a Jamaican donut?
A: Cinnamon.
* * *
Q: What did one donut say to the other?
A: I donut care.
* * *
Q: How did the police department figure out a perp stole a cop car?
A: The lojacked cop car went 5 hours without stopping at a Dunkin Donuts!
* * *
Donuts will make your clothes shrink.
* * *

#4 Re: Jai Ganesh's Puzzles » Doc, Doc! » Today 17:35:30

Hi,

#2568. What does the medical term Humectant mean?

#8 Science HQ » Acoustics » Today 16:34:38

Jai Ganesh
Replies: 0

Acoustics

Gist

Acoustics is the branch of physics concerned with the study of sound. It can be defined as the science that deals with sound and its production, transmission, and effects.

"Acoustic" relates to sound or hearing, describing things like instruments not needing electronic amplification (acoustic guitar), materials controlling sound (acoustic panels), or the scientific study of sound waves (acoustics) in physics, architecture, and medicine. It signifies natural sound production or properties that affect sound quality, from the science of hearing to the design of concert halls or musical styles.

Summary

Acoustics is a branch of continuum mechanics that deals with the study of mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society, the most obvious being the audio and noise control industries.

Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.

Details

Acoustics is the science concerned with the production, control, transmission, reception, and effects of sound. The term is derived from the Greek akoustos, meaning “heard.”

Beginning with its origins in the study of mechanical vibrations and the radiation of these vibrations through mechanical waves, acoustics has had important applications in almost every area of life. It has been fundamental to many developments in the arts—some of which, especially in the area of musical scales and instruments, took place after long experimentation by artists and were only much later explained as theory by scientists. For example, much of what is now known about architectural acoustics was actually learned by trial and error over centuries of experience and was only recently formalized into a science.

Other applications of acoustic technology are in the study of geologic, atmospheric, and underwater phenomena. Psychoacoustics, the study of the physical effects of sound on biological systems, has been of interest since Pythagoras first heard the sounds of vibrating strings and of hammers hitting anvils in the 6th century BC, but the application of modern ultrasonic technology has only recently provided some of the most exciting developments in medicine. Even today, research continues into many aspects of the fundamental physical processes involved in waves and sound and into possible applications of these processes in modern life.

Sound waves follow physical principles that can be applied to the study of all waves; these principles are discussed thoroughly in the article mechanics of solids. The article here explains in detail the physiological process of hearing—that is, receiving certain wave vibrations and interpreting them as sound.

Early experimentation

The origin of the science of acoustics is generally attributed to the Greek philosopher Pythagoras (6th century BC), whose experiments on the properties of vibrating strings that produce pleasing musical intervals were of such merit that they led to a tuning system that bears his name. Aristotle (4th century BC) correctly suggested that a sound wave propagates in air through motion of the air—a hypothesis based more on philosophy than on experimental physics; however, he also incorrectly suggested that high frequencies propagate faster than low frequencies—an error that persisted for many centuries. Vitruvius, a Roman architectural engineer of the 1st century BC, determined the correct mechanism for the transmission of sound waves, and he contributed substantially to the acoustic design of theatres. In the 6th century AD, the Roman philosopher Boethius documented several ideas relating science to music, including a suggestion that the human perception of pitch is related to the physical property of frequency.

The modern study of waves and acoustics is said to have originated with Galileo Galilei (1564–1642), who elevated to the level of science the study of vibrations and the correlation between pitch and frequency of the sound source. His interest in sound was inspired in part by his father, who was a mathematician, musician, and composer of some repute. Following Galileo’s foundation work, progress in acoustics came relatively rapidly. The French mathematician Marin Mersenne studied the vibration of stretched strings; the results of these studies were summarized in the three Mersenne’s laws. Mersenne’s Harmonicorum Libri (1636) provided the basis for modern musical acoustics. Later in the century Robert Hooke, an English physicist, first produced a sound wave of known frequency, using a rotating cog wheel as a measuring device. Further developed in the 19th century by the French physicist Félix Savart, and now commonly called Savart’s disk, this device is often used today for demonstrations during physics lectures. In the late 17th and early 18th centuries, detailed studies of the relationship between frequency and pitch and of waves in stretched strings were carried out by the French physicist Joseph Sauveur, who provided a legacy of acoustic terms used to this day and first suggested the name acoustics for the study of sound.

One of the most interesting controversies in the history of acoustics involves the famous and often misinterpreted “bell-in-vacuum” experiment, which has become a staple of contemporary physics lecture demonstrations. In this experiment the air is pumped out of a jar in which a ringing bell is located; as air is pumped out, the sound of the bell diminishes until it becomes inaudible. As late as the 17th century many philosophers and scientists believed that sound propagated via invisible particles originating at the source of the sound and moving through space to affect the ear of the observer. The concept of sound as a wave directly challenged this view, but it was not established experimentally until the first bell-in-vacuum experiment was performed by Athanasius Kircher, a German scholar, who described it in his book Musurgia Universalis (1650). Even after pumping the air out of the jar, Kircher could still hear the bell, so he concluded incorrectly that air was not required to transmit sound. In fact, Kircher’s jar was not entirely free of air, probably because of inadequacy in his vacuum pump. By 1660 the Anglo-Irish scientist Robert Boyle had improved vacuum technology to the point where he could observe sound intensity decreasing virtually to zero as air was pumped out. Boyle then came to the correct conclusion that a medium such as air is required for transmission of sound waves. Although this conclusion is correct, as an explanation for the results of the bell-in-vacuum experiment it is misleading. Even with the mechanical pumps of today, the amount of air remaining in a vacuum jar is more than sufficient to transmit a sound wave. The real reason for a decrease in sound level upon pumping air out of the jar is that the bell is unable to transmit the sound vibrations efficiently to the less dense air remaining, and that air is likewise unable to transmit the sound efficiently to the glass jar. 
Thus, the real problem is one of an impedance mismatch between the air and the denser solid materials—and not the lack of a medium such as air, as is generally presented in textbooks. Nevertheless, despite the confusion regarding this experiment, it did aid in establishing sound as a wave rather than as particles.

Measuring the speed of sound

Once it was recognized that sound is in fact a wave, measurement of the speed of sound became a serious goal. In the 17th century, the French scientist and philosopher Pierre Gassendi made the earliest known attempt at measuring the speed of sound in air. Assuming correctly that the speed of light is effectively infinite compared with the speed of sound, Gassendi measured the time difference between spotting the flash of a gun and hearing its report over a long distance on a still day. Although the value he obtained was too high—about 478.4 metres per second (1,569.6 feet per second)—he correctly concluded that the speed of sound is independent of frequency. In the 1650s, Italian physicists Giovanni Alfonso Borelli and Vincenzo Viviani obtained the much better value of 350 metres per second using the same technique. Their compatriot G.L. Bianconi demonstrated in 1740 that the speed of sound in air increases with temperature. The earliest precise experimental value for the speed of sound, obtained at the Academy of Sciences in Paris in 1738, was 332 metres per second—incredibly close to the presently accepted value, considering the rudimentary nature of the measuring tools of the day. A more recent value for the speed of sound, 331.45 metres per second (1,087.4 feet per second), was obtained in 1942; it was amended in 1986 to 331.29 metres per second at 0° C (1,086.9 feet per second at 32° F).
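Gassendi's flash-and-report method reduces to simple arithmetic: treat light as instantaneous and divide the known distance by the delay between seeing the flash and hearing the sound. A minimal sketch in Python, with hypothetical observation figures chosen for illustration:

```python
def speed_of_sound(distance_m, delay_s):
    """Gassendi-style estimate: distance to the source divided by the
    time between seeing the flash and hearing the report (light is
    treated as instantaneous)."""
    return distance_m / delay_s

# Hypothetical observation: a gun fired 1,660 m away, heard 5.0 s after the flash
print(speed_of_sound(1660.0, 5.0))  # 332.0 m/s, close to the 1738 Paris Academy value
```

The same one-line calculation underlies the familiar "count the seconds between lightning and thunder" rule of thumb.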

The speed of sound in water was first measured by Daniel Colladon, a Swiss physicist, in 1826. Strangely enough, his primary interest was not in measuring the speed of sound in water but in calculating water’s compressibility—a theoretical relationship between the speed of sound in a material and the material’s compressibility having been established previously. Colladon came up with a speed of 1,435 metres per second at 8° C; the presently accepted value interpolated at that temperature is about 1,439 metres per second.

Two approaches were employed to determine the velocity of sound in solids. In 1808 Jean-Baptiste Biot, a French physicist, conducted direct measurements of the speed of sound in 1,000 metres of iron pipe by comparing it with the speed of sound in air. A better measurement had earlier been carried out by a German, Ernst Florenz Friedrich Chladni, using analysis of the nodal pattern in standing-wave vibrations in long rods.

Modern advances

Simultaneous with these early studies in acoustics, theoreticians were developing the mathematical theory of waves required for the development of modern physics, including acoustics. In the early 18th century, the English mathematician Brook Taylor developed a mathematical theory of vibrating strings that agreed with previous experimental observations, but he was not able to deal with vibrating systems in general without the proper mathematical base. This was provided by Isaac Newton of England and Gottfried Wilhelm Leibniz of Germany, who, in pursuing other interests, independently developed the theory of calculus, which in turn allowed the derivation of the general wave equation by the French mathematician and scientist Jean Le Rond d’Alembert in the 1740s. The Swiss mathematicians Daniel Bernoulli and Leonhard Euler, as well as the Italian-French mathematician Joseph-Louis Lagrange, further applied the new equations of calculus to waves in strings and in the air. In the 19th century, Siméon-Denis Poisson of France extended these developments to stretched membranes, and the German mathematician Rudolf Friedrich Alfred Clebsch completed Poisson’s earlier studies. A German experimental physicist, August Kundt, developed a number of important techniques for investigating properties of sound waves.

One of the most important developments in the 19th century involved the theory of vibrating plates. In addition to his work on the speed of sound in metals, Chladni had earlier introduced a technique of observing standing-wave patterns on vibrating plates by sprinkling sand onto the plates—a demonstration commonly used today. Perhaps the most significant step in the theoretical explanation of these vibrations was provided in 1816 by the French mathematician Sophie Germain, whose explanation was of such elegance and sophistication that errors in her treatment of the problem were not recognized until some 35 years later, by the German physicist Gustav Robert Kirchhoff.

The analysis of a complex periodic wave into its spectral components was theoretically established early in the 19th century by Jean-Baptiste-Joseph Fourier of France and is now commonly referred to as the Fourier theorem. The German physicist Georg Simon Ohm first suggested that the ear is sensitive to these spectral components; his idea that the ear is sensitive to the amplitudes but not the phases of the harmonics of a complex tone is known as Ohm’s law of hearing (distinguishing it from the more famous Ohm’s law of electrical resistance).

Hermann von Helmholtz made substantial contributions to understanding the mechanisms of hearing and to the psychophysics of sound and music. His book On the Sensations of Tone As a Physiological Basis for the Theory of Music (1863) is one of the classics of acoustics. In addition, he constructed a set of resonators, covering much of the audio spectrum, which were used in the spectral analysis of musical tones. The Prussian physicist Karl Rudolph Koenig, an extremely clever and creative experimenter, designed many of the instruments used for research in hearing and music, including a frequency standard and the manometric flame. The flame-tube device, used to render standing sound waves “visible,” is still one of the most fascinating of physics classroom demonstrations. The English physical scientist John William Strutt, 3rd Baron Rayleigh, carried out an enormous variety of acoustic research; much of it was included in his two-volume treatise, The Theory of Sound, publication of which in 1877–78 is now thought to mark the beginning of modern acoustics. Much of Rayleigh’s work is still directly quoted in contemporary physics textbooks.

The study of ultrasonics was initiated by the American scientist John LeConte, who in the 1850s developed a technique for observing the existence of ultrasonic waves with a gas flame. This technique was later used by the British physicist John Tyndall for the detailed study of the properties of sound waves. The piezoelectric effect, a primary means of producing and sensing ultrasonic waves, was discovered by the French physical chemist Pierre Curie and his brother Jacques in 1880. Applications of ultrasonics, however, were not possible until the development in the early 20th century of the electronic oscillator and amplifier, which were used to drive the piezoelectric element.

Among 20th-century innovators were the American physicist Wallace Sabine, considered to be the originator of modern architectural acoustics, and the Hungarian-born American physicist Georg von Békésy, who carried out experimentation on the ear and hearing and validated the commonly accepted place theory of hearing first suggested by Helmholtz. Békésy’s book Experiments in Hearing, published in 1960, is the magnum opus of the modern theory of the ear.

Amplifying, recording, and reproducing

The earliest known attempt to amplify a sound wave was made by Athanasius Kircher, of “bell-in-vacuum” fame; Kircher designed a parabolic horn that could be used either as a hearing aid or as a voice amplifier. The amplification of body sounds became an important goal, and the first stethoscope was invented by a French physician, René Laënnec, in the early 19th century.

Attempts to record and reproduce sound waves originated with the invention in 1857 of a mechanical sound-recording device called the phonautograph by Édouard-Léon Scott de Martinville. The first device that could actually record and play back sounds was developed by the American inventor Thomas Alva Edison in 1877. Edison’s phonograph employed grooves of varying depth in a cylindrical sheet of foil, but a spiral groove on a flat rotating disk was introduced a decade later by the German-born American inventor Emil Berliner in an invention he called the gramophone. Much significant progress in recording and reproduction techniques was made during the first half of the 20th century, with the development of high-quality electromechanical transducers and linear electronic circuits. The most important improvement on the standard phonograph record in the second half of the century was the compact disc, which employed digital techniques developed in mid-century that substantially reduced noise and increased the fidelity and durability of the recording.

Architectural acoustics:

Reverberation time

Although architectural acoustics has been an integral part of the design of structures for at least 2,000 years, the subject was only placed on a firm scientific basis at the beginning of the 20th century by Wallace Sabine. Sabine pointed out that the most important quantity in determining the acoustic suitability of a room for a particular use is its reverberation time, and he provided a scientific basis by which the reverberation time can be determined or predicted.

When a source creates a sound wave in a room or auditorium, observers hear not only the sound wave propagating directly from the source but also the myriad reflections from the walls, floor, and ceiling. These latter form the reflected wave, or reverberant sound. After the source ceases, the reverberant sound can be heard for some time as it grows softer. The time required, after the sound source ceases, for the absolute intensity to drop by a factor of one million (10^6)—or, equivalently, the time for the intensity level to drop by 60 decibels—is defined as the reverberation time (RT, sometimes referred to as RT60). Sabine recognized that the reverberation time of an auditorium is related to the volume of the auditorium and to the ability of the walls, ceiling, floor, and contents of the room to absorb sound. Using these assumptions, he set forth the empirical relationship through which the reverberation time could be determined: RT = 0.05V/A, where RT is the reverberation time in seconds, V is the volume of the room in cubic feet, and A is the total sound absorption of the room, measured in the unit sabin. The sabin is the absorption equivalent to one square foot of perfectly absorbing surface—for example, a one-square-foot hole in a wall or five square feet of surface that absorbs 20 percent of the sound striking it.
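Sabine's relation can be applied directly. A minimal sketch in Python, using the imperial units the formula as given expects; the hall figures in the second example are hypothetical:

```python
def total_absorption(surfaces):
    """Total absorption A in sabins: sum over surfaces of area
    (square feet) times absorption coefficient."""
    return sum(area * coeff for area, coeff in surfaces)

def reverberation_time(volume_ft3, absorption_sabins):
    """Sabine's empirical relation as given in the text:
    RT = 0.05 * V / A, with V in cubic feet and A in sabins;
    result is the reverberation time in seconds."""
    return 0.05 * volume_ft3 / absorption_sabins

# The text's example: five square feet absorbing 20 percent = 1 sabin
print(total_absorption([(5.0, 0.20)]))  # 1.0

# Hypothetical hall: 500,000 cu ft with 12,500 sabins of total absorption
print(reverberation_time(500_000, 12_500))  # ~2.0 s, a long, "live" value
```

Given target reverberation times for a planned use, the same relation can be inverted to find how many sabins of absorbing material must be added or removed.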

Both the design and the analysis of room acoustics begin with this equation. Using the equation and the absorption coefficients of the materials from which the walls are to be constructed, an approximation can be obtained for the way in which the room will function acoustically. Absorbers and reflectors, or some combination of the two, can then be used to modify the reverberation time and its frequency dependence, thereby achieving the most desirable characteristics for specific uses.

While there is no exact value of reverberation time that can be called ideal, there is a range of values deemed to be appropriate for each application. These vary with the size of the room, but the averages can be calculated and indicated by lines on a graph. The need for clarity in understanding speech dictates that rooms used for talking must have a reasonably short reverberation time. On the other hand, the full sound desirable in the performance of music of the Romantic era, such as Wagner operas or Mahler symphonies, requires a long reverberation time. Obtaining a clarity suitable for the light, rapid passages of Bach or Mozart requires an intermediate value of reverberation time. For playing back recordings on an audio system, the reverberation time should be short, so as not to create confusion with the reverberation time of the music in the hall where it was recorded.

Acoustic criteria

Many of the acoustic characteristics of rooms and auditoriums can be directly attributed to specific physically measurable properties. Because the music critic or performing artist uses a different vocabulary to describe these characteristics than does the physicist, it is helpful to survey some of the more important features of acoustics and correlate the two sets of descriptions.

“Liveness” refers directly to reverberation time. A live room has a long reverberation time and a dead room a short reverberation time. “Intimacy” refers to the feeling that listeners have of being physically close to the performing group. A room is generally judged intimate when the first reverberant sound reaches the listener within about 20 milliseconds of the direct sound. This condition is met easily in a small room, but it can also be achieved in large halls by the use of orchestral shells that partially enclose the performers. Another example is a canopy placed above a speaker in a large room such as a cathedral: this leads to both a strong and a quick first reverberation and thus to a sense of intimacy with the person speaking.
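The 20-millisecond intimacy criterion is equivalent to a limit on how much longer the reflected path may be than the direct path. A quick check in Python; the 343 m/s figure is the usual room-temperature speed of sound, an assumption not stated in the text above:

```python
SPEED_OF_SOUND = 343.0  # m/s, standard value at about 20 degrees C (assumed)

def max_extra_path(delay_s=0.020):
    """Longest extra distance a first reflection may travel, beyond the
    direct path, and still arrive within delay_s of the direct sound."""
    return SPEED_OF_SOUND * delay_s

print(max_extra_path())  # about 6.9 m: why small rooms feel intimate
```

A reflecting shell or canopy near the performers keeps the first reflected path within this budget even in a large hall.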

The amplitude of the reverberant sound relative to the direct sound is referred to as fullness. Clarity, the opposite of fullness, is achieved by reducing the amplitude of the reverberant sound. Fullness generally implies a long reverberation time, while clarity implies a shorter reverberation time. A fuller sound is generally required of Romantic music or performances by larger groups, while more clarity would be desirable in the performance of rapid passages from Bach or Mozart or in speech.

“Warmth” and “brilliance” refer to the reverberation time at low frequencies relative to that at higher frequencies. Above about 500 hertz, the reverberation time should be the same for all frequencies. At low frequencies, however, a longer reverberation time creates a warm sound, while a reverberation time that increases little or not at all at low frequencies makes the room sound more brilliant.

“Texture” refers to the time interval between the arrival of the direct sound and the arrival of the first few reverberations. To obtain good texture, it is necessary that the first five reflections arrive at the observer within about 60 milliseconds of the direct sound. An important corollary to this requirement is that the intensity of the reverberations should decrease monotonically; there should be no unusually large late reflections.
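The two texture criteria above can be checked mechanically: count the reflections arriving within 60 milliseconds of the direct sound, and verify that reflection intensities decrease monotonically. A minimal sketch with hypothetical impulse-response data:

```python
def good_texture(direct_time, reflections, window=0.060, min_early=5):
    """Check the texture criteria from the text: at least min_early
    reflections arrive within `window` seconds of the direct sound,
    and reflection intensities decrease monotonically (no unusually
    large late reflections). `reflections` is a list of
    (arrival_time_s, intensity) pairs sorted by arrival time."""
    early = [t for t, _ in reflections if t - direct_time <= window]
    levels = [i for _, i in reflections]
    monotonic = all(a >= b for a, b in zip(levels, levels[1:]))
    return len(early) >= min_early and monotonic

# Hypothetical reflection data (times in seconds, relative intensities)
refl = [(0.015, 0.9), (0.025, 0.7), (0.034, 0.6), (0.046, 0.45), (0.058, 0.4)]
print(good_texture(0.0, refl))  # True: five early reflections, decreasing
```

A single strong late echo, such as one focused by a curved rear wall, fails the monotonicity test even if the early reflections are fine.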

“Blend” refers to the mixing of sounds from all the performers and their uniform distribution to the listeners. To achieve proper blend it is often necessary to place a collection of reflectors on the stage that distribute the sound randomly to all points in the audience.

Although the above features of auditorium acoustics apply to listeners, the idea of ensemble applies primarily to performers. In order to perform coherently, members of the ensemble must be able to hear one another. Reverberant sound cannot be heard by the members of an orchestra, for example, if the stage is too wide, has too high a ceiling, or has too much sound absorption on its sides.

Acoustic problems

Certain acoustic problems often result from improper design or from construction limitations. If large echoes are to be avoided, focusing of the sound wave must be avoided. Smooth, curved reflecting surfaces such as domes and curved walls act as focusing elements, creating large echoes and leading to bad texture. Improper blend results if sound from one part of the ensemble is focused to one section of the audience. In addition, parallel walls in an auditorium reflect sound back and forth, creating a rapid, repetitive pulsing of sound known as flutter echo and even leading to destructive interference of the sound wave. Resonances at certain frequencies should also be avoided by use of oblique walls.

Acoustic shadows, regions in which certain frequencies of sound are attenuated, can be caused by diffraction effects as the sound wave passes around large pillars and corners or underneath a low balcony. Large reflectors called clouds, suspended over the performers, can be of such a size as to reflect certain frequency ranges while allowing others to pass, thus affecting the mixture of the sound.

External noise can be a serious problem for halls in urban areas or near airports or highways. One technique often used for avoiding external noise is to construct the auditorium as a smaller room within a larger room. Noise from air blowers or other mechanical vibrations can be reduced using techniques involving impedance and by isolating air handlers.

Good acoustic design must take account of all these possible problems while emphasizing the desired acoustic features. One of the problems in a large auditorium involves simply delivering an adequate amount of sound to the rear of the hall. The intensity of a spherical sound wave decreases at a rate of six decibels for each doubling of distance from the source. If the auditorium is flat, a hemispherical wave results. Absorption by the floor or audience near the bottom of the hemisphere attenuates the wave still further, so that the intensity level falls off at roughly twice the theoretical rate, about 12 decibels for each doubling of distance. Because of this absorption, the floors of an auditorium are generally sloped upward toward the rear.
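The fall-off figures quoted above follow from the decibel definition, drop = 10 log10(I1/I2), with intensity falling as 1/r^2 for ideal spherical spreading. A sketch; using a 1/r^4 law to stand in for the extra floor and audience absorption is an idealization, not a claim from the text:

```python
import math

def level_drop_db(r1, r2, exponent=2):
    """Decibel drop in intensity level going from distance r1 to r2,
    assuming intensity falls as 1/r**exponent (2 for a free spherical
    wave; larger exponents model additional absorption)."""
    return 10 * math.log10((r2 / r1) ** exponent)

print(round(level_drop_db(10, 20), 2))     # 6.02 dB per doubling, ideal spreading
print(round(level_drop_db(10, 20, 4), 2))  # 12.04 dB, the doubled rate over an audience
```

Sloping the floor restores the geometry toward the ideal case by lifting listeners out of the heavily absorbed grazing path.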

Additional Information

Acoustics is defined as the science that deals with the production, control, transmission, reception, and effects of sound (as defined by Merriam-Webster). Many people mistakenly think that acoustics is strictly musical or architectural in nature. While acoustics does include the study of musical instruments and architectural spaces, it also covers a vast range of topics, including noise control, SONAR for submarine navigation, ultrasound for medical imaging, thermoacoustic refrigeration, seismology, bioacoustics, and electroacoustic communication. Below is the so-called "Lindsay's Wheel of Acoustics", created by R. Bruce Lindsay in J. Acoust. Soc. Am. 36, 2242 (1964). This wheel describes the scope of acoustics starting from the four broad fields of Earth Sciences, Engineering, Life Sciences, and the Arts. The outer circle lists the various disciplines one may study to prepare for a career in acoustics. The inner circle lists the fields within acoustics that the various disciplines naturally lead to.

Curiously enough, Lindsay (himself a physicist) didn't list physics specifically in the outer circle. This is likely because a background in physics provides the foundational knowledge necessary to study nearly any field of acoustics research. In fact, the Acoustical Society of America (ASA), founded in 1929, was one of the five original societies that helped form the American Institute of Physics in 1931. The ASA is composed of 13 main areas of study called Technical Committees (TCs):

* Acoustical Oceanography (AO)
* Animal Bioacoustics (AB)
* Architectural Acoustics (AA)
* Biomedical Ultrasound/Bioresponse to Vibration (BB)
* Engineering Acoustics (EA)
* Musical Acoustics (MU)
* Noise (NS)
* Physical Acoustics (PA)
* Psychological and Physiological Acoustics (PP)
* Signal Processing in Acoustics (SP)
* Speech Communication (SC)
* Structural Acoustics and Vibration (SA)
* Underwater Acoustics (UW).


#9 Re: Dark Discussions at Cafe Infinity » crème de la crème » Yesterday 19:29:42

2432) Joshua Lederberg

Gist:

Work

It was long thought that bacteria multiply by dividing, so that all bacteria have the same genetic make-up. Joshua Lederberg and Edward Tatum demonstrated in 1946 that bacteria's genes can also change in a way similar to that of sexual reproduction seen in more complex organisms. Bacteria can go through a phase in which two bacteria exchange genetic material with one another by passing pieces of DNA across a bridge-like connection. Lederberg also proved the phenomenon known as transduction, in which DNA is transferred between bacteria via bacteriophages.

Summary

Joshua Lederberg (born May 23, 1925, Montclair, N.J., U.S.—died Feb. 2, 2008, New York, N.Y.) was an American geneticist and a pioneer in the field of bacterial genetics. He shared the 1958 Nobel Prize for Physiology or Medicine (with George W. Beadle and Edward L. Tatum) for discovering the mechanisms of genetic recombination in bacteria.

Lederberg studied under Tatum at Yale (Ph.D., 1948) and taught at the University of Wisconsin (1947–59), where he established a department of medical genetics. In 1959 he joined the faculty of the Stanford Medical School, serving as director of the Kennedy Laboratories of Molecular Medicine there from 1962 to 1978, when he moved to New York City to become president of Rockefeller University. He held that post until 1990.

With Tatum he published “Gene Recombination in Escherichia coli” (1946), in which he reported that the mixing of two different strains of a bacterium resulted in genetic recombination between them and thus in a new, crossbred strain of the bacterium. Scientists had previously thought that bacteria reproduced only asexually—i.e., by cells splitting in two; Lederberg and Tatum showed that they could also reproduce sexually, and that bacterial genetic systems are similar to those of multicellular organisms.

While biologists who had not previously believed that “sex” existed in bacteria such as E. coli were still confirming Lederberg’s discovery, he and his student Norton D. Zinder reported another and equally surprising finding. In the paper “Genetic Exchange in Salmonella” (1952), they revealed that certain bacteriophages (bacteria-infecting viruses) were capable of carrying a bacterial gene from one bacterium to another, a phenomenon they termed transduction.

Lederberg’s discoveries greatly increased the utility of bacteria as a tool in genetics research, and they soon became as important to the field as the fruit fly Drosophila and the bread mold Neurospora. Moreover, his discovery of transduction provided the first hint that genes could be inserted into cells. The realization that the genetic material of living things could be directly manipulated eventually bore fruit in the field of genetic engineering, or recombinant DNA technology.

At the dawn of space exploration, Lederberg coined the term exobiology to describe the scientific study of life outside Earth’s atmosphere. He later served as a consultant to NASA’s Viking mission to Mars.

Details

Joshua Lederberg (May 23, 1925 – February 2, 2008) was an American molecular biologist known for his work in microbial genetics, artificial intelligence, and the United States space program. He was 33 years old when he won the 1958 Nobel Prize in Physiology or Medicine for discovering that bacteria can mate and exchange genes (bacterial conjugation). He shared the prize with Edward Tatum and George Beadle, who won for their work with genetics.

In addition to his contributions to biology, Lederberg did extensive research in artificial intelligence. This included work in the NASA experimental programs seeking life on Mars and the chemistry expert system Dendral.

Early life and education

Lederberg was born in Montclair, New Jersey, to a Jewish family, son of Esther Goldenbaum Schulman Lederberg and Rabbi Zvi Hirsch Lederberg, in 1925, and moved to Washington Heights, Manhattan as an infant. He had two younger brothers. Lederberg graduated from Stuyvesant High School in New York City at the age of 15 in 1941. After graduation, he was allowed lab space as part of the American Institute Science Laboratory, a forerunner of the Westinghouse Science Talent Search. He enrolled in Columbia University in 1941, majoring in zoology. Under the mentorship of Francis J. Ryan, he conducted biochemical and genetic studies on the bread mold Neurospora crassa. Intending to receive his MD and fulfill his military service obligations, Lederberg worked as a hospital corpsman during 1943 in the clinical pathology laboratory at St. Albans Naval Hospital, where he examined sailors' blood and stool samples for malaria. He went on to receive his undergraduate degree in 1944.

Bacterial genetics

Joshua Lederberg began medical studies at Columbia's College of Physicians and Surgeons while continuing to perform experiments. Inspired by Oswald Avery's discovery of the importance of DNA, Lederberg began to investigate his hypothesis that, contrary to prevailing opinion, bacteria did not simply pass down exact copies of genetic information, making all cells in a lineage essentially clones. After making little progress at Columbia, Lederberg wrote to Edward Tatum, Ryan's post-doctoral mentor, proposing a collaboration. In 1946 and 1947, Lederberg took a leave of absence to study under the mentorship of Tatum at Yale University. Lederberg and Tatum showed that the bacterium Escherichia coli entered a sexual phase during which it could share genetic information through bacterial conjugation. With this discovery and some mapping of the E. coli chromosome, Lederberg was able to receive his Ph.D. from Yale University in 1947. Joshua married Esther Miriam Zimmer (herself a student of Edward Tatum) on December 13, 1946.

Instead of returning to Columbia to finish his medical degree, Lederberg chose to accept an offer of an assistant professorship in genetics at the University of Wisconsin–Madison. His wife Esther Lederberg went with him to Wisconsin. She received her doctorate there in 1950.

Joshua Lederberg and Norton Zinder showed in 1951 that genetic material could be transferred from one strain of the bacterium Salmonella typhimurium to another using viral material as an intermediary step. This process is called transduction. In 1956, M. Laurance Morse, Esther Lederberg and Joshua Lederberg also discovered specialized transduction. The research in specialized transduction focused upon lambda phage infection of E. coli. Transduction and specialized transduction explained how bacteria of different species could gain resistance to the same antibiotic very quickly.

During her time in Joshua Lederberg's laboratory, Esther Lederberg also discovered fertility factor F, later publishing with Joshua Lederberg and Luigi Luca Cavalli-Sforza. In 1956, the Society of Illinois Bacteriologists simultaneously awarded Joshua Lederberg and Esther Lederberg the Pasteur Medal, for "their outstanding contributions to the fields of microbiology and genetics".

In 1957, Joshua Lederberg founded the Department of Medical Genetics at the University of Wisconsin–Madison. He held visiting professorships in bacteriology at the University of California, Berkeley (summer 1950) and the University of Melbourne (1957). Also in 1957, he was elected to the National Academy of Sciences.

Sir Gustav Nossal views Lederberg as his mentor, describing him as "lightning fast" and "loving a robust debate."

Post Nobel Prize research

In 1958, Joshua Lederberg received the Nobel Prize and moved to Stanford University, where he was the founder and chairman of the Department of Genetics. He collaborated with Frank Macfarlane Burnet to study viral antibodies.

With the launching of Sputnik in 1957, Lederberg became concerned about the biological impact of space exploration. In a letter to the National Academy of Sciences, he outlined his concern that extraterrestrial microbes might gain entry to Earth aboard spacecraft, causing catastrophic disease. He also argued that, conversely, microbial contamination of man-made satellites and probes might obscure the search for extraterrestrial life. He advised quarantine for returning astronauts and equipment, and sterilization of equipment prior to launch. Teaming up with Carl Sagan, his public advocacy for what he termed exobiology helped expand the role of biology in NASA.

Lederberg was elected to the American Academy of Arts and Sciences in 1959 and the American Philosophical Society in 1960.

In the 1960s, he collaborated with Edward Feigenbaum in Stanford's computer science department to develop DENDRAL.

In 1978, he became president of Rockefeller University, serving until 1990, when he stepped down and became professor emeritus of molecular genetics and informatics there, reflecting his extensive research and publications in those disciplines.

Throughout his career, Lederberg was active as a scientific advisor to the U.S. government. Starting in 1950, he was a member of various panels of the Presidential Science Advisory Committee. In 1979, he became a member of the U.S. Defense Science Board and chairman of President Jimmy Carter's President's Cancer Panel. In 1989, he received the National Medal of Science for his contributions to science. In 1994, he headed the Department of Defense's Task Force on Persian Gulf War Health Effects, which investigated Gulf War Syndrome.

During a 1986 fact-finding mission into the 1979 Soviet anthrax epidemic that killed 66 people in the city of Sverdlovsk (now Yekaterinburg, Russia), Lederberg sided with the Soviet account that the outbreak arose from animal-to-human transmission, stating, "Wild rumors do spread around every epidemic. The current Soviet account is very likely to be true." After the fall of the Soviet Union and subsequent U.S. investigations in the early 1990s, a team of scientists confirmed that the outbreak was caused by the release of an aerosol of anthrax pathogen from a nearby military facility; the lab leak is one of the deadliest ever documented.


#10 Re: This is Cool » Miscellany » Yesterday 18:55:43

2494) Metal Detector

Gist

Metal detectors use electromagnetic fields to find hidden metal, with major uses in security (airports, events for weapons), hobby/recreation (treasure hunting, coin collecting), industry (quality control in food/pharma, construction for pipes/wires), and archaeology/recovery (locating artifacts, landmines). They range from handheld wands to large walk-through arches, alerting users with sound or visual cues when metal is detected.

Metal detectors detect any conductive metal by sensing disruptions in their electromagnetic field, finding everything from weapons and coins to jewelry and tools, distinguishing between easier-to-detect ferrous (iron-based) and harder-to-detect non-ferrous (like aluminum, copper) metals, with sensitivity adjusted for different uses like security or treasure hunting.

Summary

A metal detector is an instrument that detects the nearby presence of metal. Metal detectors are useful for finding metal objects on the surface, underground, and under water. A metal detector typically consists of a control box, an adjustable shaft, and a variable-shaped pickup coil. When the coil nears metal, the control box signals its presence with a tone, numerical reading, light, or needle movement. Signal intensity typically increases with proximity and with the metal's size and composition. A common type is the stationary "walk-through" metal detector used at access points in prisons, courthouses, airports and psychiatric hospitals to detect concealed metal weapons on a person's body.

The simplest form of a metal detector consists of an oscillator producing an alternating current that passes through a coil producing an alternating magnetic field. If a piece of electrically conductive metal is close to the coil, eddy currents will be induced (inductive sensor) in the metal, and this produces a magnetic field of its own. If another coil is used to measure the magnetic field (acting as a magnetometer), the change in the magnetic field due to the metallic object can be detected.

The first industrial metal detectors came out in the 1960s, and were used for finding minerals, among other things. Metal detectors help find land mines. They also detect weapons like knives and guns, which is important for airport security. People most commonly use them to search for buried objects, like in archaeology and treasure hunting. Metal detectors are also used to detect foreign bodies in food, and in the construction industry to detect steel reinforcing bars in concrete and pipes and wires buried in walls and floors.

Details

Mention the words metal detector and you'll get completely different reactions from different people. For instance, some people think of combing a beach in search of coins or buried treasure. Other people think of airport security, or the handheld scanners at a concert or sporting event.

The fact is that all of these scenarios are valid. Metal-detector technology is a huge part of our lives, with a range of uses that spans from leisure to work to safety. The metal detectors in airports, office buildings, schools, government agencies and prisons help ensure that no one is bringing a weapon onto the premises. Consumer-oriented metal detectors provide millions of people around the world with an opportunity to discover hidden treasures (along with lots of junk).

In this article, you'll learn about metal detectors and the various technologies they use. Our focus will be on consumer metal detectors, but most of the information also applies to mounted detection systems, like the ones used in airports, as well as handheld security scanners.

Anatomy of a Metal Detector

A typical metal detector is lightweight and consists of just a few parts:

Stabilizer (optional) - used to keep the unit steady as you sweep it back and forth
Control box - contains the circuitry, controls, speaker, batteries and the microprocessor
Shaft - connects the control box and the coil; often adjustable so you can set it at a comfortable level for your height
Search coil - the part that actually senses the metal; also known as the "search head," "loop" or "antenna"

Most systems also have a jack for connecting headphones, and some have the control box below the shaft and a small display unit above.

Operating a metal detector is simple. Once you turn the unit on, you move slowly over the area you wish to search. In most cases, you sweep the coil (search head) back and forth over the ground in front of you. When you pass it over a target object, an audible signal occurs. More advanced metal detectors provide displays that indicate the type of metal detected and how deep in the ground the target object is located.

Metal detectors use one of three technologies:

* Very low frequency (VLF)
* Pulse induction (PI)
* Beat-frequency oscillation (BFO)

In the following sections, we will look at each of these technologies in detail to see how they work.

VLF Technology

Very low frequency (VLF), also known as induction balance, is probably the most popular detector technology in use today. In a VLF metal detector, there are two distinct coils:

* Transmitter coil - This is the outer coil loop. Within it is a coil of wire. Electricity is sent along this wire, first in one direction and then in the other, thousands of times each second. The number of times that the current's direction switches each second establishes the frequency of the unit.
* Receiver coil - This inner coil loop contains another coil of wire. This wire acts as an antenna to pick up and amplify frequencies coming from target objects in the ground.

The current moving through the transmitter coil creates an electromagnetic field, much as in an electric motor. The axis of the magnetic field is perpendicular to the plane of the coil. Each time the current changes direction, the polarity of the magnetic field reverses. This means that if the coil of wire is parallel to the ground, the magnetic field is constantly pushing down into the ground and then pulling back out of it.

As the magnetic field pulses back and forth into the ground, it interacts with any conductive objects it encounters, causing them to generate weak magnetic fields of their own. The polarity of the object's magnetic field is directly opposite the transmitter coil's magnetic field. If the transmitter coil's field is pulsing downward, the object's field is pulsing upward.

The receiver coil is completely shielded from the magnetic field generated by the transmitter coil. However, it is not shielded from magnetic fields coming from objects in the ground. Therefore, when the receiver coil passes over an object giving off a magnetic field, a small electric current travels through the coil. This current oscillates at the same frequency as the object's magnetic field. The coil amplifies the frequency and sends it to the control box of the metal detector, where sensors analyze the signal.

The metal detector can determine approximately how deep the object is buried based on the strength of the magnetic field it generates. The closer to the surface an object is, the stronger the magnetic field picked up by the receiver coil and the stronger the electric current generated. The farther below the surface, the weaker the field. Beyond a certain depth, the object's field is so weak at the surface that it is undetectable by the receiver coil.

In the next section, we'll see how a VLF metal detector distinguishes between different types of metals.

VLF Phase Shifting

How does a VLF metal detector distinguish between different metals? It relies on a phenomenon known as phase shifting. Phase shift is the difference in timing between the transmitter coil's frequency and the frequency of the target object. This discrepancy can result from a couple of things:

* Inductance - An object that conducts electricity easily (is inductive) is slow to react to changes in the current. You can think of inductance as a deep river: Change the amount of water flowing into the river and it takes some time before you see a difference.
* Resistance - An object that does not conduct electricity easily (is resistive) is quick to react to changes in the current. Using our water analogy, resistance would be a small, shallow stream: Change the amount of water flowing into the stream and you notice a drop in the water level very quickly.

Basically, this means that an object with high inductance is going to have a larger phase shift, because it takes longer to alter its magnetic field. An object with high resistance is going to have a smaller phase shift.

Phase shift provides VLF-based metal detectors with a capability called discrimination. Since most metals vary in both inductance and resistance, a VLF metal detector examines the amount of phase shift, using a pair of electronic circuits called phase demodulators, and compares it with the average for a particular type of metal. The detector then notifies you with an audible tone or visual indicator as to what range of metals the object is likely to be in.
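The inductance/resistance trade-off above can be sketched numerically. In a simple model (an assumption for illustration, not part of the original article), a target behaves like a series resistor-inductor circuit, and its phase angle is the arctangent of ωL/R; the component values below are made-up examples, not measured targets:

```python
import math

def phase_shift_deg(frequency_hz, inductance_h, resistance_ohm):
    """Phase angle (degrees) of a target modeled as a series R-L circuit.

    Higher inductance -> larger phase shift; higher resistance -> smaller.
    """
    omega = 2 * math.pi * frequency_hz  # angular frequency in rad/s
    return math.degrees(math.atan2(omega * inductance_h, resistance_ohm))

FREQ_HZ = 10_000  # an assumed VLF operating frequency

# Hypothetical targets: a highly conductive coin vs. a resistive nail.
coin_shift = phase_shift_deg(FREQ_HZ, 60e-6, 2.0)   # large phase shift
nail_shift = phase_shift_deg(FREQ_HZ, 5e-6, 30.0)   # small phase shift
```

A discriminator circuit would, in effect, compare such phase angles against stored ranges for coin-like versus junk-like targets.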

Many metal detectors even allow you to filter out (discriminate) objects above a certain phase-shift level. Usually, you can set the level of phase shift that is filtered, generally by adjusting a knob that increases or decreases the threshold. Another discrimination feature of VLF detectors is called notching. Essentially, a notch is a discrimination filter for a particular segment of phase shift. The detector will not only alert you to objects above this segment, as normal discrimination would, but also to objects below it.

Advanced detectors even allow you to program multiple notches. For example, you could set the detector to disregard objects that have a phase shift comparable to a soda-can tab or a small nail. The disadvantage of discrimination and notching is that many valuable items might be filtered out because their phase shift is similar to that of "junk." But, if you know that you are looking for a specific type of object, these features can be extremely useful.

PI Technology

A less common form of metal detector is based on pulse induction (PI). Unlike VLF, PI systems may use a single coil as both transmitter and receiver, or they may have two or even three coils working together. This technology sends powerful, short bursts (pulses) of current through a coil of wire. Each pulse generates a brief magnetic field. When the pulse ends, the magnetic field reverses polarity and collapses very suddenly, resulting in a sharp electrical spike. This spike lasts a few microseconds (millionths of a second) and causes another current to run through the coil. This current is called the reflected pulse and is extremely short, lasting only about 30 microseconds. Another pulse is then sent and the process repeats. A typical PI-based metal detector sends about 100 pulses per second, but the number can vary greatly based on the manufacturer and model, ranging from a couple of dozen pulses per second to over a thousand.

If the metal detector is over a metal object, the pulse creates an opposite magnetic field in the object. When the pulse's magnetic field collapses, causing the reflected pulse, the magnetic field of the object makes it take longer for the reflected pulse to completely disappear. This process works something like echoes: If you yell in a room with only a few hard surfaces, you probably hear only a very brief echo, or you may not hear one at all; but if you yell in a room with a lot of hard surfaces, the echo lasts longer. In a PI metal detector, the magnetic fields from target objects add their "echo" to the reflected pulse, making it last a fraction longer than it would without them.

A sampling circuit in the metal detector is set to monitor the length of the reflected pulse. By comparing it to the expected length, the circuit can determine if another magnetic field has caused the reflected pulse to take longer to decay. If the decay of the reflected pulse takes more than a few microseconds longer than normal, there is probably a metal object interfering with it.
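As a rough sketch of that sampling logic (an illustrative model with made-up time constants, not real detector firmware): treat the reflected pulse as an exponential decay and flag a target when the decay takes noticeably longer than the no-metal baseline.

```python
import math

def decay_time_us(initial_v, threshold_v, tau_us):
    """Time (in microseconds) for an exponential decay to fall from initial_v to threshold_v."""
    return tau_us * math.log(initial_v / threshold_v)

BASELINE_TAU_US = 5.0         # assumed decay constant with no metal present
EXTRA_DECAY_TRIGGER_US = 2.0  # extra decay time that counts as a detection

def target_present(measured_tau_us, initial_v=10.0, threshold_v=0.1):
    """Compare the measured decay time against the expected no-metal decay."""
    baseline = decay_time_us(initial_v, threshold_v, BASELINE_TAU_US)
    measured = decay_time_us(initial_v, threshold_v, measured_tau_us)
    return (measured - baseline) > EXTRA_DECAY_TRIGGER_US

target_present(6.0)  # metal slows the decay noticeably -> True
target_present(5.1)  # close to baseline -> False
```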

The sampling circuit sends the tiny, weak signals that it monitors to a device called an integrator. The integrator reads the signals from the sampling circuit, amplifying them and converting them to direct current (DC). The direct current's voltage is fed to an audio circuit, where it is changed into a tone that the metal detector uses to indicate that a target object has been found.

PI-based detectors are not very good at discrimination because the reflected pulse lengths of various metals are not easily separated. However, they are useful in many situations in which VLF-based metal detectors would have difficulty, such as in areas that have highly conductive material in the soil or general environment. A good example of such a situation is salt-water exploration. Also, PI-based systems can often detect metal much deeper in the ground than other systems.

BFO Technology

The most basic way to detect metal uses a technology called beat-frequency oscillator (BFO). In a BFO system, there are two coils of wire. One large coil is in the search head, and a smaller coil is located inside the control box. Each coil is connected to an oscillator that generates thousands of pulses of current per second. The frequency of these pulses is slightly offset between the two coils.

As the pulses travel through each coil, the coil generates radio waves. A tiny receiver within the control box picks up the radio waves and creates an audible series of tones (beats) based on the difference between the frequencies.

If the coil in the search head passes over a metal object, the magnetic field caused by the current flowing through the coil creates a magnetic field around the object. The object's magnetic field interferes with the frequency of the radio waves generated by the search-head coil. As the frequency deviates from the frequency of the coil in the control box, the audible beats change in duration and tone.
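The audible beat is simply the difference between the two oscillator frequencies. A minimal sketch (the frequency values are illustrative assumptions):

```python
def beat_frequency(search_coil_hz, reference_coil_hz):
    """Rate of the audible beats produced when two oscillator outputs are mixed."""
    return abs(search_coil_hz - reference_coil_hz)

REFERENCE_HZ = 100_600  # small coil inside the control box (assumed value)

no_metal   = beat_frequency(100_000, REFERENCE_HZ)  # steady 600 Hz beat
over_metal = beat_frequency(99_250, REFERENCE_HZ)   # detuned search coil shifts the beat
```

The user hears the change from the steady background beat, which is what signals a find on a BFO unit.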

The simplicity of BFO-based systems allows them to be manufactured and sold for a very low cost. But these detectors do not provide the level of control and accuracy provided by VLF or PI systems.

Buried Treasure

Metal detectors are great for finding buried objects. But typically, the object must be within a foot or so of the surface for the detector to find it. Most detectors have a normal maximum depth somewhere between 8 and 12 inches (20 and 30 centimeters). The exact depth varies based on a number of factors:

* The type of metal detector - The technology used for detection is a major factor in the capability of the detector. Also, there are variations and additional features that differentiate detectors that use the same technology. For example, some VLF detectors use higher frequencies than others, while some provide larger or smaller coils. Plus, the sensor and amplification technology can vary between manufacturers and even between models offered by the same manufacturer.
* The type of metal in the object - Some metals, such as iron, create stronger magnetic fields than others.
* The size of the object - A dime is much harder to detect at deep levels than a quarter.
* The makeup of the soil - Certain minerals are natural conductors and can seriously interfere with the metal detector.
* The object's halo - When certain types of metal objects have been in the ground for a long time, they can actually increase the conductivity of the soil around them.
* Interference from other objects - This can be items in the ground, such as pipes or cables, or items above ground, like power lines.

Hobbyist metal detecting is a fascinating world with several sub-groups. Here are some of the more popular activities:

* Coin shooting - looking for coins after a major event, such as a ball game or concert, or just searching for old coins in general
* Prospecting - searching for valuable metals, such as gold nuggets
* Relic hunting - searching for items of historical value, such as weapons used in the U.S. Civil War
* Treasure hunting - researching and trying to find caches of gold, silver or anything else rumored to have been hidden somewhere

Many metal-detector enthusiasts join local or national clubs that provide tips and tricks for hunting. Some of these clubs even sponsor organized treasure hunts or other outings for their members.

Detective Work

In addition to recreational use, metal detectors serve a wide range of utilitarian functions. Mounted detectors usually use some variation of PI technology, while many of the basic handheld scanners are BFO-based.

Some nonrecreational applications for metal detectors are:

* Airport security - screen people before allowing access to the boarding area and the plane (see How Airport Security Works)
* Building security - screen people entering a particular building, such as a school, office or prison
* Event security - screen people entering a sporting event, concert or other large gathering of people
* Item recovery - help someone search for a lost item, such as a piece of jewelry
* Archaeological exploration - find metallic items of historical significance
* Geological research - detect the metallic composition of soil or rock formations

Manufacturers of metal detectors are constantly tuning the process to make their products more accurate, more sensitive and more versatile. On the next page, you will find links to the manufacturers, as well as clubs and more information on metal detecting as a hobby.

Additional Information

A metal detector is an electronic device that finds metal objects nearby. These devices are very useful for discovering metal pieces hidden inside other objects. They can also find metal items buried underground. Most metal detectors have a handheld part with a special sensor. You can sweep this sensor over the ground or other things. If the sensor gets close to metal, you will hear a changing sound in earphones. Sometimes, a needle on a display will move. The closer the metal is, the louder the sound or the higher the needle goes.

Another common type of metal detector is a "walk-through" scanner. These are used for security checks at places like airports. They help find hidden metal weapons on a person's body.

The basic idea behind a metal detector is simple. It uses an electronic circuit to create an alternating current. This current goes through a coil, which then makes an invisible magnetic field. If a piece of metal that conducts electricity comes close to this coil, tiny electric currents called eddy currents are created in the metal. These eddy currents then make their own magnetic field. The metal detector has another coil that measures these magnetic fields. When the magnetic field changes because of a metal object, the detector senses it.

History and Uses of Metal Detectors

The first industrial metal detectors were made in the 1960s. They quickly became popular for finding minerals and for other industrial jobs.

Finding Hidden Treasures and More

Metal detectors have many interesting uses today:

* Finding landmines: They help locate dangerous land mines left behind after wars.
* Security: They are used to find weapons like knives and guns, especially at airports and other secure locations.
* Exploring the Earth: Scientists use them for geophysical prospecting, which means exploring the Earth's surface for minerals.
* Archaeology: Archaeologists use them to find old artifacts buried underground.
* Treasure hunting: Many people use metal detectors as a hobby to search for lost coins, jewelry, and other treasures.

Metal Detectors in Everyday Life

Metal detectors are also used in other important ways:

* Food safety: They help find foreign objects, like small pieces of metal, in food products. This keeps our food safe to eat.
* Construction: In the construction industry, they find steel reinforcing bars inside concrete. They can also locate pipes and wires hidden in walls and floors.


#11 This is Cool » Cation » Yesterday 18:13:42

Jai Ganesh
Replies: 0

Cation

Gist

A cation is an atom or molecule with a net positive electrical charge, formed when a neutral atom loses one or more negatively charged electrons, resulting in more protons than electrons. Most metals readily form cations, such as sodium (Na⁺) or calcium (Ca²⁺), by losing electrons to achieve a stable electron configuration. 

A cation is a positively charged ion, formed when a metal atom loses one or more electrons, resulting in more protons than electrons. Cations possess a net positive charge and are attracted to the cathode in an electric field. Common examples include sodium, calcium, and aluminum.

Cations are positively charged ions, formed when an atom (most often a metal) loses one or more electrons, leaving it with fewer electrons than protons.

Summary

Cations are positively charged ions that result from an atom or group of atoms losing one or more valence electrons. The term comes from the Greek for "going down," reflecting cations' migration toward the cathode in an electrolytic solution. Common examples of cations include sodium (Na⁺) and calcium (Ca²⁺), which typically form when alkali and alkaline earth metals lose their valence electrons to achieve a more stable electron configuration. The formation of cations is crucial to the creation of ionic compounds, such as sodium chloride (NaCl), where cations bond with negatively charged ions, or anions.

The electronic structure of atoms, including the arrangement of electrons in shells and orbitals, plays a significant role in determining the behavior of cations. Elements with similar valence electron configurations often exhibit similar chemical properties, a principle that underpins the organization of the periodic table. Naming conventions for cations are standardized by the International Union of Pure and Applied Chemistry (IUPAC), which includes the use of charge notation for monatomic and polyatomic cations. Understanding cations is fundamental to the study of chemistry, influencing reactions, bonding, and the properties of various compounds. 

Details

A cation is a type of ion that has a positive electric charge. This means it has fewer electrons than protons. The opposite of a cation is an anion, which has a negative charge.

Cations can have only one atom (monatomic cations) or be made of multiple atoms together (polyatomic cations). Most metals form monatomic cations, while polyatomic cations are rarer.

Examples

Most metals make one or more monatomic cations. Alkali metals like sodium can lose one electron to make cations like Na+. Alkaline earth metals like calcium lose two electrons to make cations like Ca2+. These are the only ions these elements form, and so are just named after the element: the sodium cation Na+ is just called "sodium" in compounds like sodium chloride.
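The bookkeeping behind these examples is just protons minus electrons. A toy sketch (the particle counts are standard chemistry, but the helper names are my own):

```python
def net_charge(protons, electrons):
    """Net ionic charge: the number of protons minus the number of electrons."""
    return protons - electrons

def classify(protons, electrons):
    """Label a species as a cation, an anion, or neutral."""
    charge = net_charge(protons, electrons)
    if charge > 0:
        return "cation"
    if charge < 0:
        return "anion"
    return "neutral"

classify(11, 10)  # Na+: sodium loses one electron -> "cation"
classify(20, 18)  # Ca2+: calcium loses two electrons -> "cation"
classify(17, 18)  # Cl-: chlorine gains an electron -> "anion"
```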

Transition metals and post-transition metals can make more than one type of cation: iron forms two cations, Fe2+ and Fe3+. The charge on transition metal cations is usually between +1 (such as silver in silver iodide) and +4 (such as titanium in titanium tetrachloride).

Because transition metal cations can have more than one charge, the charge (formally, the oxidation state) is included in the name of the cation in compounds using Roman numerals. Iron(II) sulfide (FeS) is made of Fe2+ and the sulfide anion S2−. Hematite is made of Fe3+ and the oxide anion O2−, so it is called iron(III) oxide. Sometimes in older sources these cations have specific names: another name for iron(II) is the "ferrous" cation, while iron(III) is the "ferric" cation.

Ammonium is an example of a polyatomic cation. It is made of a nitrogen atom connected to four hydrogen atoms. The formula for ammonium is written NH4+. Ammonium is made when an acid gives a hydrogen ion to a molecule of ammonia.

Additional Information:

Frequently Asked Questions

What is an example of a cation?

Calcium in its most common ionic state is a cation. It has a 2+ charge and thus has two more protons than electrons. Calcium is an important cation in the human body, and its positive charge is needed to complete muscle contractions.

What is the difference between a cation and an anion?

A cation and an anion have a different net charge. Cations have more protons than electrons, and thus have an overall positive charge. Anions have more electrons than protons, and thus have an overall negative charge.

What is a cation?

A cation is any ion that is positively charged. This results in an atom or molecule which has a net positive charge due to the greater number of protons than electrons.

What are cations and anions?

Cations are atoms or molecules that have a positive charge because they contain more protons than electrons. Anions are atoms or molecules that have a negative charge because they contain more electrons than protons.


#12 Dark Discussions at Cafe Infinity » Come Quotes - II » Yesterday 17:33:54

Jai Ganesh
Replies: 0

Come Quotes - II

1. You have to dream before your dreams can come true. - A. P. J. Abdul Kalam

2. Everyone's dream can come true if you just stick to it and work hard. - Serena Williams

3. Hope smiles from the threshold of the year to come, whispering, 'It will be happier.' - Alfred Lord Tennyson

4. Only a man who knows what it is like to be defeated can reach down to the bottom of his soul and come up with the extra ounce of power it takes to win when the match is even. - Muhammad Ali

5. To enjoy good health, to bring true happiness to one's family, to bring peace to all, one must first discipline and control one's own mind. If a man can control his mind he can find the way to Enlightenment, and all wisdom and virtue will naturally come to him. - Buddha

6. I will prepare and some day my chance will come. - Abraham Lincoln

7. I've always believed that if you put in the work, the results will come. - Michael Jordan

8. Death does not concern us, because as long as we exist, death is not here. And when it does come, we no longer exist. - Epicurus.

#13 Re: Jai Ganesh's Puzzles » General Quiz » Yesterday 17:18:34

Hi,

#10745. What does the term in Geography Cuesta mean?

#10746. What does the term in Geography Cultural geography mean?

#14 Re: Jai Ganesh's Puzzles » English language puzzles » Yesterday 17:02:15

Hi,

#5941. What does the noun legroom mean?

#5942. What does the noun legume mean?

#15 Re: Jai Ganesh's Puzzles » Doc, Doc! » Yesterday 16:44:00

Hi,

#2567. What does the medical term Hürthle cell neoplasm mean?

#16 Jokes » Doughnet Jokes - I » Yesterday 16:35:29

Jai Ganesh
Replies: 0

I was on a diet, but I donut care anymore.
* * *
Q: What do you call a pastry that is a priest?
A: A Holy Donut!
* * *
Q: What do you see when the Pillsbury Dough Boy bends over?
A: Doughnuts.
* * *
Q: Why did the donut go to the dentist?
A: To get a filling.
* * *
Q: Do you know how many grams of fat are in a Paczki?
A: Donut patronize me.
* * *

#17 Science HQ » Frequency » Yesterday 16:12:09

Jai Ganesh
Replies: 0

Frequency

Gist

Frequency is the measure of how often a repeating event occurs per unit of time, typically measured in hertz (Hz), meaning cycles per second, and applies to oscillations, waves (sound, light), and rotations. It tells you how many times a wave passes a point or a vibration completes a cycle in one second; higher frequencies mean faster, more frequent events.

Frequency is the number of times a repeating event occurs per unit of time, and its SI unit is the hertz (Hz), one cycle or event per second. It quantifies how often something vibrates, oscillates, or repeats, like sound waves or radio signals, and is the inverse of the period (the time for one cycle).

Summary

Frequency, in physics, is the number of waves that pass a fixed point in unit time; also, the number of cycles or vibrations undergone during one unit of time by a body in periodic motion. A body in periodic motion is said to have undergone one cycle or one vibration after passing through a series of events or positions and returning to its original state. See also angular velocity; simple harmonic motion.

If the period, or time interval, required to complete one cycle or vibration is 1/2 second, the frequency is 2 per second; if the period is 1/100 of an hour, the frequency is 100 per hour. In general, the frequency is the reciprocal of the period, or time interval; i.e., frequency = 1/period = 1/(time interval). The frequency with which the Moon revolves around Earth is slightly more than 12 cycles per year. The frequency of the A string of a violin is 440 vibrations or cycles per second.
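The reciprocal relationship above can be checked with a few lines of Python (a sketch; the function name is my own):

```python
# frequency = 1 / period, in whatever time unit the period uses

def frequency(period):
    """Return the frequency given the period (the reciprocal relationship)."""
    return 1.0 / period

print(frequency(0.5))      # period 1/2 second  -> 2 cycles per second
print(frequency(1 / 100))  # period 1/100 hour  -> 100 cycles per hour
print(frequency(1 / 440))  # period of the violin A string -> 440 cycles per second
```

Note that the unit of the result follows the unit of the period: a period in hours gives a frequency in cycles per hour.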

The symbols most often used for frequency are f and the Greek letters nu (ν) and omega (ω). Nu is used more often when specifying electromagnetic waves, such as light, X-rays, and gamma rays. Omega is usually used to describe the angular frequency—that is, how much an object rotates or revolves in radians per unit time. Usually, frequency is expressed in the hertz unit, named in honour of the 19th-century German physicist Heinrich Rudolf Hertz, one hertz being equal to one cycle per second, abbreviated Hz; one kilohertz (kHz) is 1,000 Hz, and one megahertz (MHz) is 1,000,000 Hz. In spectroscopy another unit of frequency, the wavenumber, the number of waves in a unit of distance, is sometimes used.

Details

Frequency is the number of occurrences of a repeating event per unit of time. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.

The interval of time between events is called the period. It is the reciprocal of the frequency. For example, if a heart beats at a frequency of 120 times per minute (2 hertz), its period is one half of a second.

Special definitions of frequency are used in certain contexts, such as the angular frequency in rotational or cyclical properties, when the rate of angular progress is measured. Spatial frequency is defined for properties that vary or occur repeatedly in geometry or space.

The unit of measurement of frequency in the International System of Units (SI) is the hertz, having the symbol Hz.

Definitions and units

For cyclical phenomena such as oscillations, waves, or examples of simple harmonic motion, the term frequency is defined as the number of cycles or repetitions per unit of time. The conventional symbol for frequency is f; the Greek letter ν (nu) is also used. The period T is the time taken to complete one cycle of an oscillation or rotation. The frequency and the period are related by the equation

f = 1/T.

The term temporal frequency is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time.

The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, cycle per second (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of frequency used with rotating mechanical devices, where it is termed rotational frequency, is revolution per minute, abbreviated r/min or rpm. Sixty rpm is equivalent to one hertz.
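The rpm-to-hertz equivalence stated above amounts to a factor of 60, which can be sketched as (function names are mine):

```python
# 60 revolutions per minute = 1 revolution per second = 1 Hz

def rpm_to_hz(rpm):
    return rpm / 60.0

def hz_to_rpm(hz):
    return hz * 60.0

print(rpm_to_hz(60))    # 1.0 Hz
print(rpm_to_hz(3600))  # 60.0 Hz, e.g. a shaft spinning 3600 times a minute
```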

Stroboscope

An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
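The aliasing caveat at the end of the paragraph can be illustrated numerically (my own toy model, not instrument code): the object appears frozen whenever it advances a whole number of cycles between flashes.

```python
# An object appears stationary under the strobe when its frequency is an
# integer multiple of the strobe frequency: it completes a whole number
# of cycles between flashes.

def appears_stationary(object_hz, strobe_hz, tol=1e-9):
    cycles_between_flashes = object_hz / strobe_hz
    return abs(cycles_between_flashes - round(cycles_between_flashes)) < tol

print(appears_stationary(10.0, 10.0))  # True: frequencies match
print(appears_stationary(20.0, 10.0))  # True: integer multiple also "freezes"
print(appears_stationary(15.0, 10.0))  # False: object visibly moves
```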

Frequency counter

Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.

Heterodyne methods

Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To convert higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
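As a toy model of the beat measurement just described (the reference and signal values below are assumed examples, not real instrument data):

```python
# A heterodyne measurement yields only |f_unknown - f_reference|; the
# unknown therefore lies at f_ref - f_beat or f_ref + f_beat, an
# ambiguity a second measurement must resolve.

def beat_frequency(f_unknown, f_reference):
    return abs(f_unknown - f_reference)

f_ref = 10.0000e9        # assumed 10 GHz reference
f_unknown = 10.0375e9    # signal to be measured
f_beat = beat_frequency(f_unknown, f_ref)
print(f_beat)            # 37.5 MHz beat, low enough for a frequency counter
print((f_ref - f_beat, f_ref + f_beat))  # the two candidate frequencies
```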

Additional Information

Frequency refers to the number of complete waves or cycles that occur in a specific unit of time, typically measured in hertz (Hz), where one hertz equals one cycle per second. This concept is central to understanding various wave phenomena, including sound and electromagnetic waves. The period of a wave, which is the length of time it takes for one complete cycle to occur, is the reciprocal of frequency. For instance, a wave frequency of 100 Hz corresponds to a period of 0.01 seconds.

Frequency, wavelength, and speed are interrelated properties of waves. The speed of a wave depends on the medium through which it travels; for example, sound waves travel at different speeds in air and water. Additionally, the Doppler effect explains how the frequency and wavelength of waves change based on the relative motion of the source and the observer. Frequency also plays a vital role in determining the characteristics of waves, such as whether they are in or out of phase, influencing phenomena like constructive and destructive interference. Overall, frequency is a fundamental concept in both physics and various applications, impacting fields ranging from acoustics to telecommunications.

Cyclic Phenomena

The term "cycle" generally indicates something that goes around in a circle. In physics, "cycle" indicates that a specific property or function has a value that progresses through a succession of other values and returns to the starting value in a precise manner that repeats. Phenomena that exhibit this behavior are associated with either circular or sinusoidal wave motions and properties. Such motions can be described by the same math functions, the sine and cosine.

The sine and cosine functions are themselves ratios of the lengths of two sides of a right triangle. A radius of the circle can be rotated about the center by any amount to form the corresponding angle. A vertical line dropped from the point where the displaced radius meets the circumference forms a right triangle whose base is proportionately shorter than the radius. In this right triangle, the displaced radius is the hypotenuse and the vertical height of the triangle is the "opposite." (The "opposite" is the side of the right triangle that faces the angle formed at the center of the circle.) The base of the triangle is called the "adjacent." The sine of the angle formed by the base and the hypotenuse is just the ratio of the length of the opposite to that of the hypotenuse (i.e., the radius). Likewise, its cosine is the ratio of the length of the adjacent to that of the hypotenuse. The reciprocal values of the sine and cosine are called the "cosecant" and "secant," respectively.

As the radius rotates, the angle that it forms at the center changes continuously. The value of the sine also changes accordingly. A graph of this variation produces the sideways S-shaped curve that is recognized as a sine wave. The value of the cosine follows the same pattern but is shifted from the sine values. The cosine at any angle has the value of the sine of an angle that is greater by 90 degrees.

There are two methods of describing the amount of rotation about the center, or axis of rotation. In one, the amount is stated in degrees of rotation, with one full revolution totaling 360 degrees. The other measurement of angles is in radians (rad). One radian is the angle formed by two radii when the length of the circumference they mark off is equal to the radius of the circle. There are 2π radians in one complete revolution.
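The two angle measures convert by a factor of π/180, as a quick check shows (the conversion function is my own):

```python
import math

def deg_to_rad(degrees):
    # 360 degrees = 2*pi radians, so 1 degree = pi/180 radians
    return degrees * math.pi / 180.0

print(deg_to_rad(180))  # ~3.14159, i.e. pi radians in a half revolution
print(deg_to_rad(360))  # ~6.28318, i.e. 2*pi radians in a full revolution
```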

Properties of Cyclic Phenomena

All cyclic phenomena, whether wavelike or circular, share several characteristics. The primary feature of all of them is that their behaviors or values repeat in the same regular way. The number of times that the cycle of any particular phenomenon repeats in a specific amount of time is its frequency. The most common units of frequency are revolutions per minute (rpm) for rotational movements of physical objects and cycles per second (cps) for wavelike properties. The conventional unit for cps is the hertz (Hz), named in honor of Heinrich Hertz (1857–94) for his contributions to the physics of electromagnetism (EM); the hertz is most often used for EM waves.

The duration of just one cycle of the phenomenon is the period of the cycle. The period is simply the reciprocal of the frequency. For example, a wave frequency of 100 cps corresponds to a period of 1/100, or 0.01, second per cycle. If that same wave propagates at a speed of 100 meters per second (m/s), each cycle moves it through a distance of 1 meter; since this is the distance covered by one complete cycle, it is the wavelength. Wavelength is only used to describe phenomena that travel through space or time; it is not applied to rotational motions.
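The speed-per-cycle reasoning above is just a division, sketched here (function name is mine; the A440 figure assumes the 0 °C air speed quoted below):

```python
# wavelength = distance the wave travels during one period = speed / frequency

def wavelength(speed, frequency):
    return speed / frequency

print(wavelength(100.0, 100.0))  # the example in the text: 1.0 meter
print(wavelength(331.0, 440.0))  # concert A in 0 deg C air, about 0.75 m
```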

Frequency, Wavelength, and Speed

EM waves such as visible light all travel at the same speed, the speed of light. Physical waves, such as sound, travel at different speeds determined by the properties of the medium. The speed of sound in air, for example, is 331 meters per second (about 1,086 feet per second) at 0 degrees Celsius (32 degrees Fahrenheit), and 342 meters per second (about 1,122 feet per second) at 18 degrees Celsius (65 degrees Fahrenheit). In water, the speed of sound is about 1,140 meters per second (about 3,740 feet per second). Sound generally travels faster in liquids and solids than in gases because those media are stiffer, even though they are also denser. Another factor that affects the transmission of both physical and EM waves is the relative motion of the wave source and the observer or receiver of the emitted waves. When the two move toward each other, the apparent frequency of the waves increases and the apparent wavelength decreases, but if they move apart, the apparent frequency decreases and the wavelength increases. This is known as the "Doppler effect." It accounts for the apparent changes to the sound of a passing train, as well as the red shift and blue shift in the light observed from distant stars.
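The Doppler shift for sound can be sketched with the standard textbook formula (not stated explicitly in the text): observed frequency f' = f · (v + v_obs) / (v - v_src), where v is the wave speed and both velocities are taken positive when observer and source move toward each other.

```python
# Doppler effect for sound, standard formula (velocities positive when
# moving toward each other). Example values are assumed, not from the text,
# except the 331 m/s air speed at 0 deg C.

def doppler_sound(f_source, v_wave, v_observer=0.0, v_source=0.0):
    return f_source * (v_wave + v_observer) / (v_wave - v_source)

v_air = 331.0  # m/s at 0 deg C
print(doppler_sound(440.0, v_air, v_source=30.0))   # approaching source: pitch rises
print(doppler_sound(440.0, v_air, v_source=-30.0))  # receding source: pitch falls
```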

For physical waves, speed, frequency, and wavelength are related by speed = frequency × wavelength. For EM waves the speed is fixed at the speed of light, so frequency and wavelength are inversely proportional: as one increases, the other must decrease to keep their product constant.

[Image: the electromagnetic spectrum, showing wavelength and frequency]

#21 This is Cool » Cathode Ray Tube » 2026-02-11 19:23:03

Jai Ganesh
Replies: 0

Cathode Ray Tube

Gist

A cathode ray tube (CRT) is a specialized vacuum tube that displays images, data, or electrical waveforms by projecting a focused beam of electrons onto a phosphorescent screen. It operates via an electron gun that emits and accelerates electrons, which are then deflected magnetically or electrostatically to create visible images. Primarily used in older televisions, computer monitors, and oscilloscopes, CRTs are known for their ability to display real-time, high-contrast images.

A CRT is a display screen that produces pictures from a video signal. It is a type of vacuum tube that displays pictures when electron beams from an electron gun strike a luminous surface. In other words, the CRT produces electron beams, accelerates them to high speed, and deflects them to form pictures on a phosphor screen.

Summary

A cathode-ray tube (CRT) is a vacuum tube that produces images when its phosphorescent surface is struck by electron beams. CRTs can be monochrome (using one electron gun) or colour (typically using three electron guns to produce red, green, and blue images that, when combined, render a multicolour image). They come in a variety of display modes, including CGA (Color Graphics Adapter), VGA (Video Graphics Array), XGA (Extended Graphics Array), and the higher-resolution SVGA (Super Video Graphics Array).

Details

A cathode ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.

In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.

The tube is a glass envelope which is heavy, fragile, and long from front screen face to rear end. Its interior must be close to a vacuum to prevent the emitted electrons from colliding with air molecules and scattering before they hit the tube's face. Thus, the interior is evacuated to less than a millionth of atmospheric pressure. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. This tube makes up most of the weight of CRT TVs and computer monitors.

Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as liquid-crystal display (LCD), plasma display, and OLED displays which are cheaper to manufacture and run, as well as significantly lighter and thinner. Flat-panel displays can also be made in very large sizes whereas 40–45 inches (100–110 cm) was about the largest size of a CRT.

A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.

Additional Information

A cathode-ray tube (CRT) is a specialized vacuum tube in which images are produced when an electron beam strikes a phosphorescent surface. Most desktop computer displays make use of CRTs. The CRT in a computer display is similar to the "picture tube" in a television receiver.

A cathode-ray tube consists of several basic components, as illustrated below. The electron gun generates a narrow beam of electrons. The anodes accelerate the electrons. Deflecting coils produce an extremely low frequency electromagnetic field that allows for constant adjustment of the direction of the electron beam. There are two sets of deflecting coils: horizontal and vertical. (In the illustration, only one set of coils is shown for simplicity.) The intensity of the beam can be varied. The electron beam produces a tiny, bright visible spot when it strikes the phosphor-coated screen.

To produce an image on the screen, complex signals are applied to the deflecting coils, and also to the apparatus that controls the intensity of the electron beam. This causes the spot to race across the screen from left to right, and from top to bottom, in a sequence of horizontal lines called the raster. As viewed from the front of the CRT, the spot moves in a pattern similar to the way your eyes move when you read a single-column page of text. But the scanning takes place at such a rapid rate that your eye sees a constant image over the entire screen.

The illustration shows only one electron gun. This is typical of a monochrome, or single-color, CRT. However, virtually all CRTs today render color images. These devices have three electron guns, one for the primary color red, one for the primary color green, and one for the primary color blue. The CRT thus produces three overlapping images: one in red (R), one in green (G), and one in blue (B). This is the so-called RGB color model.

In computer systems, there are several display modes, or sets of specifications according to which the CRT operates. The most common specification for CRT displays is known as SVGA (Super Video Graphics Array). Notebook computers typically use liquid crystal displays, whose technology is much different from that of CRTs.

The cathode ray tube, or Braun's tube, was invented by the German physicist Karl Ferdinand Braun in 1897 and is today used in computer monitors, TV sets, and oscilloscope tubes. The path of the electrons in a tube filled with a low-pressure rare gas can be observed in a darkened room as a trace of light. The electron beam can be deflected by means of either an electrical or a magnetic field.

Functional principle

• The source of the electron beam is the electron gun, which produces a stream of electrons through thermionic emission at the heated cathode and focuses it into a thin beam by the control grid (or “Wehnelt cylinder”).
• A strong electric field between cathode and anode accelerates the electrons, before they leave the electron gun through a small hole in the anode.
• The electron beam can be deflected by a capacitor or coils in a way which causes it to display an image on the screen. The image may represent electrical waveforms (oscilloscope), pictures (television, computer monitor), echoes of aircraft detected by radar etc.
• When electrons strike the fluorescent screen, light is emitted. 
• The whole configuration is placed in a vacuum tube to avoid collisions between electrons and gas molecules of the air, which would attenuate the beam.

[Image: diagram of a cathode ray tube]

#22 Re: This is Cool » Miscellany » 2026-02-11 18:41:33

2493) Animation

Gist

Animation is the art of creating the illusion of motion by rapidly displaying a sequence of static, slightly different images, often used in entertainment to bring characters to life. It encompasses traditional hand-drawn techniques, computer-generated imagery (CGI), and stop-motion, with 2D, 3D, and digital methods being dominant today. Key principles like timing, spacing, and squash-and-stretch ensure realistic or stylized motion, while software allows for efficient production.

The term 'animation' derives from the Latin animare, meaning "to give life to"; the Japanese word 'anime' is itself a borrowing of the English 'animation.' One of the first animated films was made by a French cartoonist called Émile Cohl. This was the origin of animation; now, let's discuss the definition of animation.

Summary

Animation is a filmmaking technique whereby pictures are generated or manipulated to create moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets to be photographed and exhibited on film. Animation has been recognized as an artistic medium, specifically within the entertainment industry. Many animations are either traditional animations or computer animations made with computer-generated imagery (CGI). Stop motion animation, in particular claymation, is also prominent alongside these other forms, albeit to a lesser degree.

Animation is contrasted with live action, although the two do not exist in isolation. Many filmmakers have produced films that are a hybrid of the two. As CGI increasingly approximates photographic imagery, filmmakers can relatively easily composite 3D animated visual effects (VFX) into their film, rather than using practical effects.

General overview

Computer animation can be very detailed 3D animation, while 2D computer animation (which may have the look of traditional animation) can be used for stylistic reasons, low bandwidth, or faster real-time renderings. Other common animation methods apply a stop motion technique to two- and three-dimensional objects like paper cutouts, puppets, or clay figures.

An animated cartoon, or simply a cartoon, is an animated film, usually short, that features an exaggerated style. This style is often inspired by comic strips, gag cartoons, and other non-animated art forms. Cartoons frequently include anthropomorphic animals, superheroes, or the adventures of human protagonists. The action often revolves around exaggerated physical humor, particularly in predator/prey dynamics (e.g. cats and mice, coyotes and birds), where violent pratfalls such as falls, collisions, and explosions occur, often in ways that would be lethal in real life.

During the late 1980s, the term "cartoon" was shortened to toon, referring to characters in animated productions, or more specifically, cartoonishly-drawn characters. This term gained popularity first in 1988 with the live-action/animated hybrid film Who Framed Roger Rabbit, which introduced ToonTown, a world inhabited by various animated cartoon characters. In 1990, Tiny Toon Adventures embraced the classic cartoon spirit, introducing a new generation of cartoon characters. Then, in 1993, Animaniacs followed, featuring the three rubber-hose-styled Warner siblings, Yakko, Wakko, and Dot Warner, who had been locked in the Warner Bros. water tower since the 1930s and finally escaped in the 1990s.

The illusion of animation—as in motion pictures in general—has traditionally been attributed to the persistence of vision and later to the phi phenomenon and beta movement, but the exact neurological causes are still uncertain. The illusion of motion caused by a rapid succession of images that minimally differ from each other, with unnoticeable interruptions, is a stroboscopic effect. While animators traditionally used to draw each part of the movements and changes of figures on transparent cels that could be moved over a separate background, computer animation is usually based on programming paths between key frames to maneuver digitally created figures throughout a digitally created environment.

Analog mechanical animation media that rely on the rapid display of sequential images include the phenakistiscope, zoetrope, flip book, praxinoscope, and film. Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on computers, technology such as the animated GIF and Flash animation were developed.

In addition to short films, feature films, television series, animated GIFs (Graphics Interchange Format), and other media dedicated to the display of moving images, animation is also prevalent in video games, motion graphics, user interfaces, and visual effects.

The physical movement of image parts through simple mechanics—for instance, moving images in magic lantern shows—can also be considered animation. The mechanical manipulation of three-dimensional puppets and objects to emulate living beings has a very long history in automata. Electronic automata were popularized by Disney as animatronics.

Details

Animation is the art of making inanimate objects appear to move. Animation is an artistic impulse that long predates the movies. History’s first recorded animator is Pygmalion of Greek and Roman mythology, a sculptor who created a figure of a woman so perfect that he fell in love with her and begged Venus to bring her to life. Some of the same sense of magic, mystery, and transgression still adheres to contemporary film animation, which has made it a primary vehicle for exploring the overwhelming, often bewildering emotions of childhood—feelings once dealt with by folktales.

Early history

The theory of the animated cartoon preceded the invention of the cinema by half a century. Early experimenters, working to create conversation pieces for Victorian parlors or new sensations for the touring magic-lantern shows, which were a popular form of entertainment, discovered the principle of persistence of vision. If drawings of the stages of an action were shown in fast succession, the human eye would perceive them as a continuous movement. One of the first commercially successful devices, invented by the Belgian Joseph Plateau in 1832, was the phenakistoscope, a spinning cardboard disk that created the illusion of movement when viewed in a mirror. In 1834 William George Horner invented the zoetrope, a rotating drum lined by a band of pictures that could be changed. The Frenchman Émile Reynaud in 1876 adapted the principle into a form that could be projected before a theatrical audience. Reynaud became not only animation’s first entrepreneur but, with his gorgeously hand-painted ribbons of celluloid conveyed by a system of mirrors to a theater screen, the first artist to give personality and warmth to his animated characters.

With the invention of sprocket-driven film stock, animation was poised for a great leap forward. Although "firsts" of any kind are never easy to establish, the first film-based animator appears to have been J. Stuart Blackton, whose Humorous Phases of Funny Faces in 1906 launched a successful series of animated films for New York's pioneering Vitagraph Company. Later that year, Blackton also experimented with the stop-motion technique, in which objects are photographed, then repositioned and photographed again, for his short film The Haunted Hotel.

In France, Émile Cohl was developing a form of animation similar to Blackton’s, though Cohl used relatively crude stick figures rather than Blackton’s ambitious newspaper-style cartoons. Coinciding with the rise in popularity of the Sunday comic sections of the new tabloid newspapers, the nascent animation industry recruited the talents of many of the best-known artists, including Rube Goldberg, Bud Fisher (creator of Mutt and Jeff) and George Herriman (creator of Krazy Kat), but most soon tired of the fatiguing animation process and left the actual production work to others.

The one great exception among these early illustrators-turned-animators was Winsor McCay, whose elegant, surreal Little Nemo in Slumberland and Dream of the Rarebit Fiend remain pinnacles of comic-strip art. McCay created a hand-colored short film of Little Nemo for use during his vaudeville act in 1911, but it was Gertie the Dinosaur, created for McCay’s 1914 tour, that transformed the art. McCay’s superb draftsmanship, fluid sense of movement, and great feeling for character gave viewers an animated creature who seemed to have a personality, a presence, and a life of her own. The first cartoon star had been born.

McCay made several other extraordinary films, including a re-creation of The Sinking of the Lusitania (1918), but it was left to Pat Sullivan to extend McCay’s discoveries. An Australian-born cartoonist who opened a studio in New York City, Sullivan recognized the great talent of a young animator named Otto Messmer, one of whose casually invented characters—a wily black cat named Felix—was made into the star of a series of immensely popular one-reelers. Designed by Messmer for maximum flexibility and facial expressiveness, the round-headed, big-eyed Felix quickly became the standard model for cartoon characters: a rubber ball on legs who required a minimum of effort to draw and could be kept in constant motion.

Walt Disney

This lesson did not go unremarked by the young Walt Disney, then working at his Laugh-O-gram Films studio in Kansas City, Missouri. His first major character, Oswald the Lucky Rabbit, was a straightforward appropriation of Felix; when he lost the rights to the character in a dispute with his distributor, Disney simply modified Oswald’s ears and produced Mickey Mouse.

Far more revolutionary was Disney’s decision to create a cartoon with the novelty of synchronized sound. Steamboat Willie (1928), Mickey’s third film, took the country by storm. A missing element—sound—had been added to animation, making the illusion of life that much more complete, that much more magical. Later, Disney would add carefully synchronized music (The Skeleton Dance, 1929), three-strip Technicolor (Flowers and Trees, 1932), and the illusion of depth with his multiplane camera (The Old Mill, 1937). With each step, Disney seemed to come closer to a perfect naturalism, a painterly realism that suggested academic paintings of the 19th century. Disney’s resident technical wizard was Ub Iwerks, a childhood friend who followed Disney to Hollywood and was instrumental in the creation of the multiplane camera and the synchronization techniques that made the Mickey Mouse cartoons and the Silly Symphonies series seem so robust and fully dimensional.

For Disney, the final step was, of course, Snow White and the Seven Dwarfs (1937). Although not the first animated feature, it was the first to use up-to-the-minute techniques and the first to receive a wide, Hollywood-style release. Instead of amusing his audience with talking mice and singing cows, Disney was determined to give them as profound a dramatic experience as the medium would allow; he reached into his own troubled childhood to interpret this rich fable of parental abandonment, sibling rivalry, and the onrush of adult passion.

With his increasing insistence on photographic realism in films such as Pinocchio (1940), Fantasia (1940), Dumbo (1941), and Bambi (1942), Disney perversely seemed to be trying to put himself out of business by imitating life too well. That was not the temptation followed by Disney’s chief rivals in the 1930s, all of whom came to specialize in their own kind of stylized mayhem.

The Fleischer brothers

Max and Dave Fleischer had become successful New York animators while Disney was still living in Kansas City. The Fleischers invented the rotoscoping process, still in use today, in which a strip of live-action footage can be traced and redrawn as a cartoon. The Fleischers exploited this technique in their pioneering series Out of the Inkwell (1919–29). It was this series, with its lively interaction between human and drawn figures, that Disney struggled to imitate with his early Alice cartoons.

But if Disney was Mother Goose and Norman Rockwell, the Fleischers (Max produced, Dave directed) were stride piano and red whiskey. Their extremely urban, overcrowded, sexually suggestive, and frequently nightmarish work—featuring the curvaceous torch singer Betty Boop and her two oddly infantile colleagues, Bimbo the Dog and Koko the Clown—charts a twisty route through the American subconscious of the 1920s and ’30s, before collapsing into Disneyesque cuteness with the features Gulliver’s Travels (1939) and Mr. Bug Goes to Town (1941; also released as Hoppity Goes to Town). The studio’s mainstay remained the relatively impersonal Popeye series, based on the comic strip created by Elzie Segar. The spinach-loving sailor was introduced as a supporting player in the Betty Boop cartoon Popeye the Sailor (1933) and quickly ascended to stardom, surviving through 105 episodes until the 1942 short Baby Wants a Bottleship, when the Fleischer studio collapsed and rights to the character passed to Famous Studios.

“Termite Terrace”

Less edgy than the Fleischers but every bit as anarchic were the animations produced by the Warner Bros. cartoon studio, known to its residents as “Termite Terrace.” The studio was founded by three Disney veterans, Rudolph Ising, Hugh Harman, and Friz Freleng, but didn’t discover its identity until Tex Avery, fleeing the Walter Lantz studio at Universal, joined the team as a director. Avery was young and irreverent, and he quickly recognized the talent of staff artists such as Chuck Jones, Bob Clampett, and Bob Cannon. Together they brought a new kind of speed and snappiness to the Warners product, beginning with Gold Diggers of ’49 (1936). With the addition of director Frank Tashlin, musical director Carl W. Stalling, and voice interpreter Mel Blanc, the team was in place to create a new kind of cartoon character: cynical, wisecracking, and often violent. Refined through a series of cartoons, the character finally emerged as Bugs Bunny in Tex Avery’s A Wild Hare (1940). Other characters, some invented and some reinterpreted, arrived, including Daffy Duck, Porky Pig, Tweety and Sylvester, Pepé Le Pew, Foghorn Leghorn, Road Runner, and Wile E. Coyote. Avery left Warner Brothers and in 1942 joined Metro-Goldwyn-Mayer’s moribund animation unit, where, if anything, his work became even wilder in films such as Red Hot Riding Hood (1943) and Bad Luck Blackie (1949).

Animation in Europe

In Europe animation had meanwhile taken a strikingly different direction. Eschewing animated line drawings, filmmakers experimented with widely different techniques: in Russia and later in France, Wladyslaw Starewicz (also billed as Ladislas Starevitch), a Polish art student and amateur entomologist, created stop-motion animation with bugs and dolls; among his most celebrated films are The Cameraman’s Revenge (1912), in which a camera-wielding grasshopper uses the tools of his trade to humiliate his unfaithful wife, and the feature-length The Tale of the Fox (1930), based on German folktales as retold by Johann Wolfgang von Goethe. A Russian working in France, Alexandre Alexeïeff, developed the pinscreen, a board perforated by some 500,000 pins that could be raised or lowered, which created patterns of light and shadow that gave the effect of an animated steel engraving. It took Alexeïeff two years to create A Night on Bald Mountain (1933), which used the music of Modest Mussorgsky; in 1963 Nikolay Gogol was the source of his most widely celebrated film, the dark fable The Nose.

Inspired by the shadow puppet theater of Thailand, Germany’s Lotte Reiniger employed animated silhouettes to create elaborately detailed scenes derived from folktales and children’s books. Her The Adventures of Prince Achmed (1926) may have been the first animated feature; it required more than two years of patient work and earned her the nickname “The Mistress of Shadows,” as bestowed on her by Jean Renoir. Her other works include Dr. Dolittle and His Animals (1928) and shorts based on musical themes by Wolfgang Amadeus Mozart (Papageno, 1935; adapted from The Magic Flute), Gaetano Donizetti (L’elisir d’amore, 1939; “The Elixir of Love”), and Igor Stravinsky (Dream Circus, 1939; adapted from Pulcinella). In the 1950s Reiniger moved to England, where she continued to produce films until her retirement in the ’70s.

Another German-born animator, Oskar Fischinger, took his work in a radically different direction. Abandoning the fairy tales and comic strips that had inspired most of his predecessors, Fischinger took his inspiration from the abstract art that dominated the 1920s. At first he worked with wax figures animated by stop motion, but his most significant films are the symphonies of shapes and sounds he called “colored rhythms,” created from shifting color fields and patterns matched to music by classical composers. He became fascinated by color photography and collaborated on a process called Gasparcolor, which, as utilized in his 1935 film Composition in Blue, won a prize at that year’s Venice Film Festival. The following year, he immigrated to Hollywood, where he worked on special effects for a number of films and was the initial designer of the Toccata and Fugue sequence in Walt Disney’s Fantasia (1940). The Disney artists modified his designs, however, and he asked that his name be removed from the finished film. Through the 1940s and ’50s he balanced his work between experimental films such as Motion Painting No. 1 (1947) and commercials, and he retired from animation in 1961 to devote himself to painting.

Fischinger’s films made a deep impression on the Scottish design student Norman McLaren, who began experimenting with cameraless films—with designs drawn directly on celluloid—as early as 1933 (Seven Till Five). A restless and brilliant researcher, he went to work for John Grierson at the celebrated General Post Office (GPO) Film Unit in London and followed Grierson to Canada in 1941, shortly after the founding of the National Film Board. Supported by government grants, he was able to play out his most radical creative impulses, using watercolors, crayons, and paper cutouts to bring abstract designs to flowing life. Attracted by the possibilities of stop-motion animation, he was able to turn inanimate objects into actors (A Chairy Tale, 1957) and actors into inanimate objects (Neighbours, 1952), a technique he called “pixilation.”

The international success of McLaren’s work (he won an Oscar for Neighbours) opened the possibilities for more personal forms of animation in America. John Hubley, an animator who worked for Disney studios on Snow White, Pinocchio, and Fantasia, left the Disney organization in 1941 and joined the independent animation company United Productions of America in 1945. Working in a radically simplified style, without the depth effects and shading of the Disney cartoons, Hubley created the nearsighted character Mister Magoo for the 1949 short Ragtime Bear. He and his wife, Faith, formed their own studio, Storyboard Productions, in 1955, and they collaborated on a series of increasingly poetic narrative films. They won Oscars for Moonbird (1959) and The Hole (1962). The Hubleys also created a much-admired series of short films based on the jazz improvisations of Dizzy Gillespie, Quincy Jones, and Benny Carter.

The evolution of animation in Eastern Europe was impeded by World War II, but several countries—in particular Poland, Hungary, and Romania—became world leaders in the field by the 1960s. Włodzimierz Haupe and Halina Bielinska were among the first important Polish animators; their Janosik (1954) was Poland’s first animated film, and their Changing of the Guard (1956) employed the stop-action gimmick of animated matchboxes. The collaborative efforts of Jan Lenica and Walerian Borowczyk foresaw the bleak themes and absurdist trends of the Polish school of the 1960s; such films as Był sobie raz… (1957; Once Upon a Time…), Nagrodzone uczucie (1957; Love Rewarded), and Dom (1958; The House) are surreal, pessimistic, plotless, and characterized by a barrage of disturbing images. Borowczyk and Lenica, each of whom went on to a successful solo career, helped launch an industry that produced as many as 120 animated films per year by the early ’60s. Animators such as Miroslaw Kijowicz, Daniel Szczechura, and Stefan Schabenbeck were among the leaders in Polish animation during the second half of the 20th century.

Nontraditional forms

Eastern Europe also became the center of puppet animation, largely because of the sweetly engaging, folkloric work of Jiří Trnka. Based on a Hans Christian Andersen story, Trnka’s The Emperor’s Nightingale became an international success when it was fitted with narration by Boris Karloff and released in 1948. His subsequent work includes ambitious adaptations of The Good Soldier Schweik (1954) and A Midsummer Night’s Dream (1959).

Born in Hungary, George Pal worked as an animator in Berlin, Prague, Paris, and the Netherlands before immigrating to the United States in 1939. There he contracted with Paramount Pictures to produce the Puppetoons series, perhaps the most popular and accomplished puppet animations to be created in the United States. A dedicated craftsman, Pal would produce up to 9,000 model figures for films such as Tulips Shall Grow, his 1942 anti-Nazi allegory. Pal abandoned animation for feature film production in 1947, though in films such as The War of the Worlds (1953) he continued to incorporate elaborate animated special-effects sequences.

Animators in Czechoslovakia and elsewhere took the puppet technique down far darker streets. Jan Švankmajer, for example, came to animation from the experimental theater movement of Prague. His work combines human figures and stop-motion animation to create disturbingly carnal meditations on sexuality and mortality, such as the short Dimensions of Dialogue (1982) and the features Alice (1988), Faust (1994), and Conspirators of Pleasure (1996). Švankmajer’s most dedicated disciples are the Quay brothers, Stephen and Timothy, identical twins born in Philadelphia who moved to London to create a series of meticulous puppet animations steeped in the atmosphere and ironic fatalism of Eastern Europe. Their Street of Crocodiles (1986), obliquely based on the stories of Bruno Schulz, is a parable of obscure import in which a puppet is freed of his strings but remains enslaved by bizarre sexual impulses.

Nick Park, the creator of the Wallace and Gromit series, is the optimist’s answer to the Quay brothers—a stop-motion animator who creates endearing characters and cozy environments that celebrate the security and complacency of provincial English life. He and his colleagues at the British firm Aardman Animations, including founders Peter Lord and Dave Sproxton, have taken the traditionally child-oriented format of clay animation to new heights of sophistication and expressiveness.

More-traditional forms of line animation have continued to be produced in Europe by filmmakers such as France’s Paul Grimault (The King and the Bird, begun in 1948 and released in 1980), Italy’s Bruno Bozzetto (whose 1976 Allegro Non Troppo broadly parodied Fantasia), and Great Britain’s John Halas and Joy Batchelor (Animal Farm, 1955) and Richard Williams (Raggedy Ann and Andy, 1977). George Dunning’s Yellow Submarine (1968) made creative use of the visual motifs of the psychedelic era, luring young adults back to a medium that had largely been relegated to children.

A victim of rising production costs, full-figure, feature-length animation appeared to be dying off until two developments gave it an unexpected boost in the 1980s. The first was the Disney company’s discovery that the moribund movie musical could be revived and made palatable to contemporary audiences by adapting it to cartoon form (The Little Mermaid, 1989); the second was the development of computer animation technology, which greatly reduced expenses while providing for new forms of expression. Although most contemporary animated films use computer techniques to a greater or lesser degree, the finest, purest achievements in the genre are the work of John Lasseter, whose Pixar Animation Studios productions have evolved from experimental shorts, such as Luxo Jr. (1986), to lush features, such as Toy Story (1995; the first entirely computer-animated feature-length film), A Bug’s Life (1998), Finding Nemo (2003), The Incredibles (2004), WALL-E (2008), and Up (2009). Computer techniques are commonly incorporated into traditional line animations, giving films such as Disney’s Mulan (1998) and DreamWorks’s The Road to El Dorado (2000) a visual sweep and dimensionality that would otherwise require countless hours of manual labor.

Contemporary developments

A century after its birth, animation continues to evolve. The most exciting developments are found on two distinct fronts: the anime (“animation”) of Japan and the prime-time television cartoons of the United States. An offspring of the dense, novelistic style of Japanese manga comic books and the cut-rate techniques developed for television production in the 1960s, anime such as Miyazaki Hayao’s Princess Mononoke (1997) are the modern equivalent of the epic folk adventures once filmed by Mizoguchi Kenji (The 47 Ronin, 1941) and Kurosawa Akira (Yojimbo, 1961; “The Bodyguard”). Kon Satoshi’s Perfect Blue (1997) suggests the early Japanese New Wave films of director Oshima Nagisa with its violent exploration of a media-damaged personality.

U.S. television animation, pioneered in the 1950s by William Hanna and Joseph Barbera (Yogi Bear, The Flintstones) was for years synonymous with primitive techniques and careless writing. But with the debut of The Simpsons in 1989, TV animation became home to a kind of mordant social commentary or outright absurdism (John Kricfalusi’s Ren and Stimpy) that was too pointedly aggressive for live-action realism. When Mike Judge’s Beavis and Butt-Head debuted on the MTV network in 1993, the rock-music cable channel discovered that cartoons could push the limits of censorship in ways no live-action television productions could. Following Judge’s success in 1997 were Trey Parker and Matt Stone with South Park, a series centered on foulmouthed kids growing up in the American Rocky Mountain West and rendered in a flat, cutout animation style that would have looked primitive in 1906. The spiritual father of the new television animation is Jay Ward, whose Rocky and His Friends, first broadcast in 1959, turned the threadbare television style into a vehicle for absurdist humor and adult satire.

Despite these boundary-pushing advances, full-figure, traditionally animated films continue to be produced, most notably by Don Bluth (An American Tail, 1986), a Disney dissident who moved his operation to Ireland, and Brad Bird, a veteran of Simpsons minimalism who progressed to the spectacular full technique of The Iron Giant (1999). As digital imaging techniques continue to improve in quality and affordability, it becomes increasingly difficult to draw a clear line between live action and animation. Films such as The Matrix (1999), Star Wars: Episode I – The Phantom Menace (1999), and Gladiator (2000) incorporate backgrounds, action sequences, and even major characters conceived by illustrators and brought to life by technology. Such techniques are no less creations of the animator’s art than were Gertie, Betty Boop, and Bugs Bunny.

Additional Information

Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics.

Computer animation is a digital successor to stop motion and traditional animation. Instead of a physical model or illustration, a digital equivalent is manipulated frame by frame. Computer-generated animation also allows a single graphic artist to produce such content without actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar but advanced slightly in time (usually at a rate of 24, 25, or 30 frames per second). This technique is identical to the way the illusion of movement is achieved with television and motion pictures.
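The frame-replacement cycle described above can be sketched in a few lines of Python. This is only an illustrative sketch: `render` is a hypothetical stand-in for whatever actually produces each image, and the frame rate is simply a parameter.

```python
def render(t):
    """Hypothetical renderer: return the image for scene time t (a stub here)."""
    return f"image at t={t:.3f}s"

def play(duration, fps=24):
    """Produce the sequence of frames shown during `duration` seconds.

    Each successive frame is the same scene advanced slightly in time,
    by 1/fps of a second -- the replacement cycle that creates the
    illusion of motion.
    """
    total_frames = int(duration * fps)          # e.g. 1 s at 24 fps -> 24 frames
    return [render(i / fps) for i in range(total_frames)]

frames = play(1.0, fps=24)
print(len(frames))  # 24 frames for one second of animation
```

In a real player each frame would be drawn to the screen in turn; here the list simply stands in for that display loop.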

To trick the visual system into seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster (a frame is one complete image). At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates.
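The savings from a lower drawing rate are easy to quantify. The small calculation below uses the two rates mentioned in the text (24 fps for realistic motion, 15 fps for conventional hand-drawn cartoons) applied to a hypothetical seven-minute short:

```python
def drawings_needed(seconds, fps):
    """Total images that must be produced for a run of `seconds` at `fps`."""
    return int(seconds * fps)

SEVEN_MINUTES = 7 * 60  # length of a hypothetical theatrical short, in seconds

at_24 = drawings_needed(SEVEN_MINUTES, 24)  # realistic target rate
at_15 = drawings_needed(SEVEN_MINUTES, 15)  # conventional cartoon rate
print(at_24, at_15)  # 10080 6300 -- the lower rate saves 3780 drawings
```

This is why the cheaper rate was so attractive for hand-drawn work, and why fully computed frames remove the incentive to economize.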

Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the appearance of continuous movement.


#23 Re: Dark Discussions at Cafe Infinity » crème de la crème » 2026-02-11 17:33:51

2431) Edward Tatum

Gist:

Work

An organism's metabolism (the chemical processes within its cells) is regulated by substances called enzymes. Edward Tatum and George Beadle proved in 1941 that our genetic code, our genes, governs the formation of enzymes. They exposed a type of mold to X-rays, causing mutations, or changes in its genes. They later succeeded in proving that this led to definite changes in enzyme formation. The conclusion was that each enzyme corresponds to a particular gene.

Summary

Edward L. Tatum (born Dec. 14, 1909, Boulder, Colo., U.S.—died Nov. 5, 1975, New York, N.Y.) was an American biochemist who helped demonstrate that genes determine the structure of particular enzymes or otherwise act by regulating specific chemical processes in living things. His research helped create the field of molecular genetics and earned him (with George Beadle and Joshua Lederberg) the Nobel Prize for Physiology or Medicine in 1958.

Tatum earned his doctorate from the University of Wisconsin in 1934. As a research associate at Stanford University (1937–41), Tatum collaborated with Beadle in an attempt to confirm the following concepts: all biochemical processes in all organisms are ultimately controlled by genes; all these processes are resolvable into series of individual sequential chemical reactions (pathways); each reaction is in some way controlled by a single gene; and the mutation of a single gene results only in an alteration in the ability of the cell to carry out a single chemical reaction.

At Stanford, Tatum and Beadle used X rays to induce mutations in strains of the pink bread mold Neurospora crassa. They found that some of the mutants lost the ability to produce an essential amino acid or vitamin. Tatum and Beadle then crossed these strains with normal strains of the mold and found that their offspring inherited the metabolic defect as a recessive trait, thereby proving that the mutations were in fact genetic defects. Their research showed that when a genetic mutation can be shown to affect a specific chemical reaction, the enzyme catalyzing that reaction will be altered or missing. Thus, they showed that each gene in some way determines the structure of a specific enzyme (the one-gene–one-enzyme hypothesis).

As a professor at Yale University (1945–48), Tatum successfully applied his methods of inducing mutations and studying biochemical processes in Neurospora to bacteria. With Lederberg, he discovered the occurrence of genetic recombination, or “sex,” between Escherichia coli bacteria of the K-12 strain. Largely because of their efforts, bacteria became the primary source of information concerning the genetic control of biochemical processes in the cell.

Tatum returned to Stanford in 1948 and joined the staff of the Rockefeller Institute for Medical Research (now Rockefeller University), New York City, in 1957.

Details

Edward Lawrie Tatum (December 14, 1909 – November 5, 1975) was an American geneticist. He shared half of the Nobel Prize in Physiology or Medicine in 1958 with George Beadle for showing that genes control individual steps in metabolism. The other half of that year's award went to Joshua Lederberg. Tatum was an elected member of the United States National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences.

Education

Edward Lawrie Tatum was born on December 14, 1909, in Boulder, Colorado, to Arthur L. Tatum and Mabel Webb Tatum. Arthur L. Tatum was a chemistry professor who by 1925 was a professor of pharmacology at the University of Wisconsin at Madison.

Edward Lawrie Tatum attended college at the University of Chicago for two years, before transferring to the University of Wisconsin–Madison, where he received his BA in 1931 and PhD in 1934. His dissertation was Studies in the biochemistry of microorganisms (1934).

Career

Starting in 1937, Tatum worked at Stanford University, where he began his collaboration with Beadle. He then moved to Yale University in 1945, where he mentored Lederberg. He returned to Stanford in 1948 and then joined the faculty of the Rockefeller Institute in 1957. He remained there until his death on November 5, 1975, in New York City. A heavy cigarette smoker, Tatum died of heart failure complicated by chronic emphysema. His last wife, Elsie Bergland, died in 1998.

Research

Tatum and Beadle carried out pioneering studies of biochemical mutations in Neurospora, published in 1941. Their work provided a prototype of the investigation of gene action and a new and effective experimental methodology for the analysis of mutations in biochemical pathways. Beadle and Tatum's key experiments involved exposing the bread mold Neurospora crassa to x-rays, causing mutations. In a series of experiments, they showed that these mutations caused changes in specific enzymes involved in metabolic pathways. This led them to propose a direct link between genes and enzymatic reactions, known as the "one gene, one enzyme" hypothesis.

Tatum spent his career studying biosynthetic pathways and the genetics of bacteria. An active area of research in his laboratory was understanding the basis of tryptophan biosynthesis in Escherichia coli. Tatum and his student Joshua Lederberg showed that E. coli could share genetic information through recombination.


#24 Dark Discussions at Cafe Infinity » Come Quotes - I » 2026-02-11 17:13:42

Jai Ganesh
Replies: 0

Come Quotes - I

1. Change will not come if we wait for some other person or some other time. We are the ones we've been waiting for. We are the change that we seek. - Barack Obama

2. All our dreams can come true, if we have the courage to pursue them. - Walt Disney

3. We all have dreams. But in order to make dreams come into reality, it takes an awful lot of determination, dedication, self-discipline, and effort. - Jesse Owens

4. If Tyranny and Oppression come to this land, it will be in the guise of fighting a foreign enemy. - James Madison

5. Strength does not come from physical capacity. It comes from an indomitable will. - Mahatma Gandhi

6. Death is not extinguishing the light; it is only putting out the lamp because the dawn has come. - Rabindranath Tagore

7. Spread love everywhere you go. Let no one ever come to you without leaving happier. - Mother Teresa

8. Strength does not come from winning. Your struggles develop your strengths. When you go through hardships and decide not to surrender, that is strength. - Arnold Schwarzenegger
