Math Is Fun Forum

  Discussion about math, puzzles, games and fun.   Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °

You are not logged in.

#1926 2023-10-10 00:23:09

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1930) Biometry


Biometry is the process of measuring the power of the cornea (keratometry) and the length of the eye, and using this data to determine the ideal intraocular lens power. If this calculation is not performed, or if it is inaccurate, then patients may be left with a significant refractive error.


The terms “Biometrics” and “Biometry” have been used since early in the 20th century to refer to the field of development of statistical and mathematical methods applicable to data analysis problems in the biological sciences.

Statistical methods for the analysis of data from agricultural field experiments to compare the yields of different varieties of wheat, for the analysis of data from human clinical trials evaluating the relative effectiveness of competing therapies for disease, or for the analysis of data from environmental studies on the effects of air or water pollution on the appearance of human disease in a region or country are all examples of problems that would fall under the umbrella of “Biometrics” as the term has been historically used.

The journal “Biometrics” is a scholarly publication sponsored by a non-profit professional society (the International Biometric Society) devoted to the dissemination of accounts of the development of such methods and their application in real scientific contexts.

The term “Biometrics” has also been used to refer to the field of technology devoted to the identification of individuals using biological traits, such as those based on retinal or iris scanning, fingerprints, or face recognition. Neither the journal “Biometrics” nor the International Biometric Society is engaged in research, marketing, or reporting related to this technology. Likewise, the editors and staff of the journal are not knowledgeable in this area. 


Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.


Biostatistics and genetics

Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have, since their beginning, used statistical concepts to understand observed experimental results, and some geneticists even contributed to statistical advances by developing new methods and tools. Gregor Mendel began the study of genetics by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model, with fractions of the heredity coming from each ancestor, composing an infinite series. He called this the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance comes exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.

Resolving these differences also made it possible to define the concept of population genetics and brought together genetics and evolution. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.

Ronald Fisher worked alongside statistician Betty Allan, developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as in Allan's scientific papers. Fisher went on to make many contributions to genetics and statistics, including ANOVA, the p-value concept, Fisher's exact test and Fisher's equation for population dynamics. He is credited with the statement "Natural selection is a mechanism for generating an exceedingly high degree of improbability".

Sewall G. Wright developed F-statistics and methods of computing them, and defined the inbreeding coefficient.
J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.

These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.

In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study.

Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."

Research planning

Any research in the life sciences is proposed to answer a scientific question. To answer this question with high certainty, we need accurate results. Correctly defining the main hypothesis and the research plan will reduce errors when drawing conclusions about a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, the data collection methods, the data analysis perspectives and the costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control.

Research question

The research question defines the objective of a study. The research will be guided by the question, so it needs to be concise; at the same time it should focus on interesting and novel topics that may improve science and knowledge in that field. To define the way the scientific question is asked, an exhaustive literature review might be necessary, so that the research adds value to the scientific community.

Hypothesis definition

Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposal is called the null hypothesis (H0) and is usually based on established knowledge about the topic or an obvious occurrence of the phenomenon, supported by a deep literature review. We can say it is the standard expected answer for the data under the situation being tested. In general, H0 assumes no association between treatments. The alternative hypothesis, on the other hand, is the denial of H0; it assumes some degree of association between the treatment and the outcome. Both hypotheses are sustained by the research question and its expected and unexpected answers.

As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mouse metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects on the animals' metabolism (H1: μ1 ≠ μ2).
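As a sketch of how such a hypothesis might be tested, a Welch's t statistic (a standard two-sample test that does not assume equal variances) can be computed in plain Python. The mouse weight-gain numbers below are hypothetical, invented purely for illustration:

```python
from statistics import mean, variance

def welch_t(sample_1, sample_2):
    """Welch's t statistic for H0: mu1 = mu2 (unequal variances allowed)."""
    m1, m2 = mean(sample_1), mean(sample_2)
    v1, v2 = variance(sample_1), variance(sample_2)  # sample variances (n - 1)
    n1, n2 = len(sample_1), len(sample_2)
    return (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5

# Hypothetical weight gains (grams) for mice on two diets
diet_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
diet_b = [4.2, 4.5, 4.1, 4.4, 4.3, 4.6]

t = welch_t(diet_a, diet_b)
# A |t| far from zero (judged against the t distribution with the
# appropriate degrees of freedom) is evidence against H0: mu1 = mu2.
```

In practice the statistic would be compared against a critical value or converted to a p-value before deciding between H0 and H1.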

The hypothesis is defined by the researcher, according to his or her interest in answering the main question. Besides that, there can be more than one alternative hypothesis; it can assume not only differences across observed parameters, but also their degree of difference (i.e. higher or lower).


Usually, a study aims to understand the effect of a phenomenon on a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections under study: a population is not only the individuals, but the total of one specific component of their organisms, such as the whole genome, or all the sperm cells of animals, or the total leaf area of a plant, for example.

It is not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population, in order to make posterior inferences about that population; the sample should therefore capture most of the variability in the population. The sample size is determined by several things, from the scope of the research to the resources available. In clinical research, the trial type, such as inferiority, equivalence, or superiority, is key in determining sample size.

Experimental design

Experimental designs sustain those basic principles of experimental statistics. There are three basic experimental designs for randomly allocating treatments to all plots of the experiment: the completely randomized design, the randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimate during inference.

In clinical studies, the samples are usually smaller than in other biological studies, and in most cases, the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, where results are usually compared with observational study designs such as case–control or cohort.

Data collection

Data collection methods must be considered in research planning, because they highly influence the sample size and experimental design.

Data collection varies according to type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering presence or intensity of disease, using score criterion to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments.

In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, using score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments and make it possible to evaluate many plots in less time than human-only data collection would allow. Finally, all collected data of interest must be stored in an organized data frame for further analysis.

Additional Information

Biometry is a test to measure the dimensions of the eyeball. These include the curvature of the cornea and the length of the eyeball.

Modern biometry machines commonly use laser interferometry technology to measure the length of the eyeball with very high precision. However, some very mature cataracts can be difficult to measure this way, and the more traditional A-scan ultrasound method may need to be used.

How to perform the test?

The test is performed with the patient resting his or her chin on the chin-rest of the machine. Patients are required to look at a target light and keep their heads very still. The test takes only about 5 minutes to complete. However, in patients with more mature cataracts, more measurements must be taken to ensure accuracy, so the test may take longer than usual to perform.

In patients with very mature cataracts, another test will be needed, using an ultrasound scan probe that gently touches the front of the eye for a few seconds.

How to calculate the lens implant power?

The data from the biometry results are fed into a few lens implant power calculation formulae. The commonest formulae include SRK-T, Hoffer Q, and Haigis.
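The named formulae are more involved, but the much simpler original SRK regression formula (P = A − 2.5L − 0.9K) illustrates how the two biometry measurements, axial length and corneal power, feed the calculation. The axial length, keratometry average, and A-constant used below are hypothetical example values, not a recommendation for surgical planning:

```python
def srk_iol_power(axial_length_mm, mean_k_diopters, a_constant=118.4):
    """Original SRK regression formula for the intraocular lens power (in
    diopters) targeting emmetropia: P = A - 2.5*L - 0.9*K, where L is the
    axial length (mm), K the average keratometry (D), and A the lens
    manufacturer's A-constant (118.4 here is just an illustrative value)."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Hypothetical eye: axial length 23.5 mm, average keratometry 43.5 D
power = srk_iol_power(23.5, 43.5)  # 118.4 - 58.75 - 39.15 = 20.5 D
```

Longer or shorter eyes shift the result substantially, which is one reason the newer theoretical formulae are preferred at the extremes of axial length.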

The results will be assessed by the surgeons to decide on the artificial lens power that can produce the target refraction (glasses prescription) after the operation.

Do I need to wear glasses after a cataract operation?

In most cataract operations, the surgeons aim for patients to focus at distance (emmetropia). Therefore, most patients should be able to walk about or even drive without glasses. A standard lens implant cannot change focus, so patients will need to wear reading glasses for reading, unless they opt for a multifocal lens implant.

Some patients may want to remain short-sighted, so they can read without glasses but wear glasses for distance. Some patients may even want one eye focused for distance and one eye focused for near. These options must be discussed with the surgeons before surgery, so that an agreement can be reached on the desired resulting refraction.

Patients with significant astigmatism (the eyeball is oval rather than perfectly round) before the operation may still need to wear glasses for distance and near after the cataract operation, as a routine cataract operation and standard lens implant do not correct astigmatism. However, some surgeons may perform a special incision technique (limbal relaxing incision) during surgery to reduce the amount of astigmatism, or even insert a toric lens implant to correct higher degrees of astigmatism. This must be discussed with the surgeons before surgery so that the special lens implant can be ordered.

How accurate is the biometry test?

The modern technique of biometry, especially the laser interferometry method, is very accurate. Patients can expect a better than 90% chance of ending up within 1 diopter of the target refraction.

However, in some patients with a very long eyeball (very short-sighted) or a very short eyeball (very long-sighted), the accuracy is a lot lower. Such patients should expect some residual refractive error that will require glasses for both distance and near to see very clearly.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1927 2023-10-11 00:05:56

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1931) Siren


A siren is a device for making a loud warning noise.


A siren is a noisemaking device producing a piercing sound of definite pitch. Used as a warning signal, it was invented in the late 18th century by the Scottish natural philosopher John Robison. The name was given to it by the French engineer Charles Cagniard de La Tour, who devised an acoustical instrument of the type in 1819. A disk with evenly spaced holes around its edge is rotated at high speed, interrupting at regular intervals a jet of air directed at the holes. The resulting regular pulsations cause a sound wave in the surrounding air. The siren is thus classified as a free aerophone. The frequency of its pitch equals the number of air puffs (holes times revolutions) per second. The strident sound results from the high number of overtones (harmonics) present.
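The pitch rule above (puffs per second = holes × revolutions per second) is simple enough to compute directly; the hole count and rotation speed below are illustrative values only:

```python
def siren_pitch_hz(holes, revolutions_per_second):
    """Fundamental frequency of a siren disk: one air puff is produced
    per hole per revolution, so frequency = holes * revolutions/s."""
    return holes * revolutions_per_second

# A disk with 44 holes spinning at 10 revolutions per second
# produces a 440 Hz tone (concert A).
pitch = siren_pitch_hz(44, 10)
```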


A siren is a loud noise-making device. Civil defense sirens are mounted in fixed locations and used to warn of natural disasters or attacks. Sirens are used on emergency service vehicles such as ambulances, police cars, and fire engines. There are two general types: mechanical and electronic.

Many fire sirens (used for summoning volunteer firefighters) serve double duty as tornado or civil defense sirens, alerting an entire community of impending danger. Most fire sirens are either mounted on the roof of a fire station or on a pole next to the fire station. Fire sirens can also be mounted on or near government buildings, on tall structures such as water towers, as well as in systems where several sirens are distributed around a town for better sound coverage. Most fire sirens are single tone and mechanically driven by electric motors with a rotor attached to the shaft. Some newer sirens are electronically driven speakers.

Fire sirens are often called "fire whistles", "fire alarms", or "fire horns". Although there is no standard signaling for fire sirens, some utilize codes to inform firefighters of the location of the fire. Civil defense sirens that double as fire sirens can often produce an alternating "hi-lo" signal (similar to emergency vehicles in many European countries) or an attack signal (slow wail), typically sounded three times, as the fire signal, so as not to confuse the public with the standard civil defense signals of alert (steady tone) and fast wail (fast wavering tone). Fire sirens are often tested once a day at noon and are therefore also called "noon sirens" or "noon whistles".

The first emergency vehicles relied on a bell. Then, in the 1970s, they switched to a duotone airhorn, which was itself overtaken in the 1980s by an electronic wail.


Some time before 1799, the siren was invented by the Scottish natural philosopher John Robison. Robison's sirens were used as musical instruments; specifically, they powered some of the pipes in an organ. Robison's siren consisted of a stopcock that opened and closed a pneumatic tube. It was apparently driven by the rotation of a wheel.

In 1819, an improved siren was developed and named by Baron Charles Cagniard de la Tour. De la Tour's siren consisted of two perforated disks that were mounted coaxially at the outlet of a pneumatic tube. One disk was stationary, while the other disk rotated. The rotating disk periodically interrupted the flow of air through the fixed disk, producing a tone. De la Tour's siren could produce sound under water, suggesting a link with the sirens of Greek mythology; hence the name he gave to the instrument.

Instead of disks, most modern mechanical sirens use two concentric cylinders, which have slots parallel to their length. The inner cylinder rotates while the outer one remains stationary. As air under pressure flows out of the slots of the inner cylinder and then escapes through the slots of the outer cylinder, the flow is periodically interrupted, creating a tone. The earliest such sirens were developed during 1877–1880 by James Douglass and George Slight (1859–1934) of Trinity House; the final version was first installed in 1887 at the Ailsa Craig lighthouse in Scotland's Firth of Clyde. When commercial electric power became available, sirens were no longer driven by external sources of compressed air, but by electric motors, which generated the necessary flow of air via a simple centrifugal fan, which was incorporated into the siren's inner cylinder.

To direct a siren's sound and to maximize its power output, a siren is often fitted with a horn, which transforms the high-pressure sound waves in the siren to lower-pressure sound waves in the open air.

The earliest way of summoning volunteer firemen to a fire was the ringing of a bell, either mounted atop the fire station or in the belfry of a local church. As electricity became available, the first fire sirens were manufactured. In 1886 the French electrical engineer Gustave Trouvé developed a siren to announce the silent arrival of his electric boats. Two early fire siren manufacturers were William A. Box Iron Works, who made the "Denver" sirens as early as 1905, and the Inter-State Machine Company (later the Sterling Siren Fire Alarm Company), who made the ubiquitous Model "M" electric siren, the first dual-tone siren. The popularity of fire sirens took off by the 1920s, with many manufacturers, including the Federal Electric Company and Decot Machine Works, creating their own sirens. Since the 1970s, many communities have deactivated their fire sirens as pagers became available for fire department use. Some sirens still remain as a backup to pager systems.

During the Second World War, the British civil defence used a network of sirens to alert the general population to the imminence of an air raid. A single tone denoted an "all clear". A series of tones denoted an air raid.



The pneumatic siren, which is a free aerophone, consists of a rotating disk with holes in it (called a chopper, siren disk or rotor), such that the material between the holes interrupts a flow of air from fixed holes on the outside of the unit (called a stator). As the holes in the rotating disk alternately prevent and allow air to flow it results in alternating compressed and rarefied air pressure, i.e. sound. Such sirens can consume large amounts of energy. To reduce the energy consumption without losing sound volume, some designs of pneumatic sirens are boosted by forcing compressed air from a tank that can be refilled by a low powered compressor through the siren disk.

In American English usage, vehicular pneumatic sirens are sometimes referred to as mechanical or coaster sirens, to differentiate them from electronic devices. Mechanical sirens driven by an electric motor are often called "electromechanical". One example is the Q2B siren sold by Federal Signal Corporation. Because of its high current draw (100 amps when power is applied), its application is normally limited to fire apparatus, though it has seen increasing use on type IV ambulances and rescue-squad vehicles. Its distinct tone of urgency, high sound pressure level (123 dB at 10 feet), and square sound waves account for its effectiveness.

In Germany and some other European countries, the pneumatic two-tone (hi-lo) siren consists of two sets of air horns, one high-pitched and the other low-pitched. An air compressor blows air into one set of horns, then automatically switches to the other set; as this back-and-forth switching occurs, the sound alternates between the two tones. Its sound power varies, but can reach approximately 125 dB, depending on the compressor and the horns. Compared with mechanical sirens, it uses much less electricity but needs more maintenance.

In a pneumatic siren, the stator is the part which cuts off and reopens air as rotating blades of a chopper move past the port holes of the stator, generating sound. The pitch of the siren's sound is a function of the speed of the rotor and the number of holes in the stator. A siren with only one row of ports is called a single tone siren. A siren with two rows of ports is known as a dual tone siren. By placing a second stator over the main stator and attaching a solenoid to it, one can repeatedly close and open all of the stator ports thus creating a tone called a pulse. If this is done while the siren is wailing (rather than sounding a steady tone) then it is called a pulse wail. By doing this separately over each row of ports on a dual tone siren, one can alternately sound each of the two tones back and forth, creating a tone known as Hi/Lo. If this is done while the siren is wailing, it is called a Hi/Lo wail. This equipment can also do pulse or pulse wail. The ports can be opened and closed to send Morse code. A siren which can do both pulse and Morse code is known as a code siren.


Electronic sirens incorporate circuits such as oscillators, modulators, and amplifiers to synthesize a selected siren tone (wail, yelp, pierce/priority/phaser, hi-lo, scan, airhorn, manual, and a few more) which is played through external speakers. It is not unusual, especially in the case of modern fire engines, to see an emergency vehicle equipped with both types of sirens. Often, police sirens also use the interval of a tritone to help draw attention. The first electronic siren that mimicked the sound of a mechanical siren was invented in 1965 by Motorola employees Ronald H. Chapman and Charles W. Stephens.

Other types

Steam whistles were also used as a warning device where a supply of steam was present, such as at a sawmill or factory. These were common before fire sirens became widely available, particularly in the former Soviet Union. Fire horns, large compressed air horns, also were and still are used as an alternative to a fire siren. Many fire horn systems were wired to fire pull boxes located around a town, and a pull would "blast out" a code corresponding to that box's location. For example, pull box number 233, when pulled, would trigger the fire horn to sound two blasts, followed by a pause, followed by three blasts, followed by a pause, followed by three more blasts. In the days before telephones, this was the only way firefighters would know the location of a fire. The coded blasts were usually repeated several times. This technology was also applied to many steam whistles. Some fire sirens are fitted with brakes and dampers, enabling them to sound out codes as well, but these units tended to be unreliable and are now uncommon.

Physics of the sound

Mechanical sirens blow air through a slotted disk or rotor. The cyclic waves of air pressure are the physical form of sound. In many sirens, a centrifugal blower and rotor are integrated into a single piece of material, spun by an electric motor.

Electronic sirens are high efficiency loudspeakers, with specialized amplifiers and tone generation. They usually imitate the sounds of mechanical sirens in order to be recognizable as sirens.

To improve efficiency, a siren uses a relatively low frequency, usually several hundred hertz, since lower-frequency sound waves travel around corners and through holes better.

Sirens often use horns to aim the pressure waves, which uses the siren's energy more efficiently. Exponential horns achieve similar efficiencies with less material.

The frequency, i.e. the cycles per second of the sound of a mechanical siren is controlled by the speed of its rotor, and the number of openings. The wailing of a mechanical siren occurs as the rotor speeds and slows. Wailing usually identifies an attack or urgent emergency.

The characteristic timbre or musical quality of a mechanical siren arises because its output is approximately a triangle wave when graphed as pressure over time: as the openings widen, the emitted pressure increases, and as they close, it decreases. So the characteristic frequency distribution of the sound has harmonics at odd (1, 3, 5...) multiples of the fundamental, and the power of the harmonics rolls off as the inverse square of their frequency. Distant sirens sound more "mellow" or "warmer" because their harsh high frequencies are absorbed by nearby objects.
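The harmonic structure of a triangle wave can be sketched numerically; the 500 Hz fundamental below is an arbitrary example value:

```python
def triangle_harmonic_amplitudes(fundamental_hz, count):
    """Partials of a triangle wave: only odd multiples of the
    fundamental appear, with relative power falling off as 1/n^2."""
    partials = []
    n = 1
    while len(partials) < count:
        partials.append((n * fundamental_hz, 1.0 / n**2))
        n += 2  # even harmonics are absent in a triangle wave
    return partials

# First three partials of a 500 Hz siren tone:
# (500 Hz, 1), (1500 Hz, 1/9), (2500 Hz, 1/25)
partials = triangle_harmonic_amplitudes(500, 3)
```

The rapid 1/n² rolloff is why absorbing just the upper partials, as distant obstacles do, noticeably "mellows" the tone.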

Two tone sirens are often designed to emit a minor third, musically considered a "sad" sound. To do this, they have two rotors with different numbers of openings. The upper tone is produced by a rotor with a count of openings divisible by six. The lower tone's rotor has a count of openings divisible by five. Unlike an organ, a mechanical siren's minor third is almost always physical, not tempered. To achieve tempered ratios in a mechanical siren, the rotors must either be geared, run by different motors, or have very large numbers of openings. Electronic sirens can easily produce a tempered minor third.
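The gap between the physical 6:5 ratio and a tempered minor third can be checked with a short calculation; the rotor opening counts below are illustrative:

```python
import math

def cents(ratio):
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Two rotors on the same shaft with 12 and 10 openings give a
# frequency ratio of 12/10 = 6/5, a just (untempered) minor third.
just_third = cents(6 / 5)              # about 315.6 cents
tempered_third = cents(2 ** (3 / 12))  # exactly 300 cents (3 semitones)
```

The roughly 16-cent difference is why a mechanical siren's minor third sounds slightly "wide" compared with the same interval on a tempered instrument.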

A mechanical siren that can alternate between its tones uses solenoids to move rotary shutters that cut off the air supply to one rotor, then the other. This is often used to identify a fire warning.

When testing, a frightening sound is not desirable, so electronic sirens usually emit musical tones instead: the Westminster chimes are common. Mechanical sirens sometimes self-test by "growling", i.e. operating at low speed.

In music

Sirens are also used as musical instruments. They have been prominently featured in works by avant-garde and contemporary classical composers. Examples include Edgard Varèse's compositions Amériques (1918–21, rev. 1927), Hyperprism (1924), and Ionisation (1931); Arseny Avraamov's Symphony of Factory Sirens (1922); George Antheil's Ballet Mécanique (1926); Dmitri Shostakovich's Symphony No. 2 (1927); and Henry Fillmore's "The Klaxon: March of the Automobiles" (1929), which features a klaxophone.

In popular music, sirens have been used in The Chemical Brothers' "Song to the Siren" (1992) and in a CBS News 60 Minutes segment played by percussionist Evelyn Glennie. A variation of a siren, played on a keyboard, provides the opening notes of the REO Speedwagon song "Ridin' the Storm Out". Some heavy metal bands also use air-raid-style siren intros at the beginning of their shows. The opening measure of "Money City Maniacs" (1998) by the Canadian band Sloan uses multiple overlapping sirens.


Approvals or certifications

Governments may have standards for vehicle-mounted sirens. For example, in California, sirens are designated Class A or Class B. A Class A siren is loud enough that it can be mounted nearly anywhere on a vehicle. Class B sirens are not as loud and must be mounted on a plane parallel to the level roadway and parallel to the direction the vehicle travels when driving in a straight line.

Sirens must also be approved by local agencies, in some cases. For example, the California Highway Patrol approves specific models for use on emergency vehicles in the state. The approval is important because it ensures the devices perform adequately. Moreover, using unapproved devices could be a factor in determining fault if a collision occurs.

The SAE International Emergency Warning Lights and Devices committee oversees the SAE emergency vehicle lighting practices and the siren practice, J1849. This practice was updated through cooperation between the SAE and the National Institute of Standards and Technology. Though this version remains quite similar to the California Title 13 standard for sound output at various angles, this updated practice enables an acoustic laboratory to test a dual speaker siren system for compliant sound output.

Best practices

Siren speakers, or mechanical sirens, should always be mounted ahead of the passenger compartment. This reduces the noise for occupants and makes two-way radio and mobile telephone audio more intelligible during siren use. It also puts the sound where it will be useful. A 2007 study found passenger compartment sound levels could exceed 90 dB(A).

Research has shown that sirens mounted behind the engine grille or under the wheel arches produce less unwanted noise inside the passenger cabin and to the side and rear of the vehicle, while maintaining noise levels that give adequate warning. The inclusion of broadband sound in sirens can improve their localisation, as in a directional siren, because a spread of frequencies exploits the three ways the brain detects the direction of a sound: interaural level difference, interaural time difference, and the head-related transfer function.

The worst installations are those where the siren sound is emitted above and slightly behind the vehicle occupants such as cases where a light-bar mounted speaker is used on a sedan or pickup. Vehicles with concealed sirens also tend to have high noise levels inside. In some cases, concealed or poor installations produce noise levels which can permanently damage vehicle occupants' hearing.

Electric-motor-driven mechanical sirens may draw 50 to 200 amperes at 12 volts (DC) when spinning up to operating speed. Appropriate wiring and transient protection for engine control computers is a necessary part of an installation. Wiring should be similar in size to the wiring to the vehicle engine starter motor. Mechanical vehicle mounted devices usually have an electric brake, a solenoid that presses a friction pad against the siren rotor. When an emergency vehicle arrives on-scene or is cancelled en route, the operator can rapidly stop the siren.

Multi-speaker electronic sirens are often alleged to have dead spots at certain angles to the vehicle's direction of travel. These are caused by phase differences: sound from the speaker array can cancel at certain single frequencies, determined by the spacing of the speakers, and the same phase differences produce reinforcement at other frequencies. However, sirens are designed to sweep the frequency of their sound output over a range of, typically, no less than one octave, which minimizes the effects of phase cancellation. The result is that the average sound output from a dual-speaker siren system is 3 dB greater than that of a single-speaker system.
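
To illustrate the dead-spot effect, the cancellation (null) frequencies for a pair of in-phase speakers can be estimated from the speaker spacing and the listener's angle off the forward axis: destructive interference occurs where the path difference equals an odd multiple of half a wavelength. This is only a far-field sketch with a hypothetical function name; real installations involve reflections, baffles and vehicle bodywork that this model ignores.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def null_frequencies(spacing_m, angle_deg, f_max=3000.0):
    """Frequencies (Hz) at which two in-phase speakers cancel for a
    far-field listener at the given angle off the forward axis.

    Cancellation occurs when the path difference d * sin(theta)
    equals an odd multiple of half a wavelength.
    """
    path_diff = spacing_m * math.sin(math.radians(angle_deg))
    if path_diff == 0:
        return []  # on-axis listener: no path difference, no nulls
    nulls = []
    n = 0
    while True:
        f = (n + 0.5) * SPEED_OF_SOUND / path_diff
        if f > f_max:
            break
        nulls.append(round(f, 1))
        n += 1
    return nulls

# Speakers 0.5 m apart, listener 30 degrees off axis:
# nulls at 686 Hz and 2058 Hz within a 3 kHz band
```

Because the nulls sit at isolated frequencies, a siren sweeping over an octave or more spends only a small fraction of each cycle in any one null, which is why the sweep masks the cancellation in practice.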


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1928 2023-10-12 00:02:27

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1932) Wavelength


Wavelength is the distance between corresponding points of two consecutive waves. “Corresponding points” refers to two points or particles in the same phase—i.e., points that have completed identical fractions of their periodic motion. Usually, in transverse waves (waves with points oscillating at right angles to the direction of their advance), wavelength is measured from crest to crest or from trough to trough; in longitudinal waves (waves with points vibrating in the same direction as their advance), it is measured from compression to compression or from rarefaction to rarefaction. Wavelength is usually denoted by the Greek letter lambda; it is equal to the speed (v) of a wave train in a medium divided by its frequency (f).
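
The relation λ = v/f can be checked with a one-line calculation; the helper name below is illustrative, and the sound speed of 343 m/s assumes air at room temperature.

```python
def wavelength(speed, frequency):
    """lambda = v / f: wave speed divided by frequency."""
    return speed / frequency

# Concert pitch A (440 Hz) in air, where sound travels at roughly 343 m/s:
# wavelength(343.0, 440.0) is about 0.78 m
```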


Wavelength is the distance between identical points (adjacent crests) in the adjacent cycles of a waveform signal propagated in space or along a wire. In wireless systems, this length is usually specified in meters (m), centimeters (cm) or millimeters (mm). In the case of infrared (IR), visible light, ultraviolet (UV), and gamma radiation (γ), the wavelength is more often specified in nanometers (nm), which are units of 10^-9 m, or angstroms (Å), which are units of 10^-10 m.

Wavelength is inversely related to frequency, which refers to the number of wave cycles per second. The higher the frequency of the signal, the shorter the wavelength.

A sound wave is the pattern of disturbance caused by the movement of energy traveling through a medium, such as air, water or any other liquid or solid matter, as it propagates away from the source of the sound. A water wave is an example of a wave that involves a combination of longitudinal and transverse motions. An electromagnetic wave is created as a result of vibrations between an electric field and a magnetic field.

Instruments such as optical spectrometers or optical spectrum analyzers can be used to detect wavelengths in the electromagnetic spectrum.

Wavelengths are measured in kilometers (km), meters, millimeters, micrometers (μm) and even smaller denominations, including nanometers, picometers (pm) and femtometers (fm). The smaller units are used to measure the shorter wavelengths on the electromagnetic spectrum, such as UV radiation, X-rays and gamma rays. Conversely, radio waves have much longer wavelengths, reaching anywhere from 1 mm to 100 km, depending on the frequency.

If f is the frequency of the signal as measured in megahertz (MHz) and the Greek letter lambda λ is the wavelength as measured in meters, then:

λ = 300/f

and, conversely:

f = 300/λ
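
These rules of thumb follow from the speed of light, c ≈ 3 × 10^8 m/s, since 300/f (with f in MHz) is just c/f with the units pre-converted. A minimal sketch comparing the shortcut with the exact value (the function names are illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def wavelength_m(f_mhz):
    """Rule-of-thumb wavelength in metres: lambda = 300 / f (f in MHz)."""
    return 300.0 / f_mhz

def wavelength_exact_m(f_mhz):
    """Exact wavelength in metres: lambda = c / f."""
    return SPEED_OF_LIGHT / (f_mhz * 1e6)

# FM broadcast at 100 MHz: the shortcut gives 3.0 m,
# while the exact value is about 2.998 m
```

The shortcut overstates the wavelength by under 0.1%, which is negligible for antenna sizing and similar back-of-envelope work.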

The distance between repetitions in the waves indicates where the wavelength is on the electromagnetic radiation spectrum, which includes radio waves in the audio range and waves in the visible light range.

A wavelength can be calculated by dividing the velocity of a wave by its frequency. This is often expressed as the equation λ = v/f.

Here, λ represents wavelength, expressed in meters; v is wave velocity, measured in meters per second (m/s); and f stands for frequency, measured in hertz (Hz).

Wavelength division multiplexing

In the 1990s, fiber optic cable's ability to carry data was significantly increased with the development of wavelength division multiplexing (WDM). This technique was introduced by AT&T's Bell Labs, which established a way to split a beam of light into different wavelengths that could travel through the fiber independently of one another.

WDM, along with dense WDM (DWDM) and other methods, permits a single optical fiber to transmit multiple signals at the same time. As a result, capacity can be added to existing optical networks, also called photonic networks.

The three most common wavelengths in fiber optics are 850 nm, 1,300 nm and 1,550 nm.


Waveform describes the shape or form of a wave signal. The term wave is typically applied to an acoustic signal or a cyclical electromagnetic signal because each resembles waves in a body of water.

There are four basic types of waveforms:

* Sine wave. The voltage increases and decreases in a steady curve. Sine waves can be found in sound waves, light waves and water waves. Additionally, the alternating current voltage provided in the public power grid is in the form of a sine wave.
* Square wave. The square wave represents a signal where voltage simply turns on, stays on for a while, turns off, stays off for a while and repeats. It's called a square wave because the graph of a square wave shows sharp, right-angle turns. Square waves are found in many electronic circuits.
* Triangle wave. In this wave, the voltage increases in a straight line until it reaches a peak value, and then it decreases in a straight line. If the voltage reaches zero and starts to rise again, the triangle wave is a form of direct current (DC). However, if the voltage crosses zero and goes negative before it starts to rise again, the triangle wave is a form of alternating current (AC).
* Sawtooth wave. The sawtooth wave is a hybrid of a triangle wave and a square wave. In most sawtooth waves, the voltage increases in a straight line until it reaches its peak voltage, and then the voltage drops instantly -- or almost instantly -- to zero and repeats immediately.
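
The four shapes above can be sketched numerically. The following is an illustrative pure-Python sampler (the function name is hypothetical) that returns one period of each waveform at unit amplitude:

```python
import math

def sample_waveform(kind, n=8):
    """Return n evenly spaced samples of one period, amplitude in [-1, 1]."""
    samples = []
    for i in range(n):
        t = i / n  # fraction of the period, in [0, 1)
        if kind == "sine":
            v = math.sin(2 * math.pi * t)
        elif kind == "square":
            v = 1.0 if t < 0.5 else -1.0   # on for half the period, then off
        elif kind == "triangle":
            v = 1.0 - 4.0 * abs(t - 0.5)   # straight rise, straight fall
        elif kind == "sawtooth":
            v = 2.0 * t - 1.0              # straight rise, instant drop at wrap
        else:
            raise ValueError(f"unknown waveform: {kind}")
        samples.append(v)
    return samples
```

Feeding these samples to a digital-to-analog converter at a fixed rate would reproduce the corresponding audio or control signal; here they simply make the four shapes concrete.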

Relationship between frequency and wavelength

Wavelength and frequency of light are closely related: The higher the frequency, the shorter the wavelength, and the lower the frequency, the longer the wavelength. The energy of a wave is directly proportional to its frequency but inversely proportional to its wavelength. That means the greater the energy, the larger the frequency and the shorter the wavelength. Given the relationship between wavelength and frequency, short wavelengths are more energetic than long wavelengths.

Electromagnetic waves always travel at the same speed in a vacuum: 299,792 kilometers per second (km/s). In the electromagnetic spectrum, there are numerous types of waves with different frequencies and wavelengths. However, they're all related by one equation: The frequency of any electromagnetic wave multiplied by its wavelength equals the speed of light.

Wavelengths in wireless networks

Although frequencies are more commonly discussed in wireless networking, wavelengths are also an important factor in Wi-Fi networks. Wi-Fi operates at five frequencies, all in the gigahertz range: 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz and 5.9 GHz. Higher frequencies have shorter wavelengths, and signals with shorter wavelengths have more trouble penetrating obstacles like walls and floors.

As a result, wireless access points that operate at higher frequencies -- with shorter wavelengths -- often consume more power to transmit data at similar speeds and distances achieved by devices that operate at lower frequencies -- with longer wavelengths.
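
Using c = fλ, the Wi-Fi bands listed above all have wavelengths of a few centimetres, which is why obstacles on the scale of walls and floors attenuate them so readily. A quick illustrative calculation (the helper name is hypothetical):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def wavelength_cm(f_ghz):
    """Wavelength in centimetres for a frequency in GHz (from c = f * lambda)."""
    return SPEED_OF_LIGHT / (f_ghz * 1e9) * 100.0

# The five Wi-Fi bands mentioned above:
wifi_wavelengths = {f: round(wavelength_cm(f), 1) for f in (2.4, 3.6, 4.9, 5.0, 5.9)}
# 2.4 GHz works out to about 12.5 cm and 5.9 GHz to about 5.1 cm
```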

Additional Information

In physics and mathematics, wavelength or spatial period of a wave or periodic function is the distance over which the wave's shape repeats. In other words, it is the distance between consecutive corresponding points of the same phase on the wave, such as two adjacent crests, troughs, or zero crossings. Wavelength is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. The inverse of the wavelength is called the spatial frequency. Wavelength is commonly designated by the Greek letter lambda (λ). The term "wavelength" is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids.

Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths.

Wavelength depends on the medium (for example, vacuum, air, or water) that a wave travels through. Examples of waves are sound waves, light, water waves and periodic electrical signals in a conductor. A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary.

The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1929 2023-10-13 00:10:12

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1933) Toothbrush


A toothbrush is a small brush with a long handle that you use to clean your teeth.


A toothbrush is an oral hygiene tool used to clean the teeth, gums, and tongue. It consists of a head of tightly clustered bristles, atop which toothpaste can be applied, mounted on a handle which facilitates the cleaning of hard-to-reach areas of the mouth. Toothbrushes should be used in conjunction with something that cleans between the teeth where the bristles cannot reach, such as floss, tape or interdental brushes.

They are available with different bristle textures, sizes, and forms. Most dentists recommend using a soft toothbrush since hard-bristled toothbrushes can damage tooth enamel and irritate the gums.

Because many common and effective ingredients in toothpaste are harmful if swallowed in large doses and should instead be spat out, teeth are most often brushed at a sink in the kitchen or bathroom. There the brush can be rinsed afterwards to remove any remaining debris, and then dried to discourage germ growth (and, on a wooden toothbrush, mold).

Some toothbrushes have plant-based handles, often bamboo. However, numerous others are made of cheap plastic; such brushes constitute a significant source of pollution. Over 1 billion toothbrushes are disposed of into landfills annually in the United States alone. Bristles are commonly made of nylon (which, being a plastic, is not biodegradable, but may still be recycled) or bamboo viscose.



Before the invention of the toothbrush, a variety of oral hygiene measures had been used. This has been verified by excavations during which tree twigs, bird feathers, animal bones and porcupine quills were recovered.

The predecessor of the toothbrush is the chew stick. Chew sticks were twigs with frayed ends used to brush the teeth, while the other end served as a toothpick. The earliest chew sticks, dating from 3500 BC, were discovered in Sumer in southern Mesopotamia; others were found in an Egyptian tomb dating from 3000 BC, and chew sticks are mentioned in Chinese records dating from 1600 BC.

The Indian way of using tooth wood for brushing is presented by the Chinese Monk Yijing (635–713 CE) when he describes the rules for monks in his book: "Every day in the morning, a monk must chew a piece of tooth wood to brush his teeth and scrape his tongue, and this must be done in the proper way. Only after one has washed one's hands and mouth may one make salutations. Otherwise both the saluter and the saluted are at fault. In Sanskrit, the tooth wood is known as the dantakastha—danta meaning tooth, and kastha, a piece of wood. It is twelve finger-widths in length. The shortest is not less than eight finger-widths long, resembling the little finger in size. Chew one end of the wood well for a long while and then brush the teeth with it."

The Greeks and Romans used toothpicks to clean their teeth, and toothpick-like twigs have been excavated in Qin dynasty tombs. Chew sticks remain common in Africa and the rural Southern United States, and in the Islamic world the use of the chewing stick miswak is considered a pious action; its use has been prescribed before each of the five daily prayers. Miswaks have been used by Muslims since the 7th century. Twigs of the neem tree have been used by ancient Indians. Neem, in its full bloom, can aid in healing by keeping the area clean and disinfected. Even today, neem twigs called datun are used for brushing teeth in India, although the practice is no longer widespread.


The first bristle toothbrush resembling the modern one was found in China. Used during the Tang dynasty (619–907), it consisted of hog bristles. The bristles were sourced from hogs living in Siberia and northern China, because the colder temperatures produced firmer bristles. They were attached to a handle manufactured from bamboo or bone, forming a toothbrush. In 1223, Japanese Zen master Dōgen Kigen recorded in his Shōbōgenzō that he saw monks in China clean their teeth with brushes made of horsetail hairs attached to an ox-bone handle. The bristle toothbrush was brought from China to Europe by travellers and was adopted there during the 17th century. The earliest identified use of the word toothbrush in English was in the autobiography of Anthony Wood, who wrote in 1690 that he had bought a toothbrush from J. Barret. Europeans found the hog-bristle toothbrushes imported from China too firm and preferred softer toothbrushes made from horsehair. Mass-produced toothbrushes made with horse or boar bristle continued to be imported to Britain from China until the mid-20th century.

In the UK, William Addis is believed to have produced the first mass-produced toothbrush in 1780. In 1770, he had been jailed for causing a riot. While in prison he decided that using a rag with soot and salt on the teeth was ineffective and could be improved. After saving a small bone from a meal, he drilled small holes into it, passed through them tufts of bristles he had obtained from one of the guards, and sealed the holes with glue. After his release, he became wealthy after starting a business manufacturing toothbrushes. He died in 1808, bequeathing the business to his eldest son. It remained within family ownership until 1996. Under the name Wisdom Toothbrushes, the company now manufactures 70 million toothbrushes per year in the UK. By 1840 toothbrushes were being mass-produced in Britain, France, Germany, and Japan. Pig bristles were used for cheaper toothbrushes and badger hair for the more expensive ones.

Hertford Museum in Hertford, UK, holds approximately 5000 brushes that make up part of the Addis Collection. The Addis factory on Ware Road was a major employer in the town until 1996. Since the closure of the factory, Hertford Museum has received photographs and documents relating to the archive, and collected oral histories from former employees.

The first patent for a toothbrush was granted to H.N. Wadsworth in 1857 (U.S.A. Patent No. 18,653) in the United States, but mass production in the United States did not start until 1885. The improved design had a bone handle with holes bored into it for the Siberian boar hair bristles. Unfortunately, animal bristle was not an ideal material as it retained bacteria, did not dry efficiently and the bristles often fell out. In addition to bone, handles were made of wood or ivory. In the United States, brushing teeth did not become routine until after World War II, when American soldiers had to clean their teeth daily.

During the 1900s, celluloid gradually replaced bone handles. Natural animal bristles were also replaced by synthetic fibers, usually nylon, by DuPont in 1938. The first nylon bristle toothbrush made with nylon yarn went on sale on February 24, 1938. The first electric toothbrush, the Broxodent, was invented in Switzerland in 1954. By the turn of the 21st century nylon had come to be widely used for the bristles and the handles were usually molded from thermoplastic materials.

Johnson & Johnson, a leading medical supplies firm, introduced the "Reach" toothbrush in 1977. It differed from previous toothbrushes in three ways: it had an angled head, similar to dental instruments, to reach back teeth; the bristles were concentrated more closely than usual to clean each tooth of potentially cariogenic (cavity-causing) materials; and the outer bristles were longer and softer than the inner bristles. Other manufacturers soon followed with other designs aimed at improving effectiveness. In spite of the changes with the number of tufts and the spacing, the handle form and design, the bristles were still straight and difficult to maneuver. In 1978 Dr. George C. Collis developed the Collis Curve toothbrush which was the first toothbrush to have curved bristles. The curved bristles follow the curvature of the teeth and safely reach in between the teeth and into the sulcular areas.

In January 2003, the toothbrush was selected as the number one invention Americans could not live without according to the Lemelson-MIT Invention Index.

Types of toothbrush:

Multi-sided toothbrushes

A six-sided toothbrush brushes all sides of the teeth, in both the upper and lower jaw, at the same time, making a multi-sided toothbrush a fast and easy way to brush the teeth.

Electric toothbrush

It has been discovered that, compared with regular side-to-side brushing with a manual brush, the multi-directional power brush might reduce the incidence of gingivitis and plaque. These brushes tend to be more costly and more damaging to the environment than manual toothbrushes. Most studies report performances equivalent to those of manual brushing, possibly with a decrease in plaque and gingivitis. An additional timer and pressure sensors can encourage a more efficient cleaning process. Electric toothbrushes can be classified, according to the speed of their movements, as standard power toothbrushes, sonic toothbrushes, or ultrasonic toothbrushes. Any electric toothbrush is technically a powered toothbrush. If the motion of the toothbrush is sufficiently rapid to produce a hum in the audible frequency range (20 Hz to 20,000 Hz), it can be classified as a sonic toothbrush. Any electric toothbrush with movement faster than this limit can be classified as an ultrasonic toothbrush. Certain ultrasonic toothbrushes, such as the Megasonex and the Ultreo, have both sonic and ultrasonic movements.
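
The classification by motion frequency described above can be sketched as a simple rule. The function name is hypothetical; the 20 Hz to 20,000 Hz thresholds are the audible range stated in the text.

```python
def classify_powered_toothbrush(strokes_per_second):
    """Classify a powered toothbrush by its motion frequency.

    Motion in the audible range (20 Hz - 20,000 Hz) produces a hum
    and counts as 'sonic'; anything faster is 'ultrasonic'.
    """
    if strokes_per_second > 20_000:
        return "ultrasonic"
    if strokes_per_second >= 20:
        return "sonic"
    return "standard power"

# e.g. a brush at 260 strokes/s is "sonic";
# one driven at 1.6 MHz is "ultrasonic"
```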

There are different electric toothbrush heads designed for sensitive teeth and gums, increased stain removal, or different-sized bristles for tight or gapped teeth. The hand motion with an electric toothbrush is different from a manual toothbrush. They are meant to have the bristles do the work by just placing and moving the toothbrush. Fewer back and forth strokes are needed.

Interdental brush

An interdental or interproximal ("proxy") brush is a small brush, typically disposable, either supplied with a reusable angled plastic handle or an integral handle, used for cleaning between teeth and between the wires of dental braces and the teeth.

The use of interdental brushes in conjunction with tooth brushing has been shown to reduce both the amount of plaque and the incidence of gingivitis when compared to tooth brushing alone. Although there is some evidence that after tooth brushing with a conventional tooth brush, interdental brushes remove more plaque than dental floss, a systematic review reported insufficient evidence to determine such an association.

The size of interdental brushes is standardized in ISO 16409. The brush size, which is a number between 0 (small space between teeth) and 8 (large space), indicates the passage hole diameter. This corresponds to the space between two teeth that is just sufficient for the brush to go through without bending the wire. The color of the brushes differs between producers. The same is the case with respect to the wire diameter.

End-tuft brush

The small round brush head comprises seven tufts of tightly packed soft nylon bristles, trimmed so the bristles in the center can reach deeper into small spaces. The brush handle is ergonomically designed for a firm grip, giving the control and precision necessary to clean where most other cleaning aids cannot reach. These areas include the posterior of the wisdom teeth (third molars), orthodontic structures (braces), crowded teeth, and tooth surfaces that are next to missing teeth. It can also be used to clean areas around implants, bridges, dentures and other appliances.

Chewable toothbrush

A chewable toothbrush is a miniature plastic moulded toothbrush which can be placed inside the mouth. While not commonly used, they are useful to travelers and are sometimes available from bathroom vending machines. They are available in different flavors such as mint or bubblegum and should be disposed of after use. Other types of disposable toothbrushes include those that contain a small breakable plastic ball of toothpaste on the bristles, which can be used without water.

Musical toothbrush

A musical toothbrush is a type of manual or powered toothbrush designed to make the tooth-brushing habit more interesting. It is most commonly aimed at children, to hold their attention and positively influence their brushing behavior. The music starts when the child begins brushing, plays continuously while the child brushes, and stops when the child stops.

Tooth brushing:

Hygiene and care

It is not recommended to share toothbrushes with others, since besides general hygienic concerns, there is a risk of transmitting diseases that are typically transmittable by blood, such as Hepatitis C.

After use, it is advisable to rinse the toothbrush with water, shake it off and let the toothbrush dry.

Studies have shown that brushing to remove dental plaque more often than every 48 hours is enough to maintain gum and tooth health. Tooth brushing can remove plaque up to one millimeter below the gum line, and each person has a habitual brushing method, so more frequent brushing does not cover additional parts of the teeth or mouth. Most dentists recommend that patients brush twice a day, in the hope that more frequent brushing will clean more areas of the mouth. Tooth brushing is the most common preventive healthcare activity, but tooth and gum disease remain high, since lay people clean at most 40% of their tooth margins at the gum line. Videos show that even when asked to brush their best, they do not know how to clean effectively.

Adversity of toothbrushes

Teeth can be damaged by several factors including poor oral hygiene, but also by wrong oral hygiene. Especially for sensitive teeth, damage to dentin and gums can be prevented by several measures including a correct brushing technique.

It is beneficial, when using a straight-bristled brush, not to scrub horizontally over the necks of teeth, not to press the brush too hard against the teeth, to choose a toothpaste that is not too abrasive, and to wait at least 30 minutes after consumption of acidic food or drinks before brushing. Harder toothbrushes reduce plaque more efficiently but are more stressful to teeth and gums; using a medium to soft brush for a longer cleaning time was rated the best compromise between cleaning result and gum and tooth health.

A study by University College London found that advice on brushing technique and frequency given by 10 national dental associations, toothpaste and toothbrush companies, and in dental textbooks was inconsistent.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1930 2023-10-14 00:47:04

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1934) Shield


A shield is a broad piece of defensive armor carried on the arm; by extension, a shield is something or someone that protects or defends.


A shield is a piece of personal armour held in the hand, which may or may not be strapped to the wrist or forearm. Shields are used to intercept specific attacks, whether from close-ranged weaponry or projectiles such as arrows, by means of active blocks, as well as to provide passive protection by closing one or more lines of engagement during combat.

Shields vary greatly in size and shape, ranging from large panels that protect the user's whole body to small models (such as the buckler) that were intended for hand-to-hand combat. Shields also vary a great deal in thickness; whereas some shields were made of relatively deep, absorbent, wooden planking to protect soldiers from the impact of spears and crossbow bolts, others were thinner and lighter and designed mainly for deflecting blade strikes (like the roromaraugi or qauata). Finally, shields vary in outline, from round to angular, in proportional length and width, in symmetry and in edge pattern; different shapes provide more optimal protection for infantry or cavalry, enhance portability, provide secondary uses such as ship protection or use as a weapon, and so on.

In prehistory and during the era of the earliest civilisations, shields were made of wood, animal hide, woven reeds or wicker. In classical antiquity, the Barbarian Invasions and the Middle Ages, they were normally constructed of poplar, lime or another split-resistant timber, covered in some instances with a material such as leather or rawhide, and often reinforced with a metal boss, rim or banding. They were carried by foot soldiers, knights and cavalry.

Depending on time and place, shields could be round, oval, square, rectangular, triangular, bilobate or scalloped. Sometimes they took on the form of kites or flatirons, or had rounded tops on a rectangular base, with perhaps an eye-hole to look through when used in combat. The shield was held by a central grip or by straps, some going over or around the user's arm and one or more held by the hand.

Often shields were decorated with a painted pattern or an animal representation to show their army or clan. These designs developed into systematized heraldic devices during the High Middle Ages for purposes of battlefield identification. Even after the introduction of gunpowder and firearms to the battlefield, shields continued to be used by certain groups. In the 18th century, for example, Scottish Highland fighters liked to wield small shields known as targes, and as late as the 19th century, some non-industrialized peoples (such as Zulu warriors) employed them when waging wars.

In the 20th and 21st century, shields have been used by military and police units that specialize in anti-terrorist actions, hostage rescue, riot control and siege-breaking.


The oldest form of shield was a protection device designed to block attacks by hand weapons, such as swords, axes and maces, or ranged weapons like sling-stones and arrows. Shields have varied greatly in construction over time and place. Sometimes shields were made of metal, but wood or animal hide construction was much more common; wicker and even turtle shells have been used. Many surviving examples of metal shields are generally felt to be ceremonial rather than practical, for example the Yetholm-type shields of the Bronze Age, or the Iron Age Battersea shield.



Size and weight varied greatly. Lightly armored warriors relying on speed and surprise would generally carry light shields (pelte) that were either small or thin. Heavy troops might be equipped with robust shields that could cover most of the body. Many had a strap called a guige that allowed them to be slung over the user's back when not in use or on horseback. During the 14th–13th century BC, the Sards or Shardana, working as mercenaries for the Egyptian pharaoh Ramses II, utilized either large or small round shields against the Hittites. The Mycenaean Greeks used two types of shields: the "figure-of-eight" shield and a rectangular "tower" shield. These shields were made primarily from a wicker frame and then reinforced with leather. Covering the body from head to foot, the figure-of-eight and tower shield offered most of the warrior's body a good deal of protection in hand-to-hand combat. The Ancient Greek hoplites used a round, bowl-shaped wooden shield that was reinforced with bronze and called an aspis. The aspis was also the longest-lasting and most famous and influential of all of the ancient Greek shields. The Spartans used the aspis to create the Greek phalanx formation. Their shields offered protection not only for themselves but for their comrades to their left.

The heavily armored Roman legionaries carried large shields (scuta) that could provide far more protection, but made swift movement a little more difficult. The scutum originally had an oval shape, but gradually the curved tops and sides were cut to produce the familiar rectangular shape most commonly seen in the early Imperial legions. Famously, the Romans used their shields to create a tortoise-like formation called a testudo in which entire groups of soldiers would be enclosed in an armoured box to provide protection against missiles. Many ancient shield designs featured incuts of one sort or another. This was done to accommodate the shaft of a spear, thus facilitating tactics requiring the soldiers to stand close together forming a wall of shields.


Typical in the early European Middle Ages were round shields with light, non-splitting wood like linden, fir, alder, or poplar, usually reinforced with leather cover on one or both sides and occasionally metal rims, encircling a metal shield boss. These light shields suited a fighting style where each incoming blow is intercepted with the boss in order to deflect it. The Normans introduced the kite shield around the 10th century, which was rounded at the top and tapered at the bottom. This gave some protection to the user's legs, without adding too much to the total weight of the shield. The kite shield predominantly features enarmes, leather straps used to grip the shield tight to the arm. Used by foot and mounted troops alike, it gradually came to replace the round shield as the common choice until the end of the 12th century, when more efficient limb armour allowed the shields to grow shorter, and be entirely replaced by the 14th century.

As body armour improved, knights' shields became smaller, leading to the familiar heater shield style. Both kite and heater style shields were made of several layers of laminated wood, with a gentle curve in cross section. The heater style inspired the shape of the symbolic heraldic shield that is still used today. Eventually, specialised shapes were developed, such as the bouche, which had a lance rest cut into the upper corner of the lance side to help guide the lance in combat or tournament. Free-standing shields called pavises, which were propped up on stands, were used by medieval crossbowmen who needed protection while reloading.

In time, some armoured foot knights gave up shields entirely in favour of mobility and two-handed weapons. Other knights and common soldiers adopted the buckler, giving rise to the term "swashbuckler". The buckler is a small round shield, typically between 8 and 16 inches (20–40 cm) in diameter. The buckler was one of very few types of shield that were usually made of metal. Small and light, the buckler was easily carried by being hung from a belt; it gave little protection from missiles and was reserved for hand-to-hand combat where it served both for protection and offence. The buckler's use began in the Middle Ages and continued well into the 16th century.

In Italy, the targa, parma, and rotella were used by common people, fencers and even knights. The development of plate armour made shields less and less common, as full plate eliminated much of the need for one. Lightly armoured troops continued to use shields after men-at-arms and knights ceased to do so, and shields remained in use even after gunpowder-powered weapons had made them essentially obsolete on the battlefield. In the 18th century, the Scottish clans used a small, round targe that was partially effective against the firearms of the time, although it was arguably more often used against British infantry bayonets and cavalry swords in close-in fighting.

During the 19th century, non-industrial cultures with little access to guns were still using war shields. Zulu warriors carried large, lightweight shields called ishlangu, made from a single ox hide supported by a wooden spine. These were used in combination with a short spear (iklwa) and/or club. Other African shields include the glagwa from Cameroon and the nguba from Congo.


Law enforcement shields

Shields for protection from armed attack are still used by many police forces around the world. These modern shields are usually intended for two broadly distinct purposes. The first type, riot shields, are used for riot control and can be made from metal or polymers such as polycarbonate (Lexan or Makrolon) or boPET (Mylar). These typically offer protection from relatively large, low-velocity projectiles, such as rocks and bottles, as well as blows from fists or clubs. Synthetic riot shields are normally transparent, allowing full use of the shield without obstructing vision; similarly, metal riot shields often have a small window at eye level for this purpose. Riot shields are most commonly used to block and push back crowds when the users stand in a "wall" to block protesters, and to protect against shrapnel, thrown projectiles such as stones, bricks, and Molotov cocktails, and blows in hand-to-hand combat.

The second type of modern police shield is the bullet-resistant ballistic shield, also called a tactical shield. These shields are typically manufactured from advanced synthetics such as Kevlar and are designed to be bulletproof, or at least bullet-resistant. Two types of shields are available:

* Light level IIIA shields are designed to stop pistol cartridges.
* Heavy level III and IV shields are designed to stop rifle cartridges.

Tactical shields often have a firing port so that the officer holding the shield can fire a weapon while remaining protected, and they often have a bulletproof-glass viewing port. They are typically employed by specialist police, such as SWAT teams, in high-risk entry and siege scenarios, such as hostage rescue and breaching gang compounds, as well as in antiterrorism operations.

Law enforcement shields often bear large signs stating "POLICE" (or the name of a force, such as "US MARSHALS") to indicate that the user is a law enforcement officer.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1931 2023-10-15 00:10:44

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1935) Film Festival


A film festival is an organized event at which many films are shown.


A film festival is a gathering, usually annual, for the purpose of evaluating new or outstanding motion pictures. Sponsored by national or local governments, industry, service organizations, experimental film groups, or individual promoters, festivals provide an opportunity for filmmakers, distributors, critics, and other interested persons to attend film showings and meet to discuss current artistic developments in film. At the festivals, distributors can purchase films that they think can be marketed successfully in their own countries.

The first festival was held in Venice in 1932. Since World War II, film festivals have contributed significantly to the development of the motion-picture industry in many countries. The popularity of Italian films at the Cannes and Venice film festivals played an important part in the rebirth of the Italian industry and the spread of the postwar Neorealist movement. In 1951 Kurosawa Akira’s Rashomon won the Golden Lion at Venice, focusing attention on Japanese films. That same year the first American Art Film Festival at Woodstock, New York, stimulated the art-film movement in the United States.

Probably the best-known and most noteworthy of the hundreds of film festivals is held each spring in Cannes, France. Since 1947, people interested in films have gathered in that small resort town to attend official and unofficial showings of films. Other important festivals are held in Berlin, Karlovy Vary (Czech Republic), Toronto, Ouagadougou (Burkina Faso), Park City (Utah, U.S.), Hong Kong, Belo Horizonte (Brazil) and Venice. Short subjects and documentaries receive special attention at gatherings in Edinburgh, Mannheim and Oberhausen (both in Germany), and Tours (France). Some festivals feature films of one country, and since the late 1960s there have been special festivals for student filmmakers. Others are highly specialized, such as those that feature only underwater photography or those that deal with specific subjects, such as mountain climbing.


A film festival is an organized, extended presentation of films in one or more cinemas or screening venues, usually in a single city or region. Increasingly, film festivals show some films outdoors. Films may be of recent date and, depending upon the festival's focus, can include international and domestic releases. Some film festivals focus on a specific filmmaker, genre of film (e.g. horror films), or subject matter. Several film festivals focus solely on presenting short films of a defined maximum length. Film festivals are typically annual events. Some film historians, including Jerry Beck, do not consider film festival screenings to be official releases of a film.

The oldest film festival in the world is the Venice Film Festival. The most prestigious film festivals in the world, known as the "Big Five", are (listed chronologically according to the date of foundation): Venice, Cannes, Berlin (the original Big Three), Toronto, and Sundance.



The Venice Film Festival in Italy began in 1932 and is the oldest film festival still running.

Mainland Europe's biggest independent film festival is ÉCU The European Independent Film Festival, which started in 2006 and takes place every spring in Paris, France. The Edinburgh International Film Festival is the longest-running festival in Great Britain, as well as the longest continuously running film festival in the world.

Australia's first and longest-running film festival is the Melbourne International Film Festival (1952), followed by the Sydney Film Festival (1954).

North America's first and longest-running short film festival is the Yorkton Film Festival, established in 1947. The first film festival in the United States was the Columbus International Film & Video Festival, also known as The Chris Awards, held in 1953. According to the Film Arts Foundation in San Francisco, "The Chris Awards [is] one of the most prestigious documentary, educational, business and informational competitions in the U.S.; [it is] the oldest of its kind in North America and celebrating its 54th year". It was followed four years later by the San Francisco International Film Festival, held in March 1957, which emphasized feature-length dramatic films. That festival played a major role in introducing foreign films to American audiences; films in its first year included Akira Kurosawa's Throne of Blood and Satyajit Ray's Pather Panchali.

Today, thousands of film festivals take place around the world—from high-profile festivals such as Sundance Film Festival and Slamdance Film Festival (Park City, Utah), to horror festivals such as Terror Film Festival (Philadelphia), and the Park City Film Music Festival, the first U.S. film festival dedicated to honoring music in film.

Film-funding competitions such as Writers and Filmmakers were introduced once the cost of production could be lowered significantly and internet technology allowed for collaboration in film production.

Film festivals have evolved significantly since the COVID-19 pandemic, with many opting for virtual or hybrid editions. The film industry, already in upheaval because of streaming options, has faced another major shift, and movies showcased at festivals now have an even shorter runway to online launches.

Notable film festivals

The "Big Five" film festivals are considered to be Venice, Cannes, Berlin, Toronto and Sundance.

In North America, the Toronto International Film Festival is the most popular festival. Time wrote it had "grown from its place as the most influential fall film festival to the most influential film festival, period".

The Seattle International Film Festival is credited as being the largest film festival in the United States, regularly showing over 400 films in a month across the city.

Competitive feature films

The festivals in Berlin, Cairo, Cannes, Goa, Karlovy Vary, Locarno, Mar del Plata, Moscow, San Sebastián, Shanghai, Tallinn, Tokyo, Venice, and Warsaw are accredited by the International Federation of Film Producers Associations (FIAPF) in the category of competitive feature films. As a rule, for films to compete they must premiere at the festival and not have been shown at any other venue beforehand.

Experimental films

Ann Arbor Film Festival started in 1963. It is the oldest continually operated experimental film festival in North America, and has become one of the premier film festivals for independent and, primarily, experimental filmmakers to showcase work.

Independent films

In the U.S., Telluride Film Festival, Sundance Film Festival, Austin Film Festival, Austin's South by Southwest, New York City's Tribeca Film Festival, and Slamdance Film Festival are all considered significant festivals for independent film. The Zero Film Festival is significant as the first and only festival exclusive to self-financed filmmakers. The biggest independent film festival in the UK is Raindance Film Festival. The British Urban Film Festival (which specifically caters to Black and minority interests) was officially recognized in the 2020 New Year Honours list.

Subject specific films

A few film festivals have focused on highlighting specific issue topics or subjects. These festivals have included both mainstream and independent films. Some examples include military films, health-related film festivals, and human rights film festivals.

There are festivals, especially in the US, that highlight and promote films that are made by or are about various ethnic groups and nationalities, or that feature the cinema of a specific foreign country. These include African American, Asian American, Mexican American, Arab, Italian, German, French, Palestinian, and Native American cinema. The Deauville American Film Festival in France is devoted to the cinema of the United States.

Women's film festivals are also popular.

North American film festivals

The San Francisco International Film Festival, founded by Irving "Bud" Levin in 1957, is the oldest continuously running annual film festival in the United States. It highlights current trends in international filmmaking and video production, with an emphasis on work that has not yet secured U.S. distribution.

The Vancouver International Film Festival, founded in 1958, is one of the largest film festivals in North America. It focuses on East Asian film, Canadian film, and nonfiction film. In 2016, there was an audience of 133,000 and 324 films.

The Toronto International Film Festival, founded by Bill Marshall, Henk Van der Kolk and Dusty Cohl, is regarded by many as North America's most important film festival, and is the most widely attended.

The Ottawa Canadian Film Festival, abbreviated OCanFilmFest, was co-founded by Ottawa-based filmmakers Jith Paul, Ed Kucerak and Blair Campbell in 2015, and features films of various durations and genres from filmmakers across Canada.

The Sundance Film Festival, founded by Sterling Van Wagenen (then head of Wildwood, Robert Redford's company), John Earle, and Cirina Hampton Catania (both serving on the Utah Film Commission at the time), is a major festival for independent film.

The Woodstock Film Festival was launched in 2000 by filmmakers Meira Blaustein and Laurent Rejto to bring high-quality independent films to the Hudson Valley region of New York. In 2010, Indiewire named the Woodstock Film Festival among the top 50 independent film festivals worldwide.

The Regina International Film Festival and Awards (RIFFA), founded by John Thimothy, is one of the leading international film festivals in western Canada (Regina, Saskatchewan); its 2018 edition represented 35 countries. RIFFA's annual award show and red-carpet arrival event are attracting notice in the contemporary film and fashion industries of Western Canada.

Toronto's Hot Docs, founded by filmmaker Paul Jay, is a North American documentary film festival. Toronto has the largest number of film festivals in the world, ranging from cultural and independent to historic film festivals.

The Seattle International Film Festival, which screens 270 features and approximately 150 short films, is the largest American film festival in terms of the number of feature productions.

The Expresión en Corto International Film Festival is the largest competitive film festival in Mexico. It specializes in emerging talent, and is held in the last week of each July in the two colonial cities of San Miguel de Allende and Guanajuato.

Other Mexican festivals include the Guadalajara International Film Festival in Guadalajara, the Oaxaca Film Fest, the Morelia International Film Festival in Morelia, Michoacán, and the Los Cabos International Film Festival, founded by Scott Cross, Sean Cross, and Eduardo Sanchez Navarro in Los Cabos, Baja California Sur; these are considered among the most important film festivals in Latin America. In 2015, Variety called the Los Cabos International Film Festival the "Cannes of Latin America".

South American film festivals

The Cartagena Film Festival, founded by Victor Nieto in 1960, is the oldest in Latin America. The Festival de Gramado (or Gramado Film Festival) is held in Gramado, Brazil.

The Lima Film Festival is the main film festival of Peru and one of the most important in Latin America. It focuses on Latin American cinema and is organized each year by the Pontifical Catholic University of Peru.

The Valdivia International Film Festival, held annually in the city of Valdivia, is arguably the most important film festival in Chile. There is also Filmambiente, held in Rio de Janeiro, Brazil, an international festival of environmental films and videos.

The Caribbean

For Spanish-speaking countries, the Dominican International Film Festival takes place annually in Puerto Plata, Dominican Republic. The Havana Film Festival, founded in 1979, is the oldest continuous annual film festival in the Caribbean; its focus is on Latin American cinema.

The Trinidad and Tobago Film Festival, founded in 2006, is dedicated to screening the newest films from the English-, Spanish-, French- and Dutch-speaking Caribbean, as well as the region's diaspora. It also seeks to facilitate the growth of Caribbean cinema by offering a wide-ranging industry programme and networking opportunities.

The Lusca Fantastic Film Fest (formerly Puerto Rico Horror Film Fest) was also founded in 2006 and is the first and only international fantastic film festival in the Caribbean, devoted to sci-fi, thriller, fantasy, dark humor, bizarre, horror, anime, adventure, virtual reality, and animation in short and feature films.

European festivals

The most important European film festivals are the Venice Film Festival (late summer to early autumn), the Cannes Film Festival (late spring to early summer), and the Berlin International Film Festival (late winter to early spring), founded in 1932, 1946, and 1951 respectively.


Many film festivals are dedicated exclusively to animation.

* Annecy International Animated Film Festival (f. 1960—the oldest)
* Zagreb (f. 1972)
* Ottawa (f. 1976)
* Hiroshima (f. 1985)
* KROK (f. 1989)
* Anima Mundi (f. 1992)
* Fredrikstad Animation Festival (f. 1994)
* Animac (f. 1996)

A variety of regional festivals happen in various countries. Austin Film Festival is accredited by the Academy of Motion Picture Arts & Sciences, which makes all their jury award-winning narrative short and animated short films eligible for an Academy Award.

African festivals

There are several significant film festivals held regularly in Africa. The biennial Panafrican Film and Television Festival of Ouagadougou (FESPACO) in Burkina Faso was established in 1969 and accepts for competition only films by African filmmakers, chiefly produced in Africa. The annual Durban International Film Festival in South Africa and the Zanzibar International Film Festival in Tanzania have grown in importance for the film and entertainment industry, as they often screen the African premieres of many international films. The Nairobi Film Festival (NBO), established in 2016 with a special focus on screening exceptional films from around the world that are rarely presented in Nairobi's mainstream cinemas and on spotlighting the best Kenyan films, has also been growing in popularity over the years and has improved the cinema-going culture in Kenya.

The Sahara International Film Festival, held annually in the Sahrawi refugee camps in western Algeria near the border of Western Sahara, is notable as the only film festival in the world to take place in a refugee camp. The festival has the two-fold aim of providing cultural entertainment and educational opportunities to refugees, and of raising awareness of the plight of the Sahrawi people, who have been exiled from their native Western Sahara for more than three decades.

Asian film festivals


The International Film Festival of India, organized by the government of India, was founded in 1952. The Chennai International Film Festival has been organized since 2002 by the Indo Cine Appreciation Foundation (ICAF), the Government of Tamil Nadu, the South Indian Film Chamber of Commerce and the Film Federation of India.

The Jaipur International Film Festival, founded in 2009, is one of the biggest international film festivals in India. The International Film Festival of Kerala, organised by the Government of Kerala and held annually at Thiruvananthapuram, is acknowledged as one of the leading cultural events in India.

The International Documentary and Short Film Festival of Kerala (IDSFFK), hosted by the Kerala State Chalachitra Academy, is a major documentary and short film festival.

The Mumbai Women's International Film Festival (MWIFF) is an annual film festival in Mumbai featuring films made by women directors and women technicians.

The Calcutta International Cult Films Festival (CICFF) is a popular international film festival based in Kolkata which showcases international cult films.

YathaKatha International Film & Literature Festival (YKIFLF) is an annual film and literature festival in Mumbai showcasing the collaboration of literature and cinema via various constructive discussions and forums. The first edition of the festival was held from 25 to 28 November in Mumbai, Maharashtra, India.


Notable festivals include the Hong Kong International Film Festival (HKIFF), Busan International Film Festival (BIFF), Kathmandu International Mountain Film Festival, Melbourne International Film Festival (MIFF) and World Film Carnival Singapore.

Arab World film festivals

There are several major film festivals in the Arab world, such as the Beirut International Film Festival; the Cairo International Film Festival, the only international competitive feature film festival recognized by the FIAPF in the Arab world and Africa, as well as the oldest in this category; the Carthage Film Festival, the oldest festival in Africa and the Arab world; and the Marrakech International Film Festival.

Festival administration

Business model

Although there are notable for-profit festivals such as SXSW, most festivals operate on a nonprofit membership-based model, with a combination of ticket sales, membership fees, and corporate sponsorship constituting the majority of revenue. Unlike other arts nonprofits (performing arts, museums, etc.), film festivals typically receive few donations from the general public and are occasionally organized as nonprofit business associations instead of public charities. Film industry members often have significant curatorial input, and corporate sponsors are given opportunities to promote their brand to festival audiences in exchange for cash contributions. Private parties, often to raise investments for film projects, constitute significant "fringe" events. Larger festivals maintain year-round staffs often engaging in community and charitable projects outside the festival season.

Entry fee

While entries from established filmmakers are usually considered pluses by the organizers, most festivals require new or relatively unknown filmmakers to pay an entry fee to have their works considered for screening. This is especially so in larger film festivals, such as the Jaipur International Film Festival in Jaipur, India, the Toronto International Film Festival, the Sundance Film Festival, South by Southwest, and the Montreal World Film Festival, and even in smaller "boutique" festivals such as the Miami International Film Festival, the British Urban Film Festival in London and the Mumbai Women's International Film Festival in India.

On the other hand, some festivals—usually those accepting fewer films, and perhaps not attracting as many "big names" in their audiences as do Sundance and Telluride—require no entry fee. Many smaller film festivals in the United States, such as the Stony Brook Film Festival on Long Island, the Northwest Filmmakers' Festival, and the Sicilian Film Festival in Miami, are examples.

The Portland International Film Festival charges an entry fee but waives it for filmmakers from the Northwestern United States, and some others with regional focuses have similar approaches.

Several film festival submission portal websites exist to streamline filmmakers' entries into multiple festivals. They provide databases of festival calls for entry and offer filmmakers a convenient "describe once, submit many" service.

Screening out of competition

The core tradition of film festivals is competition, or judging which films are most deserving of various forms of recognition. Some festivals, such as the famous Cannes Film Festival, may screen films that are considered close to competition-quality without being included in the competition; the films are said to be screened "out of competition".


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1932 2023-10-16 00:17:05

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1936) Toy


A toy is something for a child to play with; something that is or appears to be small.


A toy is a plaything, usually for an infant or child, and often an instrument used in a game. Toys, playthings, and games survive from the most remote past and from a great variety of cultures. The ball, kite, and yo-yo are assumed to be the oldest objects specifically designed as toys. Toys vary from the simplest to the most complex things, from the stick selected by a child and imagined to be a hobbyhorse to sophisticated and complex mechanical devices. Coordination and other manual skills develop from cumulative childhood experiences received by manipulating toys such as marbles, jackstones, and other objects that require the use of hands and bodies. Mental agility, beginning in childhood, is challenged by puzzles of spatial relationships.

History of toys

Objects with human and animal forms that may have been toys have been found in deposits from ancient Sumer dating to 2600 BCE. The earliest-known written historical mention of a toy comes from about 500 BCE in a Greek reference to yo-yos made from wood, metal, or painted terra-cotta. It is believed, however, that the yo-yo originated in China at a much earlier date. In addition, the kite, still a popular plaything in China, existed as a toy there at least as early as 1000 BCE. In India, clay animal-figures on wheels and other animal toys date to about 2500 BCE. Later, brass and bronze horses and elephants were common playthings among Indian children from wealthy families.

Play with toys follows two main directions: imitative and instructive. The earliest types of play probably developed from the instinct for self-preservation. In many human cultures one of the first things taught to the young was the use of weapons, and the simple stick or club was the prototype of later military instruments of play, such as swords and guns. Most games and sports requiring physical action derived from practice of the skills used in warfare and hunting; nevertheless, the instruments of the game or sport, such as the small bow and arrow given to a boy in ancient Rome for training, were regarded not as toys but as weapons. By the Middle Ages, war-related objects—such as miniature soldiers and weapons—were considered to be toys, however. In modern times the latest developments in warfare are represented among contemporary toys, as are those weapons and war machines fantasized in science fiction and motion pictures.

One of the most ancient toys for adults and children is the ball, which was used in both sacred and secular games. Other forms of toys also probably derive from magical artifacts and fetishes that played a prominent part in primitive religions. Even today, during the Mexican festival of the Day of the Dead, sugar is formed into elaborate and beautiful skulls, tombs, and angels; many of these forms are essentially religious symbols, but in the hands of children they become toys that are played with and finally eaten. Christmas-tree decorations, Easter eggs, and the Neapolitan presepio (crèche), with its wealth of elaborate figures representing the birth of Jesus, are other obvious examples of toys of religious origin.

A modern relic of early culture, the kachina doll of the Pueblo Indians, while essentially an instructive sacred object, is played with by children as a means to learn the myths of their culture. In fact, the doll is perhaps the most ancient and basic toy. Every epoch and culture has provided its children with miniature versions of human beings. Dolls from early Roman times and from Christian Rome have been found preserved in the graves of their young owners. The collections of the British Museum and the Royal Ontario Museum in Toronto both contain early Roman dolls; made of linen and stuffed with papyrus, these dolls date from the 3rd century CE.

Moving toys include a wider variety of types of objects. It is probable that many experiments with basic physical principles were first realized in the form of moving toys known through literary description. Explosive toy weapons and rockets developed from the early use of gunpowder for fireworks by the Chinese. Balance and counterbalance, the wheel, the swing, the pendulum, flight, centrifugal force, magnetism, the spring, and a multitude of other devices and principles have been utilized in toys.

Many moving toys are centuries old. In India several kinds of movable folk toys are still common throughout the country—such as clay elephants that “drink” water and acrobatic dolls on sticks. At the other end of the spectrum, modern technological developments made possible the production of such sophisticated moving toys as scale-model electric railroad trains and automobile racing tracks and cars, radio-controlled model aircraft and wheeled vehicles, and dolls that walk, talk, and perform other stunts. New toy technology also allows children to design, build, and program robots employing special sensors, motors, and microcomputers.

In contrast, indigenous materials are often used by children to fashion folk toys. For example, Huli children in Papua New Guinea make pu abu, a whirling toy created from a flat piece of wood with a hole in the end to which the child ties a piece of string or grass so that the toy can be whirled around to produce a humming noise. (Similar toys are known as bullroarers in other parts of the world.) Many dolls, especially early dolls, were made of materials commonly at hand, such as a block of wood, remains of cloth, or pieces of corn husk.

Under the pressure of industrialization, folk culture and tradition are rapidly disappearing, but in many countries a variety of folk or homemade toys can still be found. Toys sold in developed countries are usually mass-produced and often manufactured in developing countries, with technology providing their locomotion and other actions. However, in spite of Western commodification, toys often reflect the child’s cultural environment. For example, in eastern India common toys include clay monkeys that climb up a string, paper snakes fastened to wood, and rattles created from gourds with pebbles inside.

Gender and toys

It is generally accepted that children are attracted to toys along gender lines. Modern studies demonstrate that while boys consistently choose trucks or soldiers, girls’ choices are more flexible and may include so-called masculine toys as well as baby dolls and household objects. Some of this preference is related to parental beliefs about the appropriateness of certain toys for boys and girls. In a 1970s study conducted in Taiwan, boys preferred electrical toys, then playground slides and swings, tricycles, toy guns, and kites, in that order. Girls, on the other hand, chose playground slides and swings first, then kites and such activities as paper folding, singing, and playing house. In a 1990 study, also done in Taiwan, researchers noted that in 150 randomly selected toy commercials, very few doll advertisements depicted boys and girls playing together, except for a few involving stuffed animals.

During the first two years of life, children absorb information about gender-appropriate toys. This starts with the different types of toys bought for boys and girls. Some parental influence on children's toy choices is more subtle. For example, when girls play with dolls, parents typically nod and smile at them without even being aware of it, whereas parents are apt to make nonverbal, if not overt, negative reactions when boys play with dolls. In strict gender-segregated societies in Africa, boys may help girls make dolls by gathering the materials for them, but they would be strongly discouraged from playing with dolls themselves. Instead, the boys use the same gathered materials to create vehicles, military men, or toy weapons as their own playthings. Most researchers in Western societies generally agree that boys prefer toy guns and other toys linked to aggression, whereas girls prefer to play with dolls and household objects. American psychologist Jeffrey Goldstein has asserted, "These preferences develop early and appear to have biological as well as social origins. Of the latter, modeling by peers and parents seems to be especially potent."


A toy or plaything is an object that is used primarily to provide entertainment. Simple examples include toy blocks, board games, and dolls. Toys are often designed for use by children, although many are designed specifically for adults and pets. Toys can provide utilitarian benefits, including physical exercise, cultural awareness, or academic education. Additionally, utilitarian objects, especially those which are no longer needed for their original purpose, can be used as toys. Examples include children building a fort with empty cereal boxes and tissue paper spools, or a toddler playing with a broken TV remote control. The term "toy" can also be used to refer to utilitarian objects purchased for enjoyment rather than need, or for expensive necessities for which a large fraction of the cost represents its ability to provide enjoyment to the owner, such as luxury cars, high-end motorcycles, gaming computers, and flagship smartphones.

Playing with toys can be an enjoyable way of training young children for life experiences. Different materials like wood, clay, paper, and plastic are used to make toys. Newer forms of toys include interactive digital entertainment and smart toys. Some toys are produced primarily as collectors' items and are intended for display only.

The origin of toys is prehistoric; dolls representing infants, animals, and soldiers, as well as representations of tools used by adults, are readily found at archaeological sites. The origin of the word "toy" is unknown, but it is believed that it was first used in the 14th century. Toys are mainly made for children. The oldest known doll toy is thought to be 4,000 years old.

Playing with toys is an important part of growing up. Younger children use toys to discover their identity, help with cognition, learn cause and effect, explore relationships, become stronger physically, and practice skills needed in adulthood. Adults on occasion use toys to form and strengthen social bonds, teach, help in therapy, and to remember and reinforce lessons from their youth.



Most children have been said to play with whatever they can find, such as sticks and rocks. Toys and games have been retrieved from the sites of ancient civilizations, and have been mentioned in ancient literature. Toys excavated from the Indus valley civilization (3010–1500 BCE) include small carts, whistles shaped like birds, and toy monkeys that could slide down a string.

The earliest toys were made from natural materials, such as rocks, sticks, and clay. Thousands of years ago, Egyptian children played with dolls that had wigs and movable limbs, which were made from stone, pottery, and wood. However, evidence of toys in ancient Egypt is exceptionally difficult to identify with certainty in the archaeological record. Small figurines and models found in tombs are usually interpreted as ritual objects; those from settlement sites are more easily labelled as toys. These include spinning tops, balls of string, and wooden models of animals with movable parts.

In ancient Greece and ancient Rome, children played with dolls made of wax or terracotta, as well as sticks, bows and arrows, and yo-yos. When Greek children, especially girls, came of age, it was customary for them to sacrifice the toys of their childhood to the gods. On the eve of their wedding, young girls, around fourteen years old, would offer their dolls in a temple as a rite of passage into adulthood.

The oldest known mechanical puzzle also comes from ancient Greece and appeared in the 3rd century BCE. The game consisted of a square divided into 14 parts, and the aim was to create different shapes from the pieces. In Iran, "puzzle-locks" were made as early as the 17th century (CE).

Enlightenment Era

Toys became more widespread with changing Western attitudes towards children and childhood brought about by the Enlightenment. Previously, children had often been thought of as small adults, who were expected to work in order to produce the goods that the family needed to survive. As children's culture scholar Stephen Kline has argued, Medieval children were "more fully integrated into the daily flux of making and consuming, of getting along. They had no autonomy, separate statuses, privileges, special rights or forms of social comportment that were entirely their own."

As these ideas began changing during the Enlightenment Era, blowing bubbles from leftover washing up soap became a popular pastime, as shown in the painting The Soap Bubble (1739) by Jean-Baptiste-Siméon Chardin, and other popular toys included hoops, toy wagons, kites, spinning wheels and puppets. Many board games were produced by John Jefferys in the 1750s, including A Journey Through Europe. The game was very similar to modern board games; players moved along a track with the throw of a die (a teetotum was actually used) and landing on different spaces would either help or hinder the player.

In the nineteenth century, Western values prioritized toys with an educational purpose, such as puzzles, books, cards and board games. Religion-themed toys were also popular, including a model Noah's Ark with miniature animals and objects from other Bible scenes. With growing prosperity among the middle class, children had more leisure time on their hands, which led to the application of industrial methods to the manufacture of toys.

More complex mechanical and optical-based toys were also invented during the nineteenth century. Carpenter and Westley began to mass-produce the kaleidoscope, invented by Sir David Brewster in 1817, and had sold over 200,000 items within three months in London and Paris. The company was also able to mass-produce magic lanterns for use in phantasmagoria and galanty shows, by developing a method of mass production using a copper plate printing process. Popular imagery on the lanterns included royalty, flora and fauna, and geographical/man-made structures from around the world. The modern zoetrope was invented in 1833 by British mathematician William George Horner and was popularized in the 1860s. Wood and porcelain dolls in miniature doll houses were popular with middle-class girls, while boys played with marbles and toy trains.

Industrial Era and mass-marketed toys

The golden age of toy development occurred during the Industrial Era. Real wages were rising steadily in the Western world, allowing even working-class families to afford toys for their children, and industrial techniques of precision engineering and mass production were able to provide the supply to meet this rising demand. Intellectual emphasis was also increasingly being placed on the importance of a wholesome and happy childhood for the future development of children. Franz Kolb, a German pharmacist, invented plasticine in 1880, and in 1900 commercial production of the material as a children's toy began. Frank Hornby was a visionary in toy development and manufacture and was responsible for the invention and production of three of the most popular lines of toys based on engineering principles in the twentieth century: Meccano, Hornby Model Railways and Dinky Toys.

Meccano was a model construction system that consisted of re-usable metal strips, plates, angle girders, wheels, axles and gears, with nuts and bolts to connect the pieces, enabling the building of working models and mechanical devices. Dinky Toys pioneered the manufacture of die-cast toys with the production of toy cars, trains and ships, and model train sets became popular in the 1920s. The Britains company revolutionized the production of toy soldiers with its invention of the process of hollow casting in lead in 1893; the company's products remained the industry standard for many years.

Puzzles became popular as well. In 1893, the English lawyer Angelo John Lewis, writing under the pseudonym of Professor Hoffman, wrote a book called Puzzles Old and New. It contained, among other things, more than 40 descriptions of puzzles with secret opening mechanisms. This book grew into a reference work for puzzle games and was very popular at the time. The Tangram puzzle, originally from China, spread to Europe and America in the 19th century.

In 1903, a year after publishing The Tale of Peter Rabbit, English author Beatrix Potter created the first Peter Rabbit soft toy and registered him at the Patent Office in London, making Peter the oldest licensed character. It was followed by other "spin-off" merchandise over the years, including painting books and board games. The Smithsonian magazine stated, "Potter was also an entrepreneur and a pioneer in licensing and merchandising literary characters. Potter built a retail empire out of her "bunny book" that is worth $500 million today. In the process, she created a system that continues to benefit all licensed characters, from Mickey Mouse to Harry Potter."

In tandem with the development of mass-produced toys, Enlightenment ideals about children's rights to education and leisure time came to fruition. During the late 18th and early 19th century, many families needed to send their children to work in factories and other sites to make ends meet—just as their predecessors had required their labor producing household goods in the medieval era. Business owners' exploitation and abuse of child laborers during this period differed from how children had been treated as workers within a family unit, though. Thanks to advocacy including photographic documentation of children's exploitation and abuse by business owners, Western nations enacted a series of child labor laws, putting an end to child labor in nations such as the U.S. (1949). This fully entrenched, through law, the Western idea that childhood is a time for leisure, not work—and with leisure time comes more space for consumer goods such as toys.

During the Second World War, some new types of toys were created through accidental innovation. After trying to create a replacement for synthetic rubber, the American Earl L. Warrick inadvertently invented "nutty putty" during World War II. Later, Peter Hodgson recognized the potential as a childhood plaything and packaged it as Silly Putty. Similarly, Play-Doh was originally created as a wallpaper cleaner. In 1943 Richard James was experimenting with springs as part of his military research when he saw one come loose and fall to the floor. He was intrigued by the way it flopped around on the floor. He spent two years fine-tuning the design to find the best gauge of steel and coil; the result was the Slinky, which went on to sell in stores throughout the United States.

After the Second World War, as society became ever more affluent and new technology and materials (plastics) for toy manufacture became available, toys became cheap and ubiquitous in households across the Western world. At this point, name-brand toys became widespread in the U.S., a new phenomenon that helped market mass-produced toys to audiences of children growing up with ample leisure time during a period of relative prosperity.

Among the better-known products of the 1950s were the Danish company Lego's line of colourful interlocking plastic brick construction sets (based on Hilary Page's Kiddicraft Self-Locking Bricks, described by London's V&A Museum of Childhood as among the "must-have toys" of the 1940s), Mr. Potato Head, the Barbie doll (inspired by the Bild Lilli doll from Germany), and Action Man. The Rubik's Cube became an enormous seller in the 1980s. In modern times, there are computerized dolls that can recognize and identify objects and the voice of their owner and choose among hundreds of pre-programmed phrases with which to respond.


The act of children's play with toys embodies the values set forth by the adults of their specific community, but through the lens of the child's perspective. Within cultural societies, toys are a medium to enhance a child's cognitive, social, and linguistic learning.

In some cultures, toys are used as a way to enhance a child's skillset within the traditional boundaries of their future roles in the community. In Saharan and North African cultures, play is facilitated by children through the use of toys to enact scenes recognizable in their community such as hunting and herding. The value is placed in a realistic version of development in preparing a child for the future they are likely to grow up into. This allows the child to imagine and create a personal interpretation of how they view the adult world.

However, in other cultures, toys are used to expand the development of a child's cognition in an idealistic fashion. In these communities, adults place the value of play with toys on the aspirations they set forth for their child. In Western culture, Barbie and Action Man represent lifelike figures, but in an imaginative state out of reach of the society of these children and adults. These toys give way to a unique world in which children's play is isolated and independent of social constraints, leaving children free to delve into an imaginary and idealized version of what their development in life could be.

In addition, children from differing communities may treat their toys in different ways based on their cultural practices. Children in more affluent communities may tend to be possessive of their toys, while children from poorer communities may be more willing to share and interact more with other children. The importance the child places on possession is dictated by the values in place within the community that the children observe on a daily basis.

Child development

Toys, like play itself, serve multiple purposes in both humans and animals. They provide entertainment while fulfilling an educational role. Toys enhance cognitive behavior and stimulate creativity. They aid in the development of physical and mental skills which are necessary in later life.

Wooden blocks, though simple, are regarded by early childhood education experts such as Sally Cartwright (1974) as an excellent toy for young children; she praised the fact that they are relatively easy to engage with, can be used in repeatable and predictable ways, and are versatile and open-ended, allowing for a wide variety of developmentally appropriate play. Andrew Witkin, director of marketing for Mega Brands, told Investor's Business Daily that "They help develop hand-eye coordination, math and science skills and also let kids be creative." Other toys like marbles, jackstones, and balls serve similar functions in child development, allowing children to use their minds and bodies to learn about spatial relationships, cause and effect, and a wide range of other skills.

One example of the dramatic ways that toys can influence child development involves clay sculpting toys such as Play-Doh and Silly Putty and their home-made counterparts. Mary Ucci, Educational Director of the Child Study Center of Wellesley College, has demonstrated how such toys positively impact the physical development, cognitive development, emotional development, and social development of children.

Toys for infants often make use of distinctive sounds, bright colors, and unique textures. Through repetition of play with toys, infants begin to recognize shapes and colors. Play-Doh, Silly Putty and other hands-on materials allow the child to make toys of their own.

Educational toys for school-age children often contain a puzzle, problem-solving technique, or mathematical proposition. Often toys designed for older audiences, such as teenagers or adults, demonstrate advanced concepts. Newton's cradle, a desk toy designed by Simon Prebble, demonstrates the conservation of momentum and energy.

Not all toys are appropriate for all ages of children. Some toys marketed for a specific age range can even harm the development of children in that range, as when toys meant for young girls contribute to the ongoing problem of girls' sexualization in Western culture.

A study suggested that supplying fewer toys in the environment allows toddlers to focus better and to explore and play more creatively. Providing four rather than sixteen toys is thus suggested to promote children's development and healthy play.

Age compression

Age compression is the modern trend of children moving through play stages faster than was the case in the past. Children have a desire to progress to more complex toys at a faster pace, girls in particular. Barbie dolls, for example, were once marketed to girls around 8 years old but have been found to be more popular in recent years with girls around 3 years old, with most girls outgrowing the brand by about age 7. The packaging for the dolls labels them appropriate for ages 3 and up. Boys, in contrast, apparently enjoy toys and games over a longer timespan, gravitating towards toys that meet their interest in assembling and disassembling mechanical toys, and toys that "move fast and things that fight". An industry executive points out that girls have entered the "tween" phase by the time they are 8 years old and want non-traditional toys, whereas boys maintain an interest in traditional toys until they are 12 years old, meaning the traditional toy industry holds onto its boy customers for 50% longer than its girl customers.

Girls gravitate towards "music, clothes, make-up, television talent shows and celebrities". As young children are more exposed to and drawn to music intended for older children and teens, companies are having to rethink how they develop and market their products. Girls also demonstrate a longer loyalty to characters in toys and games marketed towards them. A variety of global toy companies have marketed themselves to this aspect of girls' development, for example, the Hello Kitty brand and the Disney Princess franchise. Boys have shown an interest in computer games at an ever-younger age in recent years.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1933 2023-10-17 00:27:23

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1937) Librarian


Word forms: plural librarians; countable noun. A librarian is a person who is in charge of a library or who has been specially trained to work in a library.


A librarian is a person who works professionally in a library providing access to information, and sometimes social or technical programming, or instruction on information literacy to users.

The role of the librarian has changed much over time, with the past century in particular bringing many new media and technologies into play. From the earliest libraries in the ancient world to the modern information hub, there have been keepers and disseminators of the information held in data stores. Roles and responsibilities vary widely depending on the type of library, the specialty of the librarian, and the functions needed to maintain collections and make them available to their users.

Education for librarianship has changed over time to reflect changing roles.

Roles and responsibilities

Traditionally, a librarian is associated with collections of books, as demonstrated by the etymology of the word "librarian" (from the Latin liber, "book"). In 1713, the word was defined as "custodian of a library", while in the 17th century the role was referred to as a "library-keeper", and a "librarian" was a scribe, "one who copies books".

The role of a librarian is continually evolving to meet social and technological needs. A modern librarian may deal with provision and maintenance of information in many formats, including books; electronic resources; magazines; newspapers; audio and video recordings; maps; manuscripts; photographs and other graphic material; bibliographic databases; and Internet-based and digital resources. A librarian may also provide other information services, such as information literacy instruction; computer provision and training; coordination with community groups to host public programs; assistive technology for people with disabilities; and assistance locating community resources.

The Internet has had a profound impact on the resources and services that librarians of all kinds provide to their patrons. Electronic information has transformed the roles and responsibilities of librarians, even to the point of revolutionizing library education and service expectations.


Librarians collect, organise, preserve and manage reading, informational and learning resources. They make these resources available to interested parties and audit the inventory periodically. By understanding the role of a librarian and their typical duties, you can decide if the career suits you. This article covers what a librarian is, what a librarian does, the skills and educational qualifications they need, their work environment and salary, and how to become one.

What is a librarian?

A librarian is a professional in charge of a library's information resources and inventory. Along with cataloguing, issuing and tracking physical and digital materials, they oversee the library's everyday operations. They enable eligible visitors to access and use the library.

Who is a librarian and what do they do?

A librarian is a person responsible for performing various duties related to the upkeep of a library and its inventory and resources. They may order the purchase of and manage interlibrary loans of books, journals, magazines, recordings and other materials to meet the requirements of library patrons. To keep track of available information resources, they may catalogue them in systematic order and conduct periodic audits. A librarian may issue books and other items and keep records of the resources in circulation.

What is the role of a librarian?

The role of a librarian may include the performance of the following work duties:

Managing resources

Librarians manage the library's physical and digital information resources. That includes cataloging and organising books, magazines, journals, newspapers, documents, letters, manuscripts, records, audio recordings, video recordings and various digital materials. They must also undertake the preservation and proper storage of these resources.

Maintaining inventory

A librarian keeps track of the library's information resources and updates its inventory. They may purchase new books, order new subscriptions and acquire other essential materials. They may also work with other libraries to exchange or share information and resources to make them more widely available.

Keeping records

The creation of a library system comes under the ambit of a librarian. They may maintain a physical or digital database to record the library's information assets. They may organise the database so that the information is easy to find and access.

Aiding visitors

A librarian may aid visitors in finding the materials they want for general information, education or research purposes. They may inform visitors of the availability of materials, point them towards their location and oversee their use. They may also answer questions and offer research guidance.

Issuing books

One of the principal duties of a librarian is to check out and check in books and other resources. They may take orders for books and other materials in person, over the phone or online and may keep these aside for easy pickup. They may keep a record of the books and materials they issue to visitors to read in the library or to borrow.

Advocating education

As guardians of information sources, librarians are well-equipped to advocate education and literacy. They may organise reading and literary events, outreach programs and community gatherings for adults and children. They may make book suggestions and create reading lists on various subjects to encourage people to educate themselves.

Overseeing staff

A librarian may train and oversee junior librarians, record keepers, attendants and other support staff. They may assign work tasks and issue instructions. Additionally, they may be in charge of budget, stationery and supplies management.

What skills should librarians have?

Librarians can benefit from having the following skills:

* Organization skills: As part of their everyday duties, librarians have to oversee, organise and manage vast data and information resources. They must keep track of incoming and outgoing books and other materials, maintain the inventory and update the database.

* Analytical skills: To process large amounts of information and classify, organise and store physical and digital data, it can help to have excellent analytical skills. The librarian can also use their analytical knowledge to assist visitors in finding the information they want.

* Research skills: Librarians may have to research information to update the library inventory or manage the database. With research skills, they can better organise the library and assist visitors in their research.

* Technological skills: Most librarians use word processing programs, spreadsheets and other software programs to review and record information. They work on computers to update databases, find necessary items and issue books and other materials.

* Communication skills: A librarian has to interact with library visitors, library staff and various others associated with the library. For this, it can help to have excellent verbal and written communication skills.

* Interpersonal skills: A librarian needs to have strong interpersonal skills to provide high-quality customer service to their visitors. They must be able to interact and discuss matters with diverse individuals.

What education qualifications do librarians need?

To be a professional librarian, you may need formal educational qualifications in library science or library and information science. You can get a certification, a diploma, a bachelor's degree, a master's degree or a doctorate in these topics. A higher educational qualification may lead to a higher job position.

Do librarians need a degree?

You can become a librarian without a degree, since a certification or a diploma can get you an entry-level position. To aspire to senior-level jobs, though, you may benefit from acquiring higher qualifications in library science, as many employers prefer candidates with advanced degrees.

What is the work environment for a librarian?

The work environment for librarians is typically a calm, quiet indoor area in a school, college, university, research institution, public library, government organisation or information-focused company. Some of the job titles for librarians are chief librarian, law librarian, archivist, records manager, information executive, documentation officer, assistant librarian, junior librarian, library attendant and consultant. A librarian may work part-time or full-time during regular office hours, but they sometimes work evening and weekend shifts.

A typical workday may consist of sitting at their desk, working on their computer, using printers and scanners and speaking on the phone. They may also have to stand, walk around and physically lift, carry and move books and other library resources. Additionally, they may have to interact with visitors, oversee library staff, organise community events and use social media to promote the library.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1934 2023-10-18 00:14:55

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1938) Insect


Insect: a) any of a class (Insecta) of arthropods (such as bugs or bees) with well-defined head, thorax, and abdomen, only three pairs of legs, and typically one or two pairs of wings.
b) any of numerous small invertebrate animals (such as spiders or centipedes) that are more or less obviously segmented - not used technically.
c) a type of very small animal with six legs, a body divided into three parts and usually two pairs of wings, or, more generally, any similar very small animal.


Insect (class Insecta or Hexapoda) is any member of the largest class of the phylum Arthropoda, which is itself the largest of the animal phyla. Insects have segmented bodies, jointed legs, and external skeletons (exoskeletons). Insects are distinguished from other arthropods by their body, which is divided into three major regions: (1) the head, which bears the mouthparts, eyes, and a pair of antennae, (2) the three-segmented thorax, which usually has three pairs of legs (hence “Hexapoda”) in adults and usually one or two pairs of wings, and (3) the many-segmented abdomen, which contains the digestive, excretory, and reproductive organs.

In a popular sense, “insect” usually refers to familiar pests or disease carriers, such as bedbugs, houseflies, clothes moths, Japanese beetles, aphids, mosquitoes, fleas, horseflies, and hornets, or to conspicuous groups, such as butterflies, moths, and beetles. Many insects, however, are beneficial from a human viewpoint; they pollinate plants, produce useful substances, control pest insects, act as scavengers, and serve as food for other animals (see below Importance). Furthermore, insects are valuable objects of study in elucidating many aspects of biology and ecology. Much of the scientific knowledge of genetics has been gained from fruit fly experiments and of population biology from flour beetle studies. Insects are often used in investigations of hormonal action, nerve and sense organ function, and many other physiological processes. Insects are also used as environmental quality indicators to assess water quality and soil contamination and are the basis of many studies of biodiversity.

General features

In numbers of species and individuals and in adaptability and wide distribution, insects are perhaps the most eminently successful group of all animals. They dominate the present-day land fauna with about 1 million described species. This represents about three-fourths of all described animal species. Entomologists estimate the actual number of living insect species could be as high as 5 million to 10 million. The orders that contain the greatest numbers of species are Coleoptera (beetles), Lepidoptera (butterflies and moths), Hymenoptera (ants, bees, wasps), and Diptera (true flies).

Appearance and habits

The majority of insects are small, usually less than 6 mm (0.2 inch) long, although the range in size is wide. Some of the feather-winged beetles and parasitic wasps are almost microscopic, while some tropical forms, such as the hercules beetles, African goliath beetles, and certain Australian stick insects, can be as large as 27 cm (10.6 inches) long, as can the wingspan of the hercules moth.

In many species the difference in body structure between the sexes is pronounced, and knowledge of one sex may give few clues to the appearance of the other. In some, such as the twisted-wing insects (Strepsiptera), the female is a mere inactive bag of eggs, and the winged male is one of the most active insects known. Modes of reproduction are quite diverse, and reproductive capacity is generally high. Some insects, such as the mayflies, feed only in the immature or larval stage and go without food during an extremely short adult life. Lifespans vary widely: among social insects, queen termites may live for up to 50 years, whereas some adult mayflies live less than two hours.

Some insects advertise their presence to the opposite sex by flashing lights, and many imitate other insects in colour and form, thereby avoiding or minimizing attack by predators that feed by day and find their prey visually, as birds, lizards, and other insects do.

Behaviour is diverse, from the almost inert parasitic forms, whose larvae lie in the nutrient bloodstreams of their hosts and feed by absorption, to dragonflies that pursue victims in the air, tiger beetles that outrun prey on land, and predaceous water beetles that outswim prey in water.

In some cases the adult insects make elaborate preparations for the young, in others the mother alone defends or feeds her young, and in still others the young are supported by complex insect societies. Some colonies of social insects, such as tropical termites and ants, may reach populations of millions of inhabitants.

Distribution and abundance

Scientists familiar with insects realize the difficulty in attempting to estimate individual numbers of insects beyond areas of a few acres or a few square miles in extent. Figures soon become so large as to be incomprehensible. The large populations and great variety of insects are related to their small size, high rates of reproduction, and abundance of suitable food supplies. Insects abound in the tropics, both in numbers of different kinds and in numbers of individuals.

If the insects (including the young and adults of all forms) are counted on a square yard (0.84 square metre) of rich moist surface soil, 500 are found easily and 2,000 are not unusual in soil samples in the north temperate zone. This amounts to roughly 4 million insects on one moist acre (0.41 hectare). In such an area only an occasional butterfly, bumblebee, or large beetle, supergiants among insects, probably would be noticed. Only a few thousand species, those that attack people’s crops, herds, and products and those that carry disease, interfere with human life seriously enough to require control measures.
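The square-yard-to-acre extrapolation above is simple arithmetic; the following Python sketch (a back-of-the-envelope check, not from the source, using the standard conversion of 4,840 square yards per acre) shows that the "roughly 4 million per acre" figure corresponds to an average of about 800 insects per square yard, comfortably inside the quoted 500–2,000 range.

```python
# Scaling per-square-yard insect counts up to a full acre.
SQ_YD_PER_ACRE = 4840  # standard conversion: 1 acre = 4,840 square yards

def insects_per_acre(per_square_yard: float) -> float:
    """Extrapolate a per-square-yard count to one acre."""
    return per_square_yard * SQ_YD_PER_ACRE

low = insects_per_acre(500)    # lower end of the quoted range
high = insects_per_acre(2000)  # upper end of the quoted range

# Average density implied by "roughly 4 million insects per acre":
implied_density = 4_000_000 / SQ_YD_PER_ACRE  # about 826 per square yard
```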

Insects are adapted to every land and freshwater habitat where food is available, from deserts to jungles, from glacial fields and cold mountain streams to stagnant, lowland ponds and hot springs. Many live in brackish water up to 1/10 the salinity of seawater, a few live on the surface of seawater, and some fly larvae can live in pools of crude petroleum, where they eat other insects that fall in.


Insects (from Latin insectum) are pancrustacean hexapod invertebrates of the class Insecta. They are the largest group within the arthropod phylum. Insects have a chitinous exoskeleton, a three-part body (head, thorax and abdomen), three pairs of jointed legs, compound eyes and one pair of antennae. Their blood is not totally contained in vessels; some circulates in an open cavity known as the haemocoel. Insects are the most diverse group of animals; they include more than a million described species and represent more than half of all known living organisms. The total number of extant species is estimated at between six and ten million; potentially over 90% of the animal life forms on Earth are insects.

Nearly all insects hatch from eggs. Insect growth is constrained by the inelastic exoskeleton, and development involves a series of molts. The immature stages often differ from the adults in structure, habit, and habitat, and can include a usually immobile pupal stage in those groups that undergo four-stage metamorphosis. Insects that undergo three-stage metamorphosis lack a pupal stage, and adults develop through a series of nymphal stages. The higher-level relationships of the insects remain unclear. Fossilized insects of enormous size have been found from the Paleozoic Era, including giant dragonflies with wingspans of 55 to 70 cm (22 to 28 in). The most diverse insect groups appear to have coevolved with flowering plants.

Adult insects typically move about by walking, flying, or sometimes swimming. As it allows for rapid yet stable movement, many insects adopt a tripedal gait in which they walk with their legs touching the ground in alternating triangles, composed of the front and rear on one side with the middle on the other side. Insects are the only invertebrate group with members able to achieve sustained powered flight, and all flying insects derive from one common ancestor. Many insects spend at least part of their lives under water, with larval adaptations that include gills, and some adult insects are aquatic and have adaptations for swimming. Some species, such as water striders, are capable of walking on the surface of water. Insects are mostly solitary, but some, such as certain bees, ants and termites, are social and live in large, well-organized colonies. Some insects, such as earwigs, show maternal care, guarding their eggs and young. Insects can communicate with each other in a variety of ways. Male moths can sense the pheromones of female moths over great distances. Other species communicate with sounds: crickets stridulate, or rub their wings together, to attract a mate and repel other males. Lampyrid beetles communicate with light.

Humans regard certain insects as pests and attempt to control them using insecticides and a host of other techniques. Some insects damage crops by feeding on sap, leaves, fruits, or wood. Some species are parasitic and may vector diseases. Some insects perform complex ecological roles; blow-flies, for example, help consume carrion but also spread diseases. Insect pollinators are essential to the life cycle of many flowering plant species on which most organisms, including humans, are at least partly dependent; without them, the terrestrial portion of the biosphere would be devastated. Many insects are considered ecologically beneficial as predators, and a few provide direct economic benefit. Silkworms produce silk and honey bees produce honey, and both have been domesticated by humans. Insects are consumed as food in 80% of the world's nations, by people in roughly 3,000 ethnic groups. Human activities also have effects on insect biodiversity.


The word insect comes from the Latin word insectum, meaning "with a notched or divided body", or literally "cut into", from the neuter singular perfect passive participle of insectare, "to cut into, to cut up", from in- "into" and secare from seco "to cut"; because insects appear "cut into" three sections. The Latin word was introduced by Pliny the Elder who calqued the Ancient Greek word éntomon "insect" (as in entomology) from éntomos "cut into sections" or "cut in pieces"; éntomon was Aristotle's term for this class of life, also in reference to their "notched" bodies. The English word insect first appears documented in 1601 in Holland's translation of Pliny. Translations of Aristotle's term also form the usual word for insect in Welsh (trychfil, from trychu "to cut" and mil, "animal"), Serbo-Croatian (zareznik, from rezati, "to cut"), etc.

In common parlance, insects are also called bugs (derived from Middle English bugge meaning "scarecrow, hobgoblin") though this term usually includes all terrestrial arthropods. The term is also occasionally extended to colloquial names for freshwater or marine crustaceans (e.g. Balmain bug, Moreton Bay bug, mudbug) and used by physicians and bacteriologists for disease-causing germs (e.g. superbugs), but entomologists to some extent reserve this term for a narrow category of "true bugs", insects of the order Hemiptera, such as cicadas and shield bugs.


Estimates of the total number of insect species, or those within specific orders, often vary considerably. Globally, averages of these estimates suggest there are around 1.5 million beetle species and 5.5 million insect species, with about 1 million insect species currently described. E. O. Wilson has estimated that the number of individual insects alive at any one time is around 10 quintillion (10 billion billion).

Between 950,000 and 1,000,000 of all described species are insects, so over 50% of all described eukaryotes (1.8 million) are insects. With only 950,000 known non-insects, if the actual number of insects is 5.5 million, they may represent over 80% of the total. As only about 20,000 new species of all organisms are described each year, most insect species may remain undescribed, unless the rate of species descriptions greatly increases. Of the 24 orders of insects, four dominate in terms of numbers of described species; at least 670,000 identified species belong to Coleoptera, Diptera, Hymenoptera or Lepidoptera.
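The percentages in this paragraph follow directly from the species counts; this short Python check (an illustration built only from the figures quoted above) confirms both the "over 50%" and "over 80%" claims.

```python
# Share of described eukaryote species that are insects.
described_insects = 1_000_000      # upper end of the described range
described_eukaryotes = 1_800_000   # total described eukaryote species
share_described = described_insects / described_eukaryotes  # ~0.556, over 50%

# If the true number of insect species is 5.5 million and described
# non-insects stay at about 950,000:
true_insects = 5_500_000
non_insects = 950_000
share_true = true_insects / (true_insects + non_insects)  # ~0.853, over 80%
```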

The International Union for Conservation of Nature has documented population trends for insect species in the orders Collembola, Hymenoptera, Lepidoptera, Odonata, and Orthoptera. Of the 203 insect species with such documented population trends in 2013, 33% were in decline.

As of 2017, at least 66 insect species extinctions had been recorded in the previous 500 years, generally on oceanic islands. Declines in insect abundance have been attributed to artificial lighting, land use changes such as urbanization or agricultural use, pesticide use, and invasive species. Studies summarized in a 2019 review suggested that a large proportion of insect species is threatened with extinction in the 21st century. The ecologist Manu Sanders notes that the 2019 review was biased by mostly excluding data showing increases or stability in insect population, with the studies limited to specific geographic areas and specific groups of species. A larger 2020 meta-study, analyzing data from 166 long-term surveys, suggested that populations of terrestrial insects are decreasing rapidly, by about 9% per decade. Claims of pending mass insect extinctions or "insect apocalypse" based on a subset of these studies have been popularized in news reports, but often extrapolate beyond the study data or hyperbolize study findings. Other areas have shown increases in some insect species, although trends in most regions are currently unknown. It is difficult to assess long-term trends in insect abundance or diversity because historical measurements are generally not known for many species. Robust data to assess at-risk areas or species is especially lacking for arctic and tropical regions and a majority of the southern hemisphere.

Additional Information

An insect’s three main body regions are the head, thorax, and abdomen.

The HEAD holds most of the sensory organs, including the mouth, antennae, and eyes. An insect’s mouth is much more complicated than our own mouths, and the shape varies widely between different insects. A pair of antennae are used to taste and smell the world. The compound eyes are made up of many tiny lenses; the more lenses, the sharper the vision. In addition to eyes on the head, some insects have light-sensitive organs in various places on their bodies.

The THORAX is the insect’s central body region. It contains all the muscles for the legs and wings, which are attached to this part of the body. Insects have six segmented legs, which take many different forms depending on their function. For example, legs might be modified for swimming, jumping, capturing prey, or holding on to a mate. Most insects have four wings, but some insects have none. Wings can be membranous, covered in loose scales, or modified into tiny gyroscopes or hardened covers.

The ABDOMEN is the final and largest body region. It holds most of the insect’s guts and reproductive organs. Some insects breathe directly through skin or gills, but most breathe through small holes on the sides of the body, called spiracles. The reproductive organs are often very complicated structures that can take many different forms. In male insects, they can be used as claws or sound-making apparatuses. In females they can be modified into piercing spears or stingers.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1935 2023-10-19 00:08:15

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1939) The Guinness Book of World Records


The Guinness Book of World Records, also called Guinness World Records, is an annual reference book covering all types of records about the world and its inhabitants. Published worldwide, The Guinness Book of World Records has been translated into more than 40 languages. It is one of the top-selling copyrighted books in publishing history, selling an average of about 3.5 million copies annually and more than 150 million copies since it was first released in 1955. The book is best known for its lists and descriptions of records related to various extremes of size, speed, and distance (such as the world’s tallest person, heaviest onion, fastest animal, and most-remote human-made object) and to unusual, as well as difficult-to-achieve, competitive challenges (such as the most coins stacked in 30 seconds, the longest arrow shot using only one’s feet, and the most people packed into a large automobile).

The Guinness Book of World Records, which inspires tens of thousands of people annually to attempt record-breaking feats, began as an idea conceived by British engineer and industrialist Sir Hugh Beaver, the managing director of the Guinness Brewery, to settle trivia disputes among bar patrons. During the early 1950s Beaver was involved in a dispute during a shooting party about the fastest game bird in Europe; however, the answer could not be found in any bird reference book. He sought the help of sports journalists Norris and Ross McWhirter, and Guinness Superlatives, Ltd., was founded in November 1954 to handle the book’s publication. A year later the McWhirter brothers published the 198-page first edition of roughly 4,000 entries, which were separated into several chapters (e.g., “The Universe,” “The Human Being,” “The Natural World,” “The World’s Structures,” and others). The first edition, called The Guinness Book of Records, became a best seller in England in only four months’ time, and three more editions were released over the following year. The McWhirter brothers were renowned for their painstaking fact-checking work, often traveling personally to far corners of the world to observe attempts at record-breaking or to determine whether new and different activities and curiosities were worthy of inclusion in the book. More than 60,000 Guinness world records had been cataloged in the publication’s database by 2022.


Guinness World Records, known from its inception in 1955 until 1999 as The Guinness Book of Records and in previous United States editions as The Guinness Book of World Records, is a British reference book published annually, listing world records both of human achievements and the extremes of the natural world. The brainchild of Sir Hugh Beaver, the book was co-founded by twin brothers Norris and Ross McWhirter in Fleet Street, London, in August 1955.

The first edition topped the bestseller list in the United Kingdom by Christmas 1955. The following year the book was launched internationally, and as of the 2022 edition, it is now in its 67th year of publication, published in 100 countries and 23 languages, and maintains over 53,000 records in its database.

The international franchise has extended beyond print to include television series and museums. The popularity of the franchise has resulted in Guinness World Records becoming the primary international source for cataloguing and verification of a huge number of world records. The organisation employs record adjudicators to verify the authenticity of the setting and breaking of records. Following a series of owners, the franchise has been owned by the Jim Pattison Group since 2008, with its headquarters moved to South Quay Plaza, Canary Wharf, London, in 2017. Since 2008, Guinness World Records has orientated its business model toward inventing new world records as publicity stunts for companies and individuals, which has attracted criticism.


On 10 November 1951, Sir Hugh Beaver, then the managing director of the Guinness Breweries, went on a shooting party in the North Slob, by the River Slaney in County Wexford, Ireland. After missing a shot at a golden plover, he became involved in an argument over which was the fastest game bird in Europe, the golden plover or the red grouse (it is the plover). That evening at Castlebridge House, he realised that it was impossible to confirm in reference books whether or not the golden plover was Europe's fastest game bird. Beaver knew that there must have been numerous other questions debated nightly among the public, but there was no book in the world with which to settle arguments about records. He realised then that a book supplying the answers to this sort of question might prove successful. Beaver's idea became reality when Guinness employee Christopher Chataway recommended university friends Norris and Ross McWhirter, who had been running a fact-finding agency in London. The twin brothers were commissioned in August 1954 to compile what became The Guinness Book of Records. A thousand copies were printed and given away.

After the founding of The Guinness Book of Records office at the top of Ludgate House, 107 Fleet Street, London, the first 198-page edition was bound on 27 August 1955 and went to the top of the British best-seller list by Christmas. The following year, it was introduced into the United States by New York publisher David Boehm and sold 70,000 copies. Since then, Guinness World Records has sold more than 100 million copies in 100 countries and 37 languages.

Because the book became a surprise hit, many further editions were printed, eventually settling into a pattern of one revision a year, published in September/October, in time for Christmas. The McWhirters continued to compile it for many years. Both brothers had an encyclopedic memory; on the BBC television series Record Breakers, based upon the book, they would take questions posed by children in the audience on various world records and were able to give the correct answer. Ross McWhirter was assassinated by two members of the Provisional Irish Republican Army in 1975, in response to his offering a £50,000 reward for information leading to the capture of members of the organisation. Following Ross's assassination, the segment of the show in which children's questions about records were answered was renamed Norris on the Spot. Norris carried on as the book's sole editor.

Guinness Superlatives, later Guinness World Records Limited, was formed in 1954 to publish the first book. Sterling Publishing owned the rights to the Guinness book in the US for decades until it was repurchased by Guinness in 1989 after an 18-month long lawsuit. The group was owned by Guinness PLC and subsequently Diageo until 2001, when it was purchased by Gullane Entertainment for $65 million. Gullane was itself purchased by HIT Entertainment in 2002. In 2006, Apax Partners purchased HIT and subsequently sold Guinness World Records in early 2008 to the Jim Pattison Group, the parent company of Ripley Entertainment, which is licensed to operate Guinness World Records' Attractions. With offices in New York City and Tokyo, Guinness World Records' global headquarters remain in London, specifically South Quay Plaza, Canary Wharf, while its museum attractions are based at Ripley headquarters in Orlando, Florida, US.


Recent editions have focused on record feats by individuals. Competitions range from obvious ones such as Olympic weightlifting to the longest egg-tossing distance, the longest time spent playing Grand Theft Auto IV, or the number of hot dogs that can be consumed in three minutes. Besides records about competitions, it contains facts such as the heaviest tumour, the most poisonous fungus, the longest-running soap opera, and the most valuable life-insurance policy, among others. Many records also relate to the youngest people to have achieved something, such as the youngest person to visit all nations of the world, currently held by Maurizio Giuliano.

Each edition contains a selection of the records from the Guinness World Records database, as well as select new records, with the criteria for inclusion changing from year to year.

The retirement of Norris McWhirter from his consulting role in 1995 and the subsequent decision by Diageo Plc to sell The Guinness Book of Records brand have shifted the focus of the books from text-oriented to illustrated reference. A selection of records are curated for the book from the full archive but all existing Guinness World Records titles can be accessed by creating a login on the company's website. Applications made by individuals for existing record categories are free of charge. There is an administration fee of $5 to propose a new record title.

A number of spin-off books and television series have also been produced.

Guinness World Records bestowed the record of "Person with the most records" on Ashrita Furman of Queens, NY, in April 2009; at that time, he held 100 records, while he currently holds over 220.

In 2005, Guinness designated 9 November as International Guinness World Records Day to encourage breaking of world records. In 2006, an estimated 100,000 people participated in over 10 countries. Guinness reported 2,244 new records in 12 months, which was a 173% increase over the previous year. In February 2008, NBC aired The Top 100 Guinness World Records of All Time and Guinness World Records made the complete list available on their website.

The popularity of the franchise has resulted in Guinness World Records becoming the primary international authority on the cataloguing and verification of a huge number of world records.

Defining records

For many records, Guinness World Records is the effective authority on the exact requirements for them and with whom records reside, the company providing adjudicators to events to determine the veracity of record attempts. The list of records which Guinness World Records covers is not fixed; records may be added and also removed for various reasons. The public is invited to submit applications for records, which can be either the bettering of existing records or substantial achievements which could constitute a new record. The company also provides corporate services for companies to "harness the power of record-breaking to deliver tangible success for their businesses."

Ethical and safety issues

Guinness World Records states several types of records it will not accept for ethical reasons, such as those related to the killing or harming of animals. In the 2006 Guinness Book of World Records, Colombian serial killer Pedro López was listed as the "most prolific serial killer", having murdered at least 110 people (López himself claimed to have murdered over 300) in Colombia, Ecuador, and Peru from the late 1960s to the 1980s. This was removed after complaints that the listing and category made a competition out of murder and were unethical.

Several world records that were once included in the book have been removed for ethical reasons, including concerns for the well-being of potential record breakers. For example, following publication of the "heaviest fish" record, many fish owners overfed their pets beyond the bounds of what was healthy, and therefore such entries were removed. The Guinness Book also dropped records within their "eating and drinking records" section of Human Achievements in 1991 over concerns that potential competitors could harm themselves and expose the publisher to potential litigation. These changes included the removal of all spirit, wine and beer drinking records, along with other unusual records for consuming such unlikely things as bicycles and trees. Other records, such as sword swallowing and rally driving (on public roads), were closed from further entry as the current holders had performed beyond what are considered safe human tolerance levels. There have been instances of closed categories being reopened. For example, the sword swallowing category was listed as closed in the 1990 Guinness Book of World Records, but has since been reopened with Johnny Strange breaking a sword swallowing record on Guinness World Records Live. Similarly, the speed beer drinking records which were dropped from the book in 1991, reappeared 17 years later in the 2008 edition, but were moved from the "Human Achievements" section of the older book to the "Modern Society" section of the newer edition.

As of 2011, it is required in the guidelines of all "large food" type records that the item be fully edible, and distributed to the public for consumption, to prevent food wastage.

Chain letters are also not allowed: "Guinness World Records does not accept any records relating to chain letters, sent by post or e-mail."

At the request of the U.S. Mint, in 1984, the book stopped accepting claims of large hoardings of pennies or other currency.

Environmentally unfriendly records (such as the releasing of sky lanterns and party balloons) are no longer accepted or monitored, in addition to records relating to tobacco or cannabis consumption or preparation.

Difficulty in defining records

For some potential categories, Guinness World Records has declined to list some records that are too difficult or impossible to determine. For example, its website states: "We do not accept any claims for beauty as it is not objectively measurable."

However, other categories of human skill relating to measurable speed, such as "World's Fastest Clapper", were instated. On 27 July 2010, Connor May (NSW, Australia) set that record with 743 claps in one minute.

On 10 December 2010, Guinness World Records stopped accepting submissions for the "dreadlock" category after investigation of its first and only female title holder, Asha Mandela, determining it was impossible to judge this record accurately.

Change in business model

Traditionally, the company made a large amount of its revenue via book sales to interested readers, especially children. The rise of the Internet began to cut into book sales starting in the 2000s, part of a general decline in the book industry. According to a 2017 story by Planet Money of NPR, Guinness began to realise that a lucrative new revenue source to replace falling book sales was the would-be record-holders themselves. While any person can theoretically send in a record to be verified for free, the approval process is slow. Would-be record breakers that paid fees ranging from US$12,000 to US$500,000 would be given advisors, adjudicators, help in finding good records to break as well as suggestions for how to do it, prompt service, and so on. In particular, corporations and celebrities seeking a publicity stunt to launch a new product or draw attention to themselves began to hire Guinness World Records, paying them for finding a record to break or to create a new category just for them. As such, they have been described as a native advertising company, with no clear distinction between content and advertisement.

Guinness World Records was criticised by television talk show host John Oliver on the program Last Week Tonight with John Oliver in August 2019. Oliver criticised Guinness for taking money from authoritarian governments for pointless vanity projects as it related to the main focus of his story, President of Turkmenistan Gurbanguly Berdimuhamedow. Oliver asked Guinness to work with Last Week Tonight to adjudicate a record for "Largest cake featuring a picture of someone falling off a horse", but according to Oliver, the offer did not work out after Guinness insisted on a non-disparagement clause. Guinness World Records denied the accusations and stated that they declined Oliver's offer to participate because "it was merely an opportunity to mock one of our record-holders," and that Oliver did not specifically request the record for the largest marble cake. As of 2021, the Guinness World Record for "Largest marble cake" remains with Betty Crocker Middle East in Saudi Arabia. Following Oliver's episode, Guinness World Records' ethics were called into question by human rights groups.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1936 2023-10-20 00:18:47

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1940) Real Estate


Real Estate is property in the form of land or buildings.


Real estate is property consisting of land and the buildings on it, along with its natural resources such as growing crops (e.g., timber), minerals, water, and wild animals. The term also covers immovable property of this nature, an interest vested in such property, an item of real property, and, more generally, buildings or housing in general. In terms of law, real relates to land property and is distinct from personal property, while estate means the "interest" a person has in that land property.

Real estate is different from personal property, which is not permanently attached to the land (or does not come with the land), such as vehicles, boats, jewelry, furniture, tools, farm animals, and the rolling stock of a farm.

In the United States, the transfer, owning, or acquisition of real estate can be through business corporations, individuals, nonprofit corporations, fiduciaries, or any legal entity as seen within the law of each U.S. state.

History of real estate

The natural right of a person to own property as a concept can be seen as having roots in Roman law as well as Greek philosophy. The profession of appraisal can be seen as beginning in England during the 1500s, as agricultural needs required land clearing and land preparation. Textbooks on the subject of surveying began to be written, and the term "surveying" was used in England, while the term "appraising" was more used in North America. Natural law, which can be seen as "universal law", was discussed among writers of the 15th and 16th centuries as it pertained to "property theory" and to inter-state relations dealing with foreign investments and the protection of citizens' private property abroad. Natural law can be seen as having an influence in Emerich de Vattel's 1758 treatise The Law of Nations, which conceptualized the idea of private property.

One of the largest initial real estate deals in history, known as the "Louisiana Purchase", happened in 1803 when the Louisiana Purchase Treaty was signed. This treaty paved the way for western expansion and made the U.S. the owner of the "Louisiana Territory", as the land was bought from France for $15 million, or roughly 4 cents per acre. The oldest real estate brokerage firm was established in 1855 in Chicago, Illinois, and was initially known as "L. D. Olmsted & Co." but is now known as "Baird & Warner". In 1908, the National Association of Realtors was founded in Chicago; in 1916, the name was changed to the National Association of Real Estate Boards, and this was also when the term "realtor" was coined to identify real estate professionals.

The stock market crash of 1929 and the Great Depression in the U.S. caused a major drop in real estate worth and prices, ultimately resulting in depreciation of 50% over the four years after 1929. Housing finance in the U.S. was greatly affected by the Banking Act of 1933 and the National Housing Act of 1934, which allowed for mortgage insurance for home buyers; this system was implemented by the Federal Deposit Insurance Corporation as well as the Federal Housing Administration. In 1938, an amendment was made to the National Housing Act, and Fannie Mae, a government agency, was established to serve as a secondary market for mortgages and to give lenders more money so that new homes could be funded.

Title VIII of the Civil Rights Act in the U.S., also known as the Fair Housing Act, was put into place in 1968 and dealt with the incorporation of African Americans into neighborhoods, as issues of discrimination were analyzed in the renting, buying, and financing of homes. Internet real estate as a concept began in 1999 with the first appearance of real estate platforms on the World Wide Web.

Residential real estate

Residential real estate may contain either a single family or multifamily structure that is available for occupation for non-business purposes.

Residences can be classified by how they are connected to neighbouring residences and land. Different types of housing tenure can be used for the same physical type. For example, connected residences might be owned by a single entity and leased out, or owned separately with an agreement covering the relationship between units and common areas and concerns.

According to the Congressional Research Service, in 2021, 65% of homes in the U.S. were owner-occupied.


Real estate is defined as the land and any permanent structures, like a home, or improvements attached to the land, whether natural or man-made.

Real estate is a form of real property. It differs from personal property, which is not permanently attached to the land, such as vehicles, boats, jewelry, furniture, and farm equipment.

* Real estate is considered real property that includes land and anything permanently attached to it or built on it, whether natural or man-made.
* There are five main categories of real estate which include residential, commercial, industrial, raw land, and special use.
* Investing in real estate includes purchasing a home, rental property, or land.
* Indirect investment in real estate can be made via REITs or through pooled real estate investment.

Understanding Real Estate

The terms land, real estate, and real property are often used interchangeably, but there are distinctions.

Land refers to the earth's surface down to the center of the earth and upward to the airspace above, including the trees, minerals, and water. The physical characteristics of land include its immobility, indestructibility, and uniqueness, where each parcel of land differs geographically.

Real estate encompasses the land, plus any permanent man-made additions, such as houses and other buildings. Any additions or changes to the land that affect the property's value are called improvements.

Once land is improved, the total capital and labor used to build the improvement represent a sizable fixed investment. Though a building can be razed, improvements like drainage, electricity, water and sewer systems tend to be permanent.

Real property includes the land and additions to the land plus the rights inherent to its ownership and usage.

Real Estate Agent

A real estate agent is a licensed professional who arranges real estate transactions, matching buyers and sellers and acting as their representatives in negotiations.

What Are Types of Real Estate?

Residential real estate: Any property used for residential purposes. Examples include single-family homes, condos, cooperatives, duplexes, townhouses, and multifamily residences.

Commercial real estate: Any property used exclusively for business purposes, such as apartment complexes, gas stations, grocery stores, hospitals, hotels, offices, parking facilities, restaurants, shopping centers, stores, and theaters.

Industrial real estate: Any property used for manufacturing, production, distribution, storage, and research and development.

Land: Includes undeveloped property, vacant land, and agricultural lands such as farms, orchards, ranches, and timberland.

Special purpose: Property used by the public, such as cemeteries, government buildings, libraries, parks, places of worship, and schools.

The Economics of Real Estate

Real estate is a critical driver of economic growth in the U.S., and housing starts, the number of new residential construction projects in any given month, released by the U.S. Census Bureau, is a key economic indicator. The report includes building permits, housing starts, and housing completions data, for single-family homes, homes with 2-4 units, and multifamily buildings with five or more units, such as apartment complexes.

Investors and analysts keep a close eye on housing starts because the numbers can provide a general sense of economic direction. Moreover, the types of new housing starts can give clues about how the economy is developing.

If housing starts indicate fewer single-family and more multifamily starts, it could signal an impending supply shortage for single-family homes, driving up home prices.

How to Invest in Real Estate

Some of the most common ways to invest in real estate include homeownership, investment or rental properties, and house flipping. One type of real estate investor is a real estate wholesaler who contracts a home with a seller, then finds an interested party to buy it. Real estate wholesalers generally find and contract distressed properties but don't do any renovations or additions.

The earnings from investment in real estate are garnered from revenue from rent or leases, and appreciation of the real estate's value. According to ATTOM, which oversees the nation's premier property database, the year-end 2021 U.S. home sales report shows that home sellers nationwide realized a profit of $94,092, a 45.3% return on investment, up 45% from $64,931 in 2020, and up 71% from $55,000 two years earlier.

Real estate is dramatically affected by its location; factors such as employment rates, the local economy, crime rates, transportation facilities, school quality, municipal services, and property taxes can all affect its value.

Investing in real estate indirectly is done through a real estate investment trust (REIT), a company that holds a portfolio of income-producing real estate. There are several types of REITs, including equity, mortgage, and hybrid REITs; REITs are also classified based on how their shares are bought and sold, such as publicly traded REITs, public non-traded REITs, and private REITs.

The most popular way to invest in a REIT is to buy shares that are publicly traded on an exchange. The shares trade like any other security traded on an exchange, such as stocks, which makes REITs very liquid and transparent. Income from REITs is earned through dividend payments and appreciation of the shares. In addition to individual REITs, investors can trade in real estate mutual funds and real estate exchange-traded funds (ETFs).

Another option for investing in real estate is via mortgage-backed securities (MBS), such as through the Vanguard Mortgage-Backed Securities ETF (VMBS), made up of federal agency-backed MBS that have minimum pools of $1 billion and minimum maturity of one year.

What Are the Best Ways to Finance a Real Estate Investment?

Real estate is commonly purchased with cash or financed with a mortgage through a private or commercial lender.

What Is Real Estate Development?

Real estate development, or property development, includes activities that range from renovating existing buildings to the purchase of raw land and the sale of developed land or parcels to others.

What Careers are Common in the Real Estate Industry?

Common careers found in the real estate industry include leasing agent, foreclosure specialist, title examiner, home inspector, real estate appraiser, real estate agent, and mortgage broker.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1937 2023-10-21 00:06:22

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1941) Proxima Centauri


The closest star to us is actually our very own Sun at 93,000,000 miles (150,000,000 km). The next closest star is Proxima Centauri. It lies at a distance of about 4.3 light-years or about 25,300,000,000,000 miles (about 39,900,000,000,000 kilometers). A car travelling at a speed of 60 miles per hour would take more than 48 million years to reach this closest star. Proxima Centauri is actually part of a triple star system which also includes the stars Alpha Centauri A and B. It is a small, red star which is about one tenth the size of our Sun.
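
As a sanity check, the 48-million-year figure follows directly from the quoted distance and speed:

```python
# Rough travel time to Proxima Centauri at highway speed,
# using the distance and speed quoted above.
distance_miles = 25_300_000_000_000  # ~25.3 trillion miles
speed_mph = 60                       # car speed, miles per hour
hours_per_year = 24 * 365.25

travel_years = distance_miles / speed_mph / hours_per_year
print(f"{travel_years:.2e} years")   # on the order of 48 million years
```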


Our closest stellar neighbor: Proxima Centauri.

Proxima Centauri lies in the constellation of Centaurus (the Centaur), just over four light-years from Earth. Although it looks bright through the eye of Hubble, as you might expect from the nearest star to the solar system, Proxima Centauri is not visible to the unaided eye. Its average luminosity is very low, and it is quite small compared to other stars, at only about an eighth of the mass of the Sun.

However, on occasion, its brightness increases. Proxima is what is known as a "flare star," meaning that convection processes within the star's body make it prone to random and dramatic changes in brightness. The convection processes not only trigger brilliant bursts of starlight but, combined with other factors, mean that Proxima Centauri is in for a very long life. Astronomers predict that this star will remain middle-aged – or a "main sequence" star in astronomical terms – for another four trillion years, some 300 times the age of the current universe.

Proxima Centauri is actually part of a triple star system; its two companions are Alpha Centauri A and B.

Although by cosmic standards it is a close neighbor, Proxima Centauri remains a point-like object even using Hubble's eagle-eyed vision, hinting at the vast scale of the universe around us.


Proxima Centauri is a small, low-mass star located 4.2465 light-years (1.3020 pc) away from the Sun in the southern constellation of Centaurus. Its Latin name means the 'nearest [star] of Centaurus'. It was discovered in 1915 by Robert Innes and is the nearest-known star to the Sun. With a quiescent apparent magnitude of 11.13, it is too faint to be seen with the unaided eye. Proxima Centauri is a member of the Alpha Centauri star system, being identified as component Alpha Centauri C, and is 2.18° to the southwest of the Alpha Centauri AB pair. It is currently 12,950 AU (0.2 ly) from AB, which it orbits with a period of about 550,000 years.

Proxima Centauri is a red dwarf star with a mass about 12.5% of the Sun's mass (M☉), and average density about 33 times that of the Sun. Because of Proxima Centauri's proximity to Earth, its angular diameter can be measured directly. Its actual diameter is about one-seventh (14%) the diameter of the Sun. Although it has a very low average luminosity, Proxima Centauri is a flare star that randomly undergoes dramatic increases in brightness because of magnetic activity. The star's magnetic field is created by convection throughout the stellar body, and the resulting flare activity generates a total X-ray emission similar to that produced by the Sun. The internal mixing of its fuel by convection through its core, and Proxima's relatively low energy-production rate, mean that it will be a main-sequence star for another four trillion years.

Proxima Centauri has two known exoplanets and one candidate exoplanet: Proxima Centauri b, Proxima Centauri d and the disputed Proxima Centauri c. Proxima Centauri b orbits the star at a distance of roughly 0.05 AU (7.5 million km) with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.07 times that of Earth. Proxima b orbits within Proxima Centauri's habitable zone—the range where temperatures are right for liquid water to exist on its surface—but, because Proxima Centauri is a red dwarf and a flare star, the planet's habitability is highly uncertain. A candidate super-Earth, Proxima Centauri c, roughly 1.5 AU (220 million km) away from Proxima Centauri, orbits it every 1,900 d (5.2 yr). A sub-Earth, Proxima Centauri d, roughly 0.029 AU (4.3 million km) away, orbits it every 5.1 days.
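
As a cross-check, Proxima b's quoted ~11.2-day period is consistent with Kepler's third law in solar units, P² = a³/M (P in years, a in AU, M in solar masses). A minimal sketch, assuming a semi-major axis of 0.0485 AU and the ~12.5%-of-solar mass quoted above:

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun]
a_au = 0.0485        # semi-major axis of Proxima b (~0.05 AU, as quoted; assumed precise value)
m_star = 0.122       # stellar mass, ~12.5% of the Sun's

period_years = math.sqrt(a_au**3 / m_star)
period_days = period_years * 365.25
print(f"{period_days:.1f} days")  # close to the quoted ~11.2-day orbit
```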

General characteristics

Proxima Centauri is a red dwarf, because it belongs to the main sequence on the Hertzsprung–Russell diagram and is of spectral class M5.5. The M5.5 class means that it falls in the low-mass end of M-type dwarf stars, with its hue shifted toward red-yellow by an effective temperature of ~3,000 K. Its absolute visual magnitude, or its visual magnitude as viewed from a distance of 10 parsecs (33 ly), is 15.5. Its total luminosity over all wavelengths is only 0.16% that of the Sun, although when observed in the wavelengths of visible light the eye is most sensitive to, it is only 0.0056% as luminous as the Sun. More than 85% of its radiated power is at infrared wavelengths.
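
The 0.0056% visual figure follows from the magnitude scale, where a difference of 5 magnitudes corresponds to a factor of 100 in brightness. A quick sketch, assuming a standard value of 4.83 for the Sun's absolute visual magnitude (not given in the text):

```python
m_proxima = 15.5   # absolute visual magnitude of Proxima, from the text
m_sun = 4.83       # Sun's absolute visual magnitude (assumed standard value)

# A difference of 5 magnitudes is a factor of 100 in brightness,
# so the flux ratio is 10^(-delta_m / 2.5).
ratio = 10 ** (-(m_proxima - m_sun) / 2.5)
print(f"{ratio:.4%}")  # about 0.0054%, close to the quoted 0.0056%
```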

In 2002, optical interferometry with the Very Large Telescope (VLTI) found that the angular diameter of Proxima Centauri is 1.02±0.08 mas. Because its distance is known, the actual diameter of Proxima Centauri can be calculated to be about 1/7 that of the Sun, or 1.5 times that of Jupiter.
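
That conversion relies on the definition of the parsec: at a distance of 1 pc, an angle of 1 arcsecond subtends exactly 1 AU. A small sketch of the arithmetic, using rounded constants:

```python
distance_pc = 1.3020     # distance to Proxima Centauri, parsecs (from the text)
theta_arcsec = 1.02e-3   # angular diameter, 1.02 mas, in arcseconds

# By the definition of the parsec, 1 arcsec at 1 pc subtends 1 AU,
# so physical size in AU = distance (pc) * angle (arcsec).
diameter_au = distance_pc * theta_arcsec
diameter_km = diameter_au * 1.496e8      # kilometres per AU
ratio_to_sun = diameter_km / 1.3927e6    # solar diameter in km

print(f"{ratio_to_sun:.3f}")  # about 0.14, i.e. roughly 1/7 of the Sun
```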

Structure and fusion

Proxima Centauri's chromosphere is active, and its spectrum displays a strong emission line of singly ionized magnesium at a wavelength of 280 nm. About 88% of the surface of Proxima Centauri may be active, a percentage that is much higher than that of the Sun even at the peak of the solar cycle. Even during quiescent periods with few or no flares, this activity increases the corona temperature of Proxima Centauri to 3.5 million K, compared to the 2 million K of the Sun's corona, and its total X-ray emission is comparable to the Sun's. Proxima Centauri's overall activity level is considered low compared to other red dwarfs, which is consistent with the star's estimated age of 4.85 billion years, since the activity level of a red dwarf is expected to steadily wane over billions of years as its stellar rotation rate decreases. The activity level appears to vary with a period of roughly 442 days, which is shorter than the solar cycle of 11 years.

Proxima Centauri has a relatively weak stellar wind, no more than 20% of the mass loss rate of the solar wind. Because the star is much smaller than the Sun, the mass loss per unit surface area from Proxima Centauri may be eight times that from the solar surface.

Life phases

A red dwarf with the mass of Proxima Centauri will remain on the main sequence for about four trillion years. As the proportion of helium increases because of hydrogen fusion, the star will become smaller and hotter, gradually transforming into a so-called "blue dwarf". Near the end of this period it will become significantly more luminous, reaching 2.5% of the Sun's luminosity (L☉) and warming up any orbiting bodies for a period of several billion years. When the hydrogen fuel is exhausted, Proxima Centauri will then evolve into a helium white dwarf (without passing through the red giant phase) and steadily lose any remaining heat energy.

The Alpha Centauri system may form naturally through a low-mass star being dynamically captured by a more massive binary of 1.5–2 M☉ within their embedded star cluster before the cluster disperses. However, more accurate measurements of the radial velocity are needed to confirm this hypothesis. If Proxima Centauri was bound to the Alpha Centauri system during its formation, the stars are likely to share the same elemental composition. The gravitational influence of Proxima might have stirred up the Alpha Centauri protoplanetary disks. This would have increased the delivery of volatiles such as water to the dry inner regions, so possibly enriching any terrestrial planets in the system with this material.

Alternatively, Proxima Centauri may have been captured at a later date during an encounter, resulting in a highly eccentric orbit that was then stabilized by the galactic tide and additional stellar encounters. Such a scenario may mean that Proxima Centauri's planetary companions have had a much lower chance for orbital disruption by Alpha Centauri. As the members of the Alpha Centauri pair continue to evolve and lose mass, Proxima Centauri is predicted to become unbound from the system in around 3.5 billion years from the present. Thereafter, the star will steadily diverge from the pair.

Future exploration

Because of the star's proximity to Earth, Proxima Centauri has been proposed as a flyby destination for interstellar travel. If non-nuclear, conventional propulsion technologies are used, the flight of a spacecraft to Proxima Centauri and its planets would probably require thousands of years. For example, Voyager 1, which is now travelling 17 km/s (38,000 mph) relative to the Sun, would reach Proxima Centauri in 73,775 years, were the spacecraft travelling in the direction of that star. A slow-moving probe would have only several tens of thousands of years to catch Proxima Centauri near its closest approach, before the star would recede out of reach.
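
A back-of-the-envelope version of that travel-time estimate, assuming a constant 17 km/s over the full distance (the quoted 73,775-year figure presumably uses slightly different input values):

```python
LY_KM = 9.4607e12            # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.2465 * LY_KM # distance to Proxima Centauri
speed_km_s = 17              # Voyager 1's speed relative to the Sun

travel_years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"{travel_years:,.0f} years")  # on the order of 75,000 years
```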

Nuclear pulse propulsion might enable such interstellar travel with a trip timescale of a century, inspiring several studies such as Project Orion, Project Daedalus, and Project Longshot. Project Breakthrough Starshot aims to reach the Alpha Centauri system within the first half of the 21st century, with microprobes travelling at 20% of the speed of light propelled by around 100 gigawatts of Earth-based lasers. The probes would perform a fly-by of Proxima Centauri to take photos and collect data of its planets' atmospheric compositions. It would take 4.25 years for the information collected to be sent back to Earth.

Additional Information

Proxima Centauri, the closest star to our sun

Proxima Centauri is a red dwarf.

The star Proxima Centauri isn’t visible to the eye. Yet it’s one of the most noted stars in the heavens. That’s because it’s part of the Alpha Centauri system, the closest star system to our sun. Alpha Centauri consists of three known stars (including Proxima). Of the three, Proxima is closest, at 4.2 light-years away. Alpha Centauri appears as the 3rd-brightest star in Earth’s sky. But Proxima is too faint to see with the eye. It’s a small, dim, low-mass star, with its own planetary system. It has two confirmed planets – and possibly a third planet – known so far. Proxima is a red dwarf star. And, like other red dwarfs, it’s also known to have massive solar flares.

Usually, when stars are so close to Earth, they appear bright in our sky. Consider the star Sirius, for example, in the constellation Canis Major. Sirius is the brightest star in Earth’s sky, at just 8.6 light-years away. So why isn’t Proxima Centauri, at 4.22 light-years away, even brighter?

Far from being bright, Proxima is exceedingly dim. It shines at about +11 magnitude.

And that’s the nature of red dwarf stars like Proxima. These stars – the most common sorts of stars in our Milky Way galaxy – are too puny to shine brightly. Proxima contains only about an eighth of the mass of our sun. Faint red Proxima Centauri is only 3,100 Kelvin (5,100 degrees F or 2,800 C) in contrast to 5,778 K for our sun.

Proxima’s location, seen from Earth

In Earth’s sky, Proxima is located in the direction of our constellation Centaurus the Centaur. That’s a southern constellation, best seen from Earth’s Southern Hemisphere. And indeed – although it can be glimpsed from very southerly latitudes in the Northern Hemisphere – many experienced northern stargazers say they’ve never seen Alpha Centauri.

Proxima lies nearly a fifth of a light-year from Alpha Centauri A and B. Because it’s so far from the two primary stars, some scientists question whether it’s really part of the star system. The charts below will give you a sense of Proxima’s location with respect to our sun and other stars, and with respect to Alpha Centauri A and B.

All of these nearby stars – Proxima, Alpha Centauri A and B, and our sun – orbit together around the center of our Milky Way galaxy.

Planets orbiting Proxima

In 2016, the European Southern Observatory (ESO) announced the discovery of Proxima b, a planet orbiting Proxima Centauri at a distance of roughly 4.7 million miles (7.5 million km) with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.3 times that of the Earth.

Proxima b is within the habitable zone of its star. But a 2017 study suggests that the exoplanet does not have an Earth-like atmosphere.

In June 2020, scientists announced that they had discovered a second planet around the star, Proxima c. This second planet is a lot larger than Earth and orbits its star every 1,907 days, at about 1.5 times Earth’s distance from the sun.

Then in 2022, researchers using the European Southern Observatory’s Very Large Telescope (VLT) announced the discovery of a third planet candidate orbiting Proxima Centauri. The planet, Proxima d, is only 1/4 the mass of Earth and one of the smallest and lightest exoplanets ever discovered. It orbits close to its star, at a distance of approximately 4 million kilometers (2.5 million miles), and completes one orbit in only five days.

By the way, any planets around Proxima Centauri would contend with massive flares that pop off the red dwarf, which might or might not spell doom.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1938 2023-10-22 00:09:19

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1942) Factory


A factory is a building or set of buildings with facilities for manufacturing; the seat of some kind of production.


A factory is a structure in which work is organized to meet the need for production on a large scale, usually with power-driven machinery. In the 17th–18th century, the domestic system of work in Europe began giving way to larger units of production, and capital became available for investment in industrial enterprises. The movement of population from country to city also contributed to change in work methods. Mass production, which transformed the organization of work, came about through the development of the machine-tool industry. With precision equipment, large numbers of identical parts could be produced at low cost and with a small workforce. The assembly line was first widely used in the U.S. meat-packing industry; Henry Ford designed an automobile assembly line in 1913. By mid-1914, chassis assembly time had fallen from 12½ man-hours to 93 man-minutes. Some countries, particularly in Asia and South America, began industrializing in the 1970s and later.


A factory, manufacturing plant or a production plant is an industrial facility, often a complex consisting of several buildings filled with machinery, where workers manufacture items or operate machines which process each item into another. They are a critical part of modern economic production, with the majority of the world's goods being created or processed within factories.

Factories arose with the introduction of machinery during the Industrial Revolution, when the capital and space requirements became too great for cottage industry or workshops. Early factories that contained small amounts of machinery, such as one or two spinning mules, and fewer than a dozen workers have been called "glorified workshops".

Most modern factories have large warehouses or warehouse-like facilities that contain heavy equipment used for assembly line production. Large factories tend to be located with access to multiple modes of transportation, some having rail, highway and water loading and unloading facilities. In some countries like Australia, it is common to call a factory building a "Shed".

Factories may either make discrete products or some type of continuously produced material, such as chemicals, pulp and paper, or refined oil products. Factories manufacturing chemicals are often called plants and may have most of their equipment – tanks, pressure vessels, chemical reactors, pumps and piping – outdoors and operated from control rooms. Oil refineries have most of their equipment outdoors.

Discrete products may be final goods, or parts and sub-assemblies which are made into final products elsewhere. Factories may be supplied parts from elsewhere or make them from raw materials. Continuous production industries typically use heat or electricity to transform streams of raw materials into finished products.

The term mill originally referred to the milling of grain, which usually used natural resources such as water or wind power until those were displaced by steam power in the 19th century. Because many processes like spinning and weaving, iron rolling, and paper manufacturing were originally powered by water, the term survives as in steel mill, paper mill, etc.

One source states that the first machines were traps used to assist in capturing animals: a machine being a mechanism that operates independently, or with very little force from human interaction, and can be used repeatedly with exactly the same operation on every occasion of functioning. The wheel was invented c. 3000 BC, the spoked wheel c. 2000 BC. The Iron Age began approximately 1200–1000 BC. However, other sources define machinery as a means of production.

Archaeology dates the earliest city, Tell Brak, to about 5000 BC (Ur et al. 2006), giving a date for the cooperation and demand, through increased community size and population, that would make something like factory-level production a conceivable necessity.

Archaeologist Bonnet unearthed the foundations of numerous workshops in the city of Kerma, proving that as early as 2000 BC Kerma was a large urban capital.

The watermill was first made in the Persian Empire some time before 350 BC. In the third century BC, Philo of Byzantium describes a water-driven wheel in his technical treatises. Factories producing garum were common in the Roman Empire. The Barbegal aqueduct and mills are an industrial complex from the second century AD found in southern France. By the fourth century AD, there was a water-milling installation in the Roman Empire with a capacity to grind 28 tons of grain per day, a rate sufficient to meet the needs of 80,000 persons.
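
The per-person ration implied by that last claim is easy to check (assuming metric tons):

```python
tons_per_day = 28
kg_per_ton = 1000        # assuming metric tons
people_served = 80_000

kg_per_person_per_day = tons_per_day * kg_per_ton / people_served
print(kg_per_person_per_day)  # 0.35 kg of grain per person per day
```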

The large population increase in medieval Islamic cities, such as Baghdad's 1.5 million population, led to the development of large-scale factory milling installations with higher productivity to feed and support the large growing population. A tenth-century grain-processing factory in the Egyptian town of Bilbays, for example, produced an estimated 300 tons of grain and flour per day. Both watermills and windmills were widely used in the Islamic world at the time.

The Venice Arsenal also provides one of the first examples of a factory in the modern sense of the word. Founded in 1104 in Venice, Republic of Venice, several hundred years before the Industrial Revolution, it mass-produced ships on assembly lines using manufactured parts. The Venice Arsenal apparently produced nearly one ship every day and, at its height, employed 16,000 people.

Industrial Revolution

One of the earliest factories was John Lombe's water-powered silk mill at Derby, operational by 1721. By 1746, an integrated brass mill was working at Warmley near Bristol. Raw material went in at one end, was smelted into brass and was turned into pans, pins, wire, and other goods. Housing was provided for workers on site. Josiah Wedgwood in Staffordshire and Matthew Boulton at his Soho Manufactory were other prominent early industrialists, who employed the factory system.

The factory system came into widespread use somewhat later, when cotton spinning was mechanized.

Richard Arkwright is the person credited with inventing the prototype of the modern factory. After he patented his water frame in 1769, he established Cromford Mill, in Derbyshire, England, significantly expanding the village of Cromford to accommodate the migrant workers new to the area. The factory system was a new way of organizing workforce made necessary by the development of machines which were too large to house in a worker's cottage. Working hours were as long as they had been for the farmer, that is, from dawn to dusk, six days per week. Overall, this practice essentially reduced skilled and unskilled workers to replaceable commodities. Arkwright's factory was the first successful cotton spinning factory in the world; it showed unequivocally the way ahead for industry and was widely copied.

Between 1770 and 1850, mechanized factories supplanted traditional artisan shops as the predominant form of manufacturing institution, because the larger-scale factories enjoyed a significant technological and supervision advantage over the small artisan shops. The earliest factories (using the factory system) developed in the cotton and wool textiles industry. Later generations of factories included mechanized shoe production and manufacturing of machinery, including machine tools. After these came factories that supplied the railroad industry, including rolling mills, foundries and locomotive works, along with agricultural-equipment factories that produced cast-steel plows and reapers. Bicycles were mass-produced beginning in the 1880s.

The Nasmyth, Gaskell and Company's Bridgewater Foundry, which began operation in 1836, was one of the earliest factories to use modern materials handling such as cranes and rail tracks through the buildings for handling heavy items.

Large scale electrification of factories began around 1900 after the development of the AC motor which was able to run at constant speed depending on the number of poles and the current electrical frequency. At first larger motors were added to line shafts, but as soon as small horsepower motors became widely available, factories switched to unit drive. Eliminating line shafts freed factories of layout constraints and allowed factory layout to be more efficient. Electrification enabled sequential automation using relay logic.

Assembly line

Henry Ford further revolutionized the factory concept in the early 20th century with the innovation of mass production. Highly specialized laborers situated alongside a series of rolling ramps would build up a product such as (in Ford's case) an automobile. This concept dramatically decreased production costs for virtually all manufactured goods and brought about the age of consumerism.

In the mid- to late 20th century, industrialized countries introduced next-generation factories with two improvements:

* Advanced statistical methods of quality control, pioneered by the American mathematician William Edwards Deming, whom his home country initially ignored. Quality control turned Japanese factories into world leaders in cost-effectiveness and production quality.
* Industrial robots on the factory floor, introduced in the late 1970s. These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This too cut costs and improved speed.

Some speculation as to the future of the factory includes scenarios with rapid prototyping, nanotechnology, and orbital zero-gravity facilities. There is some scepticism about whether such factories of the future will develop if advances in robotics are not matched by a comparable rise in the technological skill of the people who operate them. According to some authors, the four basic pillars of the factories of the future are strategy, technology, people and habitability, which would take the form of a kind of "laboratory factories", with management models that allow "producing with quality while experimenting to do it better tomorrow".


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1939 2023-10-23 00:16:36

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1943) Wheel


A wheel is a circular object connected at the centre to a bar, used for making vehicles or parts of machines move.


A wheel is a circular frame of hard material that may be solid, partly solid, or spoked and that is capable of turning on an axle.

A Sumerian (Erech) pictograph, dated about 3500 BC, shows a sledge equipped with wheels. The idea of wheeled transportation may have come from the use of logs for rollers, but the oldest known wheels were wooden disks consisting of three carved planks clamped together by transverse struts.

Spoked wheels appeared about 2000 BC, when they were in use on chariots in Asia Minor. Later developments included iron hubs (centerpieces) turning on greased axles, and the introduction of a tire in the form of an iron ring that was expanded by heat and dropped over the rim and that on cooling shrank and drew the members tightly together.

The use of a wheel (turntable) for pottery had also developed in Mesopotamia by 3500 BC.

The early waterwheels, used for lifting water from a lower to a higher level for irrigation, consisted of a number of pots tied to the rim of a wheel that was caused to rotate about a horizontal axis by running water or by a treadmill. The lower pots were submerged and filled in the running stream; when they reached their highest position, they poured their contents into a trough that carried the water to the fields.

The three power sources used in the Middle Ages—animal, water, and wind—were all exploited by means of wheels. One method of driving millstones for grinding grain was to fit a long horizontal arm to the vertical shaft connected to the stone and pull or push it with a horse or other beast of burden. Waterwheels and windmills were also used to drive millstones.

Because the wheel made controlled rotary motion possible, it was of decisive importance in machine design. Rotating machines for performing repetitive operations driven by steam engines were important elements in the Industrial Revolution. Rotary motion permits a continuity in magnitude and direction that is impossible with linear motion, which in a machine always involves reversals and changes in magnitude.


A wheel is a circular component that is intended to rotate on an axle bearing. The wheel is one of the key components of the wheel and axle which is one of the six simple machines. Wheels, in conjunction with axles, allow heavy objects to be moved easily facilitating movement or transportation while supporting a load, or performing labor in machines. Wheels are also used for other purposes, such as a ship's wheel, steering wheel, potter's wheel, and flywheel.

Common examples can be found in transport applications. A wheel, used together with an axle, reduces friction by rolling instead of sliding. For a wheel to rotate, a moment must be applied to it about its axis, either by gravity or by another external force or torque. Using the wheel, the Sumerians invented a device that spins clay as a potter shapes it into the desired object.


The place and time of the invention of the wheel remain unclear, because the oldest hints do not guarantee the existence of real wheeled transport, or are dated with too much scatter. Older sources credit Mesopotamian civilization with the invention of the wheel. However, more recent sources either credit prehistoric Eastern Europeans with the invention or suggest that the wheel was invented independently in both Mesopotamia and Eastern Europe, and that, unlike other breakthrough inventions, it cannot be attributed to a single inventor or even a few. Evidence of early usage of wheeled carts has been found across the Middle East, in Europe, Eastern Europe, India and China. It is not known whether the Chinese, Indians, Europeans and even Mesopotamians invented the wheel independently or not.

The invention of the solid wooden disk wheel falls into the late Neolithic, and may be seen in conjunction with other technological advances that gave rise to the early Bronze Age. This implies the passage of several wheelless millennia even after the invention of agriculture and of pottery, during the Aceramic Neolithic.

* 4500–3300 BCE (Copper Age): invention of the potter's wheel; earliest solid wooden wheels (disks with a hole for the axle); earliest wheeled vehicles; domestication of the horse

* 3300–2200 BCE (Early Bronze Age)

* 2200–1550 BCE (Middle Bronze Age): invention of the spoked wheel and the chariot

The Ljubljana Marshes Wheel, found together with its axle, is the oldest wooden wheel yet discovered, dating to the Copper Age (c. 3130 BCE).
The Halaf culture of 6500–5100 BCE is sometimes credited with the earliest depiction of a wheeled vehicle, but this is doubtful as there is no evidence of Halafians using either wheeled vehicles or even pottery wheels. Precursors of pottery wheels, known as "tournettes" or "slow wheels", were known in the Middle East by the 5th millennium BCE. One of the earliest examples was discovered at Tepe Pardis, Iran, and dated to 5200–4700 BCE. These were made of stone or clay and secured to the ground with a peg in the center, but required significant effort to turn. True potter's wheels, which are freely-spinning and have a wheel and axle mechanism, were developed in Mesopotamia (Iraq) by 4200–4000 BCE. The oldest surviving example, which was found in Ur (modern day Iraq), dates to approximately 3100 BCE. Wheels of uncertain dates have also been found in the Indus Valley civilization, a 4th millennium BCE civilization covering areas of present-day India and Pakistan.

The oldest indirect evidence of wheeled movement, in the form of miniature clay wheels, was found north of the Black Sea and dates to before 4000 BCE. From the middle of the 4th millennium BCE onward, evidence accumulates throughout Europe in the form of toy cars, depictions, or wheel ruts, with the oldest find, in Northern Germany, dating back to around 3400 BCE. In Mesopotamia, depictions of wheeled wagons found on clay-tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization, are dated to c. 3500–3350 BCE. In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern (Maykop culture) and South Caucasus and in Eastern Europe (Cucuteni–Trypillian culture).

Depictions of a wheeled vehicle appeared between 3631 and 3380 BCE on the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn, 40 m long with three doors, dates to 5000 BCE (about 7000 years ago) and belonged to the neolithic Linear Pottery culture. Surviving evidence of a wheel-axle combination, from Stare Gmajne near Ljubljana in Slovenia (the Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known: a circumalpine type of wagon construction (the wheel and axle rotate together, as in the Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (the axle does not rotate). Both are dated to c. 3200–3000 BCE. Some historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE.

Early wheels were simple wooden disks with a hole for the axle. Some of the earliest wheels were made from horizontal slices of tree trunks. Because of the uneven structure of wood, a wheel made from a horizontal slice of a tree trunk will tend to be inferior to one made from rounded pieces of longitudinal boards.

The spoked wheel was invented more recently and allowed the construction of lighter and swifter vehicles. The earliest known examples of wooden spoked wheels are in the context of the Sintashta culture, dating to c. 2000 BCE (Krivoye Lake). Soon after this, horse cultures of the Caucasus region used horse-drawn spoked-wheel war chariots for the greater part of three centuries. They moved deep into the Greek peninsula where they joined with the existing Mediterranean peoples to give rise, eventually, to classical Greece after the breaking of Minoan dominance and consolidations led by pre-classical Sparta and Athens. Celtic chariots introduced an iron rim around the wheel in the 1st millennium BCE.

In China, wheel tracks dating to around 2200 BCE have been found at Pingliangtai, a site of the Longshan Culture. Similar tracks were also found at Yanshi, a city of the Erlitou culture, dating to around 1700 BCE. The earliest evidence of spoked wheels in China comes from Qinghai, in the form of two wheel hubs from a site dated between 2000 and 1500 BCE.

In Britain, a large wooden wheel, measuring about 1 m (3.3 ft) in diameter, was uncovered at the Must Farm site in East Anglia in 2016. The specimen, dating from 1,100 to 800 BCE, represents the most complete and earliest of its type found in Britain. The wheel's hub is also present. A horse's spine found nearby suggests the wheel may have been part of a horse-drawn cart. The wheel was found in a settlement built on stilts over wetland, indicating that the settlement had some sort of link to dry land.

Although large-scale use of wheels did not occur in the Americas prior to European contact, numerous small wheeled artifacts, identified as children's toys, have been found in Mexican archeological sites, some dating to approximately 1500 BCE. Some argue that the primary obstacle to large-scale development of the wheel in the Americas was the absence of domesticated large animals that could be used to pull wheeled carriages. The closest relative of cattle present in the Americas in pre-Columbian times, the American bison, is difficult to domesticate and was never domesticated by Native Americans; several horse species existed until about 12,000 years ago but ultimately became extinct. The only large animal domesticated in the Western hemisphere, the llama, a pack animal, was not physically suited to use as a draft animal pulling wheeled vehicles, and use of the llama did not spread far beyond the Andes by the time the Europeans arrived.

Mesoamericans, moreover, never developed the wheelbarrow, the potter's wheel, or any other practical object with a wheel or wheels. Although the wheel was present in a number of toys, very similar to those found throughout the world and still made for children today ("pull toys"), it was never put into practical use in Mesoamerica before the 16th century. Possibly the closest the Mayas came to the utilitarian wheel is the spindle whorl, and some scholars believe that these toys were originally made with spindle whorls and spindle sticks as "wheels" and "axles".

Aboriginal Australians traditionally used circular discs rolled along the ground for target practice.

Nubians from after about 400 BCE used wheels for spinning pottery and as water wheels. It is thought that Nubian waterwheels may have been ox-driven. It is also known that Nubians used horse-drawn chariots imported from Egypt.

Starting from the 18th century in West Africa, wheeled vehicles were mostly used for ceremonial purposes in places like Dahomey. With the exception of Ethiopia and Somalia, the wheel was barely used for transportation in Sub-Saharan Africa well into the 19th century.

The spoked wheel was in continued use without major modification until the 1870s, when wire-spoked wheels and pneumatic tires were invented. Pneumatic tires can greatly reduce rolling resistance and improve comfort. Wire spokes are under tension, not compression, making it possible for the wheel to be both stiff and light. Early radially-spoked wire wheels gave rise to tangentially-spoked wire wheels, which were widely used on cars into the late 20th century. Cast alloy wheels are now more commonly used; forged alloy wheels are used when weight is critical.

The invention of the wheel has also been important for technology in general, important applications including the water wheel, the cogwheel, the spinning wheel, and the astrolabe or torquetum. More modern descendants of the wheel include the propeller, the jet engine, the flywheel (gyroscope) and the turbine.

Mechanics and function

A wheeled vehicle requires much less work to move than simply dragging the same weight. The low resistance to motion is explained by the fact that the frictional work done is no longer at the surface that the vehicle is traversing, but in the bearings. In the simplest and oldest case the bearing is just a round hole through which the axle passes (a "plain bearing"). Even with a plain bearing, the frictional work is greatly reduced because:

* The normal force at the sliding interface is the same as with simple dragging.
* The sliding distance is reduced for a given distance of travel.
* The coefficient of friction at the interface is usually lower.


If a 100 kg object is dragged for 10 m along a surface with coefficient of friction μ = 0.5, the normal force is 981 N and the work done (work = friction force × distance) is 0.5 × 981 × 10 = 4905 joules.

Now give the object four wheels. The total normal force between the wheels and their axles is the same, 981 N. Assume, for wood, μ = 0.25, and say the wheel diameter is 1000 mm and the axle diameter is 50 mm. While the object still moves 10 m, the sliding frictional surfaces only slide over each other a distance of 10 × (50/1000) = 0.5 m. The work done is 0.25 × 981 × 0.5 ≈ 123 joules; the work has been reduced to 1/40 of that required for dragging.
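The arithmetic of this comparison can be checked with a short script (a minimal sketch using only the figures quoted in the example, with g rounded to 9.81 m/s²):

```python
# Energy to move a 100 kg load 10 m: dragging vs. rolling on
# plain wooden bearings, using the figures quoted above.

g = 9.81                      # gravitational acceleration, m/s^2
normal_force = 100 * g        # 981 N for a 100 kg load

# Dragging: mu = 0.5, and the sliding distance is the full 10 m of travel.
work_drag = 0.5 * normal_force * 10          # 4905 J

# Four wheels: mu = 0.25 at the axle; the sliding distance shrinks by the
# axle/wheel diameter ratio (50 mm / 1000 mm), so only 0.5 m of sliding.
sliding_distance = 10 * (50 / 1000)          # 0.5 m
work_wheels = 0.25 * normal_force * sliding_distance  # ~123 J

print(round(work_drag), round(work_wheels), round(work_drag / work_wheels))
# 4905 123 40
```

The 1/40 ratio falls directly out of the two factors listed above: halving the friction coefficient and cutting the sliding distance twentyfold.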

Additional energy is lost at the wheel-to-road interface. This is termed rolling resistance, which is predominantly a deformation loss. It depends on the nature of the ground, the material of the wheel, its inflation pressure in the case of a tire, any net torque exerted by the engine, and many other factors.

A wheel can also offer advantages in traversing irregular surfaces if the wheel radius is sufficiently large compared to the irregularities.

The wheel alone is not a machine, but when attached to an axle in conjunction with a bearing, it forms the wheel and axle, one of the simple machines. A driven wheel is an example of a wheel and axle. Wheels pre-date driven wheels by about 6000 years; they are themselves an evolution of using round logs as rollers to move a heavy load, a practice going back so far into prehistory that it has not been dated.



Rim

The rim is the "outer edge of a wheel, holding the tire". It makes up the outer circular design of the wheel on which the inside edge of the tire is mounted on vehicles such as automobiles. For example, on a bicycle wheel the rim is a large hoop attached to the outer ends of the spokes of the wheel that holds the tire and tube.

In the 1st millennium BCE an iron rim was introduced around the wooden wheels of chariots.


Hub

The hub is the center of the wheel; it typically houses a bearing and is where the spokes meet.

A hubless wheel (also known as a rim-rider or centerless wheel) is a type of wheel with no center hub. More specifically, the hub is actually almost as big as the wheel itself. The axle is hollow, following the wheel at very close tolerances.


Spokes

A spoke is one of a number of rods radiating from the center of a wheel (the hub, where the axle connects), connecting the hub with the round traction surface. The term originally referred to portions of a log that had been split lengthwise into four or six sections. The radial members of a wagon wheel were made by carving a spoke (from a log) into its finished shape. A spokeshave is a tool originally developed for this purpose. Eventually, the term spoke was more commonly applied to the finished product of the wheelwright's work than to the materials used.


Wire wheels

The rims of wire wheels (or "wire-spoked wheels") are connected to their hubs by wire spokes. Although these wires are generally stiffer than a typical wire rope, they function mechanically the same as tensioned flexible wires, keeping the rim true while supporting applied loads.

Wire wheels are used on most bicycles and still used on many motorcycles. They were invented by aeronautical engineer George Cayley and first used in bicycles by James Starley. A process of assembling wire wheels is described as wheelbuilding.


Tire

A tire (in American English and Canadian English) or tyre (in some Commonwealth nations such as the UK, India, South Africa, Australia and New Zealand) is a ring-shaped covering that fits around a wheel rim to protect it and enable better vehicle performance by providing a flexible cushion that absorbs shock while keeping the wheel in close contact with the ground. The word itself may be derived from the word "tie", which refers to the outer steel ring of a wooden cart wheel that ties the wood segments together.

The fundamental materials of modern tires are synthetic rubber, natural rubber, fabric, and wire, along with other compound chemicals. They consist of a tread and a body. The tread provides traction while the body ensures support. Before rubber was invented, the first versions of tires were simply bands of metal that fitted around wooden wheels to prevent wear and tear. Today, the vast majority of tires are pneumatic inflatable structures, comprising a doughnut-shaped body of cords and wires encased in rubber and generally filled with compressed air to form an inflatable cushion. Pneumatic tires are used on many types of vehicles, such as cars, bicycles, motorcycles, trucks, earthmovers, and aircraft.

Protruding or covering attachments

Extreme off-road conditions have resulted in the invention of several types of wheel cover, which may be constructed as removable attachments or as permanent covers. Wheels fitted with these are no longer necessarily round, or they have panels that make the ground-contact area flat.

Examples include:

* Snow chains - Specially designed chain assemblies that wrap around the tire to provide increased grip, designed for deep snow.
* Dreadnaught wheel - Permanently attached hinged panels for general extreme off-road use. These are not connected directly to the wheels, but to each other.
* Pedrail wheel - A system of rails that holds panels that hold the vehicle. These do not necessarily have to be built as a circle (wheel) and are thus also a form of Continuous track.
* A version of the above examples (name unknown to the writer) was commonly used on heavy artillery during World War I. Specific examples: Cannone da 149/35 A and the Big Bertha. These were panels that were connected to each other by multiple hinges and could be installed over a contemporary wheel.
* Continuous track - A system of linked and hinged chains/panels that cover multiple wheels in a way that allows the vehicle's mass to be distributed across the space between wheels that are positioned in front of or behind other wheels.
* "Tire totes" - A bag designed to cover a tire to improve traction in deep snow.

Truck and bus wheels may lock (stop rotating) under certain circumstances, such as brake system failure. To help detect this, they sometimes feature "wheel rotation indicators": colored strips of plastic attached to the rim and protruding out from it, such that they can be seen by the driver in the side-view mirrors. These devices were invented and patented in 1998 by a Canadian truck shop owner.

Additional Information

Imagine a world without wheels. We would have to find an alternative way to drive our vehicles around, our steering “wheels” would likely be steering “squares,” and we wouldn’t even be able to fly to our destinations in the same way anymore. After all, airplanes have to taxi into position before taking off. The wheel is considered to be one of the oldest and most important inventions in the world.

The origins of the wheel can be traced back to ancient Mesopotamia in the 5th millennium BC, where it was first used as a potter's wheel. Evidence of the wheel can also be found in ancient China and ancient India. Even the western hemisphere created wheel-like toys for children as far back as 1500 BC.

Hundreds of thousands of years before the invention of the wheel, some unlucky hominin stepped on a loose rock or unstable log and—just before they cracked their skull—discovered that a round object reduces friction with the ground.

The inevitability of this moment of clarity explains the ancient ubiquity of rollers, which are simply logs put underneath heavy objects. The Egyptians and the Mesopotamians used them to build their pyramids and roll their heavy equipment, and the Polynesians to move the stone moai statues on Easter Island. But rollers aren’t terribly efficient, because they have to be replaced as they roll forward, and even if they’re pinned underneath, friction makes them horribly difficult to move. The solution—and the stroke of brilliance—was the axle. Yet despite the roller’s antiquity, it doesn’t appear that anyone, anywhere, discovered the wheel and axle until an ingenious potter approximately 6,000 years ago.

The oldest axle ever discovered is not on a wagon or cart, but instead on a potter's wheel in Mesopotamia. These may seem like simple machines, but they are the first evidence that anyone anywhere recognized that the center of a spinning disk is stationary and used it to their mechanical advantage. It is a completely ingenious observation, and so novel that it is unclear where the idea came from—perhaps from a bead spinning on string?—as it has no obvious analogue in nature. The central pole is called an axle, and many scholars consider it the greatest mechanical insight in the history of humankind.

Yet there exists another great intellectual leap between the potter’s wheel and a set of wheels on a rolling object. The full wheel set appears to have first been invented by a mother or father potter, because the world’s oldest axles are made of clay, are about two inches long, and sit beneath rolling animal figurines.

The first wheeled vehicle, in other words, was a toy.

In July 1880, the archaeologist Désiré Charnay discovered the first pre‑Columbian wheel set in the Americas. It was on a small coyote figure mounted on four wheels, and Charnay found it in the tomb of an Aztec child buried south of Mexico City.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1940 2023-10-24 00:09:27

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1944) Bone marrow


Bone marrow is a spongy substance found in the center of the bones. It manufactures bone marrow stem cells and other substances, which in turn produce blood cells. Each type of blood cell made by the bone marrow has an important job. Red blood cells carry oxygen to tissues in the body.


Bone marrow, also called myeloid tissue, is a soft, gelatinous tissue that fills the cavities of the bones. Bone marrow is either red or yellow, depending upon the preponderance of hematopoietic (red) or fatty (yellow) tissue. In humans the red bone marrow forms all of the blood cells with the exception of the lymphocytes, which are produced in the marrow and reach their mature form in the lymphoid organs. Red bone marrow also contributes, along with the liver and spleen, to the destruction of old red blood cells. Yellow bone marrow serves primarily as a storehouse for fats but may be converted to red marrow under certain conditions, such as severe blood loss or fever. At birth and until about the age of seven, all human marrow is red, as the need for new blood formation is high. Thereafter, fat tissue gradually replaces the red marrow, which in adults is found only in the vertebrae, hips, breastbone, ribs, and skull and at the ends of the long bones of the arm and leg; other cancellous, or spongy, bones and the central cavities of the long bones are filled with yellow marrow.

Red marrow consists of a delicate, highly vascular fibrous tissue containing stem cells, which differentiate into various blood cells. Stem cells first become precursors, or blast cells, of various kinds; normoblasts give rise to the red blood cells (erythrocytes), and myeloblasts become the granulocytes, a type of white blood cell (leukocyte). Platelets, small blood cell fragments involved in clotting, form from giant marrow cells called megakaryocytes. The new blood cells are released into the sinusoids, large thin-walled vessels that drain into the veins of the bone. In mammals, blood formation in adults takes place predominantly in the marrow. In lower vertebrates a number of other tissues may also produce blood cells, including the liver and the spleen.

Because the white blood cells produced in the bone marrow are involved in the body’s immune defenses, marrow transplants have been used to treat certain types of immune deficiency and hematological disorders, especially leukemia. The sensitivity of marrow to damage by radiation therapy and some anticancer drugs accounts for the tendency of these treatments to impair immunity and blood production.

Examination of the bone marrow is helpful in diagnosing certain diseases, especially those related to blood and blood-forming organs, because it provides information on iron stores and blood production. Bone marrow aspiration, the direct removal of a small amount (about 1 ml) of bone marrow, is accomplished by suction through a hollow needle. The needle is usually inserted into the hip or sternum (breastbone) in adults and into the upper part of the tibia (the larger bone of the lower leg) in children. The necessity for a bone marrow aspiration is ordinarily based on previous blood studies and is particularly useful in providing information on various stages of immature blood cells. Disorders in which bone marrow examination is of special diagnostic value include leukemia, multiple myeloma, Gaucher disease, unusual cases of anemia, and other hematological diseases.


Bone marrow is a semi-solid tissue found within the spongy (also known as cancellous) portions of bones. In birds and mammals, bone marrow is the primary site of new blood cell production (or haematopoiesis). It is composed of hematopoietic cells, marrow adipose tissue, and supportive stromal cells. In adult humans, bone marrow is primarily located in the ribs, vertebrae, sternum, and bones of the pelvis. Bone marrow comprises approximately 5% of total body mass in healthy adult humans, such that a man weighing 73 kg (161 lbs) will have around 3.7 kg (8 lbs) of bone marrow.

Human marrow produces approximately 500 billion blood cells per day, which join the systemic circulation via permeable vasculature sinusoids within the medullary cavity. All types of hematopoietic cells, including both myeloid and lymphoid lineages, are created in bone marrow; however, lymphoid cells must migrate to other lymphoid organs (e.g. thymus) in order to complete maturation.

Bone marrow transplants can be conducted to treat severe diseases of the bone marrow, including certain forms of cancer such as leukemia. Several types of stem cells are related to bone marrow. Hematopoietic stem cells in the bone marrow can give rise to hematopoietic lineage cells, and mesenchymal stem cells, which can be isolated from the primary culture of bone marrow stroma, can give rise to bone, adipose, and cartilage tissue.


The composition of marrow is dynamic, as the mixture of cellular and non-cellular components (connective tissue) shifts with age and in response to systemic factors. In humans, marrow is colloquially characterized as "red" or "yellow" marrow (Latin: medulla ossium rubra, Latin: medulla ossium flava, respectively) depending on the prevalence of hematopoietic cells vs fat cells. While the precise mechanisms underlying marrow regulation are not understood, compositional changes occur according to stereotypical patterns. For example, a newborn baby's bones exclusively contain hematopoietically active "red" marrow, and there is a progressive conversion towards "yellow" marrow with age. In adults, red marrow is found mainly in the central skeleton, such as the pelvis, sternum, cranium, ribs, vertebrae and scapulae, and variably found in the proximal epiphyseal ends of long bones such as the femur and humerus. In circumstances of chronic hypoxia, the body can convert yellow marrow back to red marrow to increase blood cell production.

Hematopoietic components

At the cellular level, the main functional component of bone marrow includes the progenitor cells which are destined to mature into blood and lymphoid cells. Human marrow produces approximately 500 billion blood cells per day. Marrow contains hematopoietic stem cells which give rise to the three classes of blood cells that are found in circulation: white blood cells (leukocytes), red blood cells (erythrocytes), and platelets (thrombocytes).


Stroma

The stroma of the bone marrow includes all tissue not directly involved in the marrow's primary function of hematopoiesis. Stromal cells may be indirectly involved in hematopoiesis, providing a microenvironment that influences the function and differentiation of hematopoietic cells. For instance, they generate colony-stimulating factors, which have a significant effect on hematopoiesis. Cell types that constitute the bone marrow stroma include:

* fibroblasts (reticular connective tissue)
* macrophages, which contribute especially to red blood cell production, as they deliver iron for hemoglobin production.
* adipocytes (fat cells)
* osteoblasts (synthesize bone)
* osteoclasts (resorb bone)
* endothelial cells, which form the sinusoids. These derive from endothelial stem cells, which are also present in the bone marrow.


Mesenchymal stem cells

The bone marrow stroma contains mesenchymal stem cells (MSCs), which are also known as marrow stromal cells. These are multipotent stem cells that can differentiate into a variety of cell types. MSCs have been shown to differentiate, in vitro or in vivo, into osteoblasts, chondrocytes, myocytes, marrow adipocytes and beta-pancreatic islets cells.

Bone marrow barrier

The blood vessels of the bone marrow constitute a barrier, inhibiting immature blood cells from leaving the marrow. Only mature blood cells contain the membrane proteins, such as aquaporin and glycophorin, that are required to attach to and pass the blood vessel endothelium. Hematopoietic stem cells may also cross the bone marrow barrier, and may thus be harvested from blood.

Lymphatic role

The red bone marrow is a key element of the lymphatic system, being one of the primary lymphoid organs that generate lymphocytes from immature hematopoietic progenitor cells. The bone marrow and thymus constitute the primary lymphoid tissues involved in the production and early selection of lymphocytes. Furthermore, bone marrow performs a valve-like function to prevent the backflow of lymphatic fluid in the lymphatic system.


Compartmentalization

Biological compartmentalization is evident within the bone marrow, in that certain cell types tend to aggregate in specific areas. For instance, erythrocytes, macrophages, and their precursors tend to gather around blood vessels, while granulocytes gather at the borders of the bone marrow.

As food

People have used animal bone-marrow in cuisine worldwide for millennia, as in the famed Milanese Ossobuco.

Clinical significance


The normal bone marrow architecture can be damaged or displaced by aplastic anemia, malignancies such as multiple myeloma, or infections such as tuberculosis, leading to a decrease in the production of blood cells and blood platelets. The bone marrow can also be affected by various forms of leukemia, which attack its hematologic progenitor cells. Furthermore, exposure to radiation or chemotherapy will kill many of the rapidly dividing cells of the bone marrow, and will therefore result in a depressed immune system. Many of the symptoms of radiation poisoning are due to damage sustained by the bone marrow cells.

To diagnose diseases involving the bone marrow, a bone marrow aspiration is sometimes performed. This typically involves using a hollow needle to acquire a sample of red bone marrow from the crest of the ilium under general or local anesthesia.

Application of stem cells in therapeutics

Bone marrow-derived stem cells have a wide array of applications in regenerative medicine.


Medical imaging may provide a limited amount of information regarding bone marrow. Plain film x-rays pass through soft tissues such as marrow and do not provide visualization, although any changes in the structure of the associated bone may be detected. CT imaging has somewhat better capacity for assessing the marrow cavity of bones, although with low sensitivity and specificity. For example, normal fatty "yellow" marrow in adult long bones is of low density (-30 to -100 Hounsfield units), between subcutaneous fat and soft tissue. Tissue with increased cellular composition, such as normal "red" marrow or cancer cells within the medullary cavity will measure variably higher in density.
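The Hounsfield-unit ranges above can be turned into a toy classifier. This is only an illustrative Python sketch using the density range quoted in the text; the function name and the neighbouring-tissue labels are invented for the example and are not diagnostic criteria.

```python
# Illustrative sketch: classify a marrow-cavity CT measurement by density,
# using the Hounsfield-unit range quoted above for fatty "yellow" marrow
# (-30 to -100 HU). The labels outside that range are rough assumptions.
def classify_marrow_voxel(hu: float) -> str:
    if -100 <= hu <= -30:
        return "yellow (fatty) marrow"  # low density, between fat and soft tissue
    if hu < -100:
        return "fat-like (below marrow range)"
    return "higher-density tissue (e.g. red marrow or cellular lesion)"

print(classify_marrow_voxel(-65))  # falls in the quoted yellow-marrow range
```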

MRI is more sensitive and specific for assessing bone composition. MRI enables assessment of the average molecular composition of soft tissues and thus provides information regarding the relative fat content of marrow. In adult humans, "yellow" fatty marrow is the dominant tissue in bones, particularly in the (peripheral) appendicular skeleton. Because fat molecules have a high T1-relaxivity, T1-weighted imaging sequences show "yellow" fatty marrow as bright (hyperintense). Furthermore, normal fatty marrow loses signal on fat-saturation sequences, in a similar pattern to subcutaneous fat.

When "yellow" fatty marrow becomes replaced by tissue with more cellular composition, this change is apparent as decreased brightness on T1-weighted sequences. Both normal "red" marrow and pathologic marrow lesions (such as cancer) are darker than "yellow" marrow on T1-weighted sequences, although they can often be distinguished by comparison with the MR signal intensity of adjacent soft tissues. Normal "red" marrow is typically equivalent to or brighter than skeletal muscle or intervertebral disc on T1-weighted sequences.

Fatty marrow change, the inverse of red marrow hyperplasia, can occur with normal aging, though it can also be seen with certain treatments such as radiation therapy. Diffuse marrow T1 hypointensity without contrast enhancement or cortical discontinuity suggests red marrow conversion or myelofibrosis. Falsely normal marrow on T1 can be seen with diffuse multiple myeloma or leukemic infiltration when the water to fat ratio is not sufficiently altered, as may be seen with lower grade tumors or earlier in the disease process.


Bone marrow examination is the pathologic analysis of samples of bone marrow obtained via biopsy and bone marrow aspiration. Bone marrow examination is used in the diagnosis of a number of conditions, including leukemia, multiple myeloma, anemia, and pancytopenia. The bone marrow produces the cellular elements of the blood, including platelets, red blood cells and white blood cells. While much information can be gleaned by testing the blood itself (drawn from a vein by phlebotomy), it is sometimes necessary to examine the source of the blood cells in the bone marrow to obtain more information on hematopoiesis; this is the role of bone marrow aspiration and biopsy.

The ratio between myeloid series and erythroid cells is relevant to bone marrow function, and also to diseases of the bone marrow and peripheral blood, such as leukemia and anemia. The normal myeloid-to-erythroid ratio is around 3:1; this ratio may increase in myelogenous leukemias, decrease in polycythemias, and reverse in cases of thalassemia.
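The 3:1 figure above is simply a quotient of cell counts from a marrow differential; a minimal sketch, with made-up counts purely for illustration:

```python
# Sketch: compute a myeloid-to-erythroid (M:E) ratio from cell counts in a
# marrow differential. The counts below are invented illustrative numbers.
def me_ratio(myeloid_count: int, erythroid_count: int) -> float:
    return myeloid_count / erythroid_count

ratio = me_ratio(300, 100)  # ~3:1, the normal ratio quoted above
print(f"M:E ratio = {ratio:.1f}:1")
```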

Donation and transplantation

In a bone marrow transplant, hematopoietic stem cells are removed from a person and infused into another person (allogeneic) or into the same person at a later time (autologous). If the donor and recipient are compatible, these infused cells will then travel to the bone marrow and initiate blood cell production. Transplantation from one person to another is conducted for the treatment of severe bone marrow diseases, such as congenital defects, autoimmune diseases or malignancies. The patient's own marrow is first killed off with drugs or radiation, and then the new stem cells are introduced. Before radiation therapy or chemotherapy in cases of cancer, some of the patient's hematopoietic stem cells are sometimes harvested and later infused back when the therapy is finished to restore the immune system.

Bone marrow stem cells can be induced to become neural cells to treat neurological illnesses, and can also potentially be used for the treatment of other illnesses, such as inflammatory bowel disease. In 2013, following a clinical trial, scientists proposed that bone marrow transplantation could be used to treat HIV in conjunction with antiretroviral drugs; however, it was later found that HIV remained in the bodies of the test subjects.


The stem cells are typically harvested directly from the red marrow in the iliac crest, often under general anesthesia. The procedure is minimally invasive and does not require stitches afterwards. Depending on the donor's health and reaction to the procedure, the actual harvesting can be an outpatient procedure, or can require 1–2 days of recovery in the hospital.

Another option is to administer certain drugs that stimulate the release of stem cells from the bone marrow into circulating blood. An intravenous catheter is inserted into the donor's arm, and the stem cells are then filtered out of the blood. This procedure is similar to that used in blood or platelet donation. In adults, bone marrow may also be taken from the sternum, while the tibia is often used when taking samples from infants. In newborns, stem cells may be retrieved from the umbilical cord.

Persistent viruses

Using quantitative polymerase chain reaction (qPCR) and next-generation sequencing (NGS), up to five DNA viruses per individual have been identified in bone marrow. These included several herpesviruses, hepatitis B virus, Merkel cell polyomavirus, and human papillomavirus 31. Given the reactivation and/or oncogenic potential of these viruses, their repercussions for hematopoietic and malignant disorders call for further study.

Fossil record

The earliest fossilised evidence of bone marrow was discovered in 2014 in Eusthenopteron, a lobe-finned fish which lived during the Devonian period approximately 370 million years ago. Scientists from Uppsala University and the European Synchrotron Radiation Facility used X-ray synchrotron microtomography to study the fossilised interior of the skeleton's humerus, finding organised tubular structures akin to modern vertebrate bone marrow. Eusthenopteron is closely related to the early tetrapods, which ultimately evolved into the land-dwelling mammals and lizards of the present day.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1941 2023-10-25 00:24:00

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1945) Dye


Dye is a substance used to impart colour to textiles, paper, leather, and other materials such that the colouring is not readily altered by washing, heat, light, or other factors to which the material is likely to be exposed.


A dye is a colored substance that chemically bonds to the substrate to which it is being applied. This distinguishes dyes from pigments which do not chemically bind to the material they color. Dye is generally applied in an aqueous solution and may require a mordant to improve the fastness of the dye on the fiber.

The majority of natural dyes are derived from non-animal sources: roots, berries, bark, leaves, wood, fungi and lichens. However, due to large-scale demand and technological improvements, most dyes used in the modern world are synthetically produced from substances such as petrochemicals. Some are still extracted from insects or minerals.

Synthetic dyes are produced from various chemicals. The great majority of dyes are obtained in this way because of their lower cost, superior optical properties (color), and resilience (fastness, mordancy). Both dyes and pigments are colored, because they absorb only some wavelengths of visible light. Dyes are usually soluble in some solvent, whereas pigments are insoluble. Some dyes can be rendered insoluble with the addition of salt to produce a lake pigment.


The color of a dye is dependent upon the ability of the substance to absorb light within the visible region of the electromagnetic spectrum (380-750 nm). An earlier theory known as Witt theory stated that a colored dye had two components, a chromophore which imparts color by absorbing light in the visible region (some examples are nitro, azo, quinoid groups) and an auxochrome which serves to deepen the color. This theory has been superseded by modern electronic structure theory which states that the color in dyes is due to excitation of valence π-electrons by visible light.

Food dyes

One other class that describes the role of dyes, rather than their mode of use, is the food dye. Because food dyes are classed as food additives, they are manufactured to a higher standard than some industrial dyes. Food dyes can be direct, mordant and vat dyes, and their use is strictly controlled by legislation. Many are azo dyes, although anthraquinone and triphenylmethane compounds are used for colors such as green and blue. Some naturally occurring dyes are also used.


Dyes differ from pigments, which are finely ground solids dispersed in a liquid, such as paint or ink, or blended with other materials. Most dyes are organic compounds (i.e., they contain carbon), whereas pigments may be inorganic compounds (i.e., they do not contain carbon) or organic compounds. Pigments generally give brighter colours and may be dyes that are insoluble in the medium employed.

Colour has always fascinated humankind, for both aesthetic and social reasons. Throughout history dyes and pigments have been major articles of commerce. Manufacture of virtually all commercial products involves colour at some stage, and today some 9,000 colorants with more than 50,000 trade names are used. The large number is a consequence of the range of tints and hues desired, the chemical nature of the materials to be coloured, and the fact that colour is directly related to the molecular structure of the dye.

History of dyes

Natural dyes

Until the 1850s virtually all dyes were obtained from natural sources, most commonly from vegetables, such as plants, trees, and lichens, with a few from insects. Solid evidence that dyeing methods are more than 4,000 years old has been provided by dyed fabrics found in Egyptian tombs. Ancient hieroglyphs describe extraction and application of natural dyes. Countless attempts have been made to extract dyes from brightly coloured plants and flowers; yet only a dozen or so natural dyes found widespread use. Undoubtedly most attempts failed because most natural dyes are not highly stable and occur as components of complex mixtures, the successful separation of which would be unlikely by the crude methods employed in ancient times. Nevertheless, studies of these dyes in the 1800s provided a base for development of synthetic dyes, which dominated the market by 1900.

Two natural dyes, alizarin and indigo, have major significance. Alizarin is a red dye extracted from the roots of the madder plant, Rubia tinctorium. Two other red dyes were obtained from scale insects. These include kermes, obtained from Coccus ilicis (or Kermes ilicis), which infects the Kermes oak, and cochineal, obtained from Dactylopius coccus, which lives on prickly pear cactus in Mexico. One kilogram (2.2 pounds) of cochineal dye can be obtained from an estimated 200,000 insects. The principal coloured components in these dyes are kermesic and carminic acids, respectively, whose similarity was established by 1920. In their natural state many colorants are rendered water-soluble through the presence of sugar residues. These sugars, however, are often lost during dye isolation procedures.
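The cochineal figures above imply a per-insect yield that is easy to check; a quick arithmetic sketch:

```python
# Arithmetic implied by the quoted figures: about 200,000 cochineal insects
# per kilogram of dye works out to roughly 5 milligrams of dye per insect.
grams_per_kg = 1000
insects_per_kg = 200_000
mg_per_insect = grams_per_kg / insects_per_kg * 1000  # grams -> milligrams

print(f"{mg_per_insect:.0f} mg of dye per insect")  # 5 mg
```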

Probably the oldest known dye is the blue dye indigo, obtained in Europe from the leaves of the dyer's woad herb, Isatis tinctoria, and in Asia from the indigo plant, Indigofera tinctoria. Even by modern standards, both alizarin and indigo have very good dyeing properties, and indigo remains a favoured dye for denim, although synthetic indigo has replaced the natural material.

With a process developed by the Phoenicians, a derivative of indigo, Tyrian purple, was extracted in very small amounts from the glands of a snail, Murex brandaris, indigenous to the Mediterranean Sea. Experiments in 1909 yielded 1.4 grams (0.05 ounce) from 12,000 snails. Historically, this dye was also called royal purple because kings, emperors, and high priests had the exclusive right to wear garments dyed with it, as is well documented in the Hebrew Bible and illustrated for Roman emperors on mosaics in Ravenna, Italy. By the 1450s, with the decline of the Eastern Roman Empire, the Mediterranean purple industry died out.

Natural yellow dyes include luteolin, from the leaves of weld, Reseda luteola, and quercetin, from the bark of the North American oak tree, Quercus tinctoria. These are in the flavonoid family, a group of compounds occurring almost exclusively in higher plants and producing the colours of many flowers. In fact, these compounds can produce all the colours of the rainbow except green. Luteolin, a yellow crystalline pigment, was used with indigo to produce Lincoln green, the colour associated with Robin Hood and his merry men.

Another group of compounds, the carotenoids, present in all green plants, produce yellow to red shades. Lycopene, from which all carotenoids are derived, produces the red colour of tomatoes. An ancient natural yellow dye, crocetin, was obtained from the stigmas of Crocus sativus; this dye is undoubtedly derived from lycopene in the plant. Few of the flavonoid and carotenoid colorants would have survived ancient extraction processes.

Logwood is the only natural dye used today. Heartwood extracts of the logwood tree, Haematoxylon campechianum, yield hematoxylin, which oxidizes to hematein during isolation. The latter is red but in combination with chromium gives shades of charcoal, gray, and black; it is used mainly to dye silk and leather.


Highly skilled craftsmen with closely guarded secret formulas rendered dyeing a well-protected trade. The formation of different colours by mixing red, blue, and yellow dyes was well known in ancient times, as was the use of metal salts to aid the retention of dyes on the desired material and to vary the resultant colours. Natural dyes cannot be applied directly to cotton, in contrast to wool and silk, although cotton can be dyed by vatting or by pretreatment with inorganic salts known as mordants (from Latin mordere, meaning “to bite”). These are adsorbed on the fibre and react with the dye to produce a less soluble form that is held to the fabric. Alum, KAl(SO4)2·12H2O, as well as iron, copper, and tin salts were common ancient mordants. No doubt the secret processes included other ingredients to improve the final results. Mordants also were used to vary the colours produced from a single dye. For example, treatment with aluminum hydroxide, Al(OH)3, before dyeing with alizarin produces Turkey red, the traditional red of British and French army uniforms. Alizarin gives violet colours with magnesium mordants, purple-red with calcium mordants, blue with barium mordants, and black-violet with ferrous salts. Around 1850, chromium salts, used as mordants, were found to provide superior dye retention and, in time, largely displaced the others; chromium mordants are still widely used for wool and, to some extent, for silk and nylon.

Decline of natural dyes

Until 1857 the dye industry utilized natural dyes almost exclusively; however, by 1900 nearly 90 percent of industrial dyes were synthetic. Several factors contributed to the commercial decline of natural dyes. By 1850 the Industrial Revolution in Europe led to a burgeoning textile industry, which created increased demand for readily available, inexpensive, and easily applied dyes and revealed the important economic limitations of natural dyes. Since most dyes were imported from distant sources, transportation delays were likely to slow the production of dyed materials. Dye quality was affected by the whims of nature and the dye maker’s skills. In addition, inefficient processes were often required for optimum results; for example, Turkey red dyeing could involve more than 20 steps to produce the desired bright, fast colour. Advances in organic chemistry, both practical and theoretical, spurred by studies of the many new compounds found in coal tar, increased interest in finding ways to utilize this by-product of coke production. The dye industry played a major role in the development of structural organic chemistry, which in turn provided a sound scientific foundation for the dye industry.

Synthetic dyes

In 1856 the first commercially successful synthetic dye, mauve, was serendipitously discovered by British chemist William H. Perkin, who recognized and quickly exploited its commercial significance. The introduction of mauve in 1857 triggered the decline in the dominance of natural dyes in world markets. Mauve had a short commercial lifetime (lasting about seven years), but its success catalyzed activities that quickly led to the discovery of better dyes. Today only one natural dye, logwood, is used commercially, to a small degree, to dye silk, leather, and nylon black.

The synthetic dye industry arose directly from studies of coal tar. By 1850 coal tar was an industrial nuisance because only a fraction was utilized as wood preservative, road binder, and a source of the solvent naphtha. Fortunately, it attracted the attention of chemists as a source of new organic compounds, isolable by distillation. A leading researcher in this area, German chemist August Wilhelm von Hofmann, had been attracted to England in 1845 to direct the Royal College of Chemistry. In the following 20 years, he trained most of the chemists in the English dye industry, one of whom was Perkin, the discoverer of mauve. The success of mauve led to demands by English textile manufacturers for other new dyes. By trial and error, reactions of coal tar compounds were found to yield useful dyes. However, Hofmann became disenchanted with this purely empirical approach, insisting that it was more important to understand the chemistry than to proceed blindly. In 1865 he returned to Germany, and by 1875 most of his students had been lured to German industrial positions. By 1900 more than 50 compounds had been isolated from coal tar, many of which were used in the developing German chemical industry. By 1914 the synthetic dye industry was firmly established in Germany, where 90 percent of the world’s dyes were produced.

Advances in the understanding of chemical structure, combined with strong industrial-academic interactions and favourable governmental practices, gave a setting well-suited for systematic development based on solid scientific foundations. Only a few Swiss firms and one in England survived the strong competition generated by the vigorous activity in the German dye industry.

Recognition of the tetravalency of carbon and the nature of the benzene ring were key factors required to deduce the molecular structures of the well-known natural dyes (e.g., indigo and alizarin) and the new synthetics (e.g., mauve, magenta, and the azo dyes). These structural questions were resolved, and industrial processes based on chemical principles were developed by the beginning of the 20th century. For example, Badische Anilin- & Soda-Fabrik (BASF) of Germany placed synthetic indigo on the market in 1897; development of the synthetic process of this compound was financed by profits from synthetic alizarin, first marketed in 1869.

There was also interest in the effects of dyes on living tissue. In 1884 the Danish microbiologist Hans Christian Gram discovered that crystal violet irreversibly stains certain bacteria but can be washed from others. The dye has been widely used ever since for the Gram stain technique, which identifies bacteria as gram-positive (the stain is retained) or gram-negative (the stain is washed away). The German medical scientist Paul Ehrlich found that methylene blue stains living nerve cells but not adjacent tissue. He proposed that compounds may exist that kill specific disease organisms by bonding to them without damaging the host cells and suggested the name chemotherapy (the treatment of diseases by chemical compounds).

Other uses were explored for compounds discovered during coal tar research; examples include aspirin (an analgesic) and saccharin (a sweetener). Coal tar studies became the foundation of the synthetic chemical industry, because coal tar was the major source of raw materials. However, coal by-products became less popular with the emergence of petroleum feedstocks in the 1930s, which gave rise to the petrochemical industry.

With the onset of World War I, British and U.S. industry were ill-prepared to provide products theretofore obtainable from Germany. The British government was forced to aid rejuvenation of its own dye industry; one measure brought several companies together, later to become part of Imperial Chemical Industries (ICI), modeled after the Bayer and BASF combines in Germany. In the United States a strong coal tar chemical industry quickly developed. After the war, leadership in organic chemistry began to shift from Germany to Switzerland, Britain, and the United States. In contrast to the combines of Europe, independent firms developed the U.S. research-oriented chemical industry.

A few new dye types were introduced in the 20th century, and major challenges were posed by the introduction of synthetic fibres, which held a major share of the world market, and by technological advances.

Dye structure and colour

Advances in structural theory led to investigations of correlations between chemical constitution and colour. In 1868 German chemists Carl Graebe and Carl Liebermann recognized that dyes contain sequences of conjugated double bonds: X=C―C=C―C=C―…, where X is carbon, oxygen, or nitrogen. In 1876 German chemist Otto Witt proposed that dyes contained conjugated systems of benzene rings bearing simple unsaturated groups (e.g., ―NO2, ―N=N―, ―C=O), which he called chromophores, and polar groups (e.g., ―NH2, ―OH), which he named auxochromes. These ideas remain valid, although they have been broadened by better recognition of the role of specific structural features. He also claimed that auxochromes impart dyeing properties to these compounds, but it later became clear that colour and dyeing properties are not directly related. Witt suggested the term chromogen for specific chromophore-auxochrome combinations.

Examples of dyes, each containing a different chromophore, include azobenzene, xanthene, and triphenylmethane. Alizarin contains the anthraquinone chromophore. These four dyes were commercial products in the late 1800s.

The colours of dyes and pigments are due to the absorption of visible light by the compounds. The electromagnetic spectrum spans a wavelength range of 10^12 metres, from long radio waves (about 10 km [6.2 miles]) to short X-rays (about 1 nm [1 nm = 10^-9 metre]), but human eyes detect radiation over only the small visible range of 400–700 nm. Organic compounds absorb electromagnetic energy, but only those with several conjugated double bonds appear coloured by the absorption of visible light (see spectroscopy: Molecular spectroscopy). Without substituents, chromophores do not absorb visible light, but the auxochromes shift the absorption of these chromogens into the visible region. In effect, the auxochromes extend the conjugated system. Absorption spectra (plots of absorption intensity versus wavelength) are used to characterize specific compounds. In visible spectra, the absorption patterns tend to be broad bands with maxima at longer wavelengths corresponding to more extended conjugation. The position and shape of the absorption band affect the appearance of the observed colour. Many compounds absorb in the ultraviolet region, with some absorptions extending into the violet (400–430 nm) region. Thus, these compounds appear yellowish to the eye—i.e., the perceived colour is complementary to the absorbed colour. Progressive absorption into the visible region gives orange (430–480 nm), red (480–550 nm), violet (550–600 nm), and blue (600–700 nm); absorption at 400–450 and 580–700 nm gives green. Black objects absorb all visible light; white objects reflect all visible light. The brilliance of a colour increases with decreasing bandwidth. Synthetic dyes tend to give brilliant colours. This undoubtedly led to their rapid rise in popularity because, by comparison, natural dyes give rather drab, diffuse colorations.
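The band-to-colour correspondence above can be tabulated directly. The following is an illustrative Python sketch (the function and table names are invented) mapping a single absorption maximum to the perceived complementary colour; green is omitted because, per the text, it requires absorption in two separate bands.

```python
# Sketch: map an absorption maximum (nm) to the perceived (complementary)
# colour, using the wavelength bands listed above. Boundaries are treated
# as half-open intervals [lo, hi) for simplicity.
BANDS = [
    (400, 430, "yellowish"),  # absorption in the violet region
    (430, 480, "orange"),
    (480, 550, "red"),
    (550, 600, "violet"),
    (600, 700, "blue"),
]

def perceived_colour(absorbed_nm: float) -> str:
    for lo, hi, colour in BANDS:
        if lo <= absorbed_nm < hi:
            return colour
    return "outside the visible mapping above"

print(perceived_colour(500))  # absorption in the 480-550 nm band
```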

General features of dyes and dyeing

In dyeing operations, the dye must become closely and evenly associated with a specific material to give level (even) colouring with some measure of resistance to moisture, heat, and light—i.e., fastness. These factors involve both chemical and physical interactions between the dye and the fabric. The dyeing process must place dye molecules within the microstructure of the fibre. The dye molecules can be anchored securely through the formation of covalent bonds that result from chemical reactions between substituents on the molecules of the dye and the fibre. These are the reactive dyes, a type introduced in 1956. Many dye-fibre interactions, however, do not involve covalent bond formation. While some dyeing methods have several steps, many dyes can be successfully applied simply by immersing the fabric in an aqueous solution of the dye; these are called direct dyes. In other cases, auxiliary compounds and additional steps are required to obtain the desired fastness. In any event, questions arise as to how and how well the dye is retained within the fibre. The structure of the fibres from which the common fabrics are made provides some guidance for the selection of useful colorants.

Fibre structure

Fibre molecules are polymeric chains of repeating units of five major chemical types. Wool, silk, and leather are proteins, which are polymers of α-amino acids, RCH(NH2)COOH (where R is an organic group). Each chain consists of a series of amide linkages (―CO―NH―) separated by one carbon to which the R group, characteristic of each amino acid, is bonded. These groups may contain basic or acidic substituents, which can serve as sites for electrostatic interactions with dyes having, respectively, acidic or basic groups.

Polyamides (nylons) are synthetic analogs of proteins having the amide groups separated by hydrocarbon chains, (CH2)n, and can be made with an excess of either terminal amino (―NH2) or terminal carboxyl groups (―COOH). These and the amide groups are sites for polar interactions with dyes. Polyester poly(ethylene terephthalate), or PET, is the main synthetic fibre, accounting for more than 50 percent of worldwide production of synthetic fibres. The terminal hydroxyl groups (―OH) can serve as dyeing sites, but PET is difficult to dye because the individual chains pack tightly. Acrylics have hydrocarbon chains bearing polar groups, mainly nitriles, made by copolymerization of acrylonitrile (at least 85 percent) with small amounts (10–15 percent) of components such as acrylamide and vinyl acetate to produce a fibre with improved dyeability. Fibres with 35–85 percent acrylonitrile are termed modacrylics.

Cellulose is found in plants as a linear polymer of a few thousand glucose units, each with three free hydroxyl groups that can be extensively hydrogen-bonded. Cotton fibres are essentially pure cellulose. Wood contains 40–50 percent cellulose that is isolated as chemical cellulose by a process known as pulping. In fibre manufacture, the insolubility of cellulose caused processing problems that were overcome by the development of the viscose process, which produces regenerated cellulose with 300–400 glucose units. This semisynthetic cellulosic is rayon, which is very similar to cotton. The semisynthetic acetate rayon, produced by acetylation of chemical cellulose, has 200–300 glucose units with 75 percent of the hydroxyl groups converted to acetates. The smaller number of free hydroxyls precludes extensive hydrogen bonding, and dyes differing from those for cotton and rayon are needed.

Fibre porosity

Fibres are made by various spinning techniques that produce bundles of up to several hundred roughly aligned strands of polymer chains with length-to-diameter ratios in the thousands. For the dyeing process, an important characteristic of fibres is their porosity. There is a huge number of submicroscopic pores aligned mainly on the longitudinal axis of the fibres such that there are roughly 10 million pores in a cross-section of a normal fibre. The internal surface area therefore is enormous—about 45,000 square metres per kilogram (5 acres per pound) for cotton and wool—some thousand times greater than the outer surface area. To produce deep coloration, a layer of 1,000–10,000 dye molecules in thickness is needed. Upon immersion in a dyebath, the fabric absorbs the aqueous dye solution, and the dye molecules can move into pores that are sufficiently large to accommodate them. Although many pores may be too small, there is an ample number of adequate size to give satisfactory depths of colour.
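As a quick arithmetic check, the quoted 45,000 square metres per kilogram does correspond to roughly 5 acres per pound:

```python
# Unit-conversion check of the internal-surface-area figure quoted above:
# 45,000 m^2/kg expressed in acres per pound.
M2_PER_KG = 45_000
KG_PER_LB = 0.45359237      # exact definition of the pound
M2_PER_ACRE = 4046.8564224  # exact definition of the international acre

acres_per_lb = M2_PER_KG * KG_PER_LB / M2_PER_ACRE
print(f"{acres_per_lb:.1f} acres per pound")  # prints "5.0 acres per pound"
```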

Dye retention

Various attractive forces play a role in the retention of particular dyes on specific fibres. These include polar or ionic attractions, hydrogen bonding, van der Waals forces, and solubilities. The affinity of a dye for a given substrate through such interactions is termed its substantivity. Dyes can be classified by their substantivity, which depends, in part, on the nature of the substituents in the dye molecule.

Attractive ionic interactions are operative in the case of anionic (acid) and cationic (basic) dyes, which have negatively and positively charged groups, respectively. These charged groups are attracted to sites of opposite polarity on the fibre. Mordant dyes are a related type. In the mordanting process, the fabric is pretreated with metallic salts, which complex with polar groups of the fibre to form more highly polarized sites for better subsequent interaction with the dye molecules.

Nonionic groups can also be involved in attractive interactions. Since the electronegativities of oxygen, nitrogen, and sulfur are greater than those of carbon and hydrogen, when these elements are part of a compound, the electron densities at their atomic sites are enhanced and those at neighbouring atoms are decreased. An O―H bond is therefore polar, and an attractive interaction between the hydrogen of one bond and the oxygen of a neighbouring bond can occur. Hydrogen bonding may be exhibited by any weakly acidic hydrogen. Although there is no chemical bond, strong attractive forces are involved. Phenolic hydroxyl groups are more highly polarized and, in dyes, can act as auxochromes and as good hydrogen-bonding sites.

Similar, but weaker, attractive forces are operative between other closely spaced polarized groups. These are the van der Waals interactions, which are effective for dye adsorption if the separation between molecules is small. Such interactions are particularly important for cellulosics, which tend to have relatively large planar areas to which dye molecules are favourably attracted.

Although most dyes are applied as aqueous solutions, the finished goods should not be prone to dye loss through washing or other exposure to moisture. An exception is in the common use of highly soluble dyes to identify different fibres for weaving processes. These are called fugitive tints and are readily removed with water.

Dyeing techniques

Direct dyeing

Direct, or substantive, dyes are applied to the fabric from a hot aqueous solution of the dye. Under these conditions, the dye is more soluble and the wettability of natural fibres is increased, improving the transport of dye molecules into the fabric. In many cases, the fabric is pretreated with metallic salts or mordants to improve the fastness and to vary the colour produced by a given dye.

Disperse dyeing

Penetration of the fabric by the dye is more difficult with the hydrophobic synthetic fibres of acetate rayon, PET, and acrylics, so an alternate dyeing technique is needed. These synthetic fabrics are dyed by immersion in an aqueous dispersion of insoluble dyes, whereby the dye transfers into the fibre and forms a solid solution. These disperse dyes were originally developed for acetate rayon but became the fastest growing dye class in the 1970s because of the rapid increase in world production of PET, which can be dyed only with small disperse dyes. Transfer into the fibre from a boiling dye bath is aided by carriers (e.g., benzyl alcohol or biphenyl). The transfer mechanism is unclear, but it appears that the fibres loosen slightly to permit dye entry and, on cooling, revert to the original tightly packed structure. Dyeing at higher temperatures (120–130 °C [248–266 °F]) under pressure avoids the need for carriers. With the Thermosol process, a pad-dry heat technique developed by the DuPont Company, temperatures of 180–220 °C (356–428 °F) are employed with contact times on the order of a minute.
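The paragraph above quotes each dyeing temperature in both scales. The pairings can be verified with the standard Celsius-to-Fahrenheit formula, sketched here in Python (not part of the original article):

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Pressure-dyeing and Thermosol-process temperatures quoted in the text:
for c in (120, 130, 180, 220):
    print(f"{c} °C = {c_to_f(c):.0f} °F")
# 120 °C = 248 °F, 130 °C = 266 °F, 180 °C = 356 °F, 220 °C = 428 °F
```

The same formula confirms the other conversions in this article, e.g. the 580 °C sublimation point of copper phthalocyanine is 1,076 °F.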

Vat dyeing

Conversion of a soluble species to an insoluble dye after transfer to the fibre is the basis of vat dyeing, one of the ancient methods. Indigo is insoluble but is readily reduced to a soluble, colourless form, leucoindigo. After treatment in a leucoindigo bath, the fabric becomes coloured upon exposure to air; atmospheric oxygen regenerates indigo by oxidation.

In contrast to leucoindigo, indigo has no affinity for cotton. Water-insoluble aggregates of indigo molecules larger than the fibre pores are firmly trapped within the fabric. This process was traditionally done outdoors in large vessels or vats and, hence, was named vat dyeing, and the term is still used for this procedure.

Azo dyeing techniques

The discovery of the azo dyes led to the development of other dyeing techniques. Azo dyes are formed from an azoic diazo component and a coupling component. The first compound, an aniline, gives a diazonium salt upon treatment with nitrous acid; this salt reacts with the coupling component to form a dye, many of which are used as direct and disperse colorants. These dyes can be generated directly on the fabric. The process in which the fabric is first treated with a solution of the coupling component and then placed in a solution of the diazonium salt to form the dye on the fabric was patented in 1880. Alternatively, the fabric can be treated with a solution of the diazo component before diazotization, followed by immersion in a solution of the coupling component; this process was patented in 1887. These are ingrain dyeing methods. Because many azo dyes are substituted anilines, they can be transformed to ingrain dyes for improved fastness after application as direct or, in some cases, disperse dyes to cotton and acetate rayon, respectively.

Reactive dyeing

Reactive dyeing directly links the colorant to the fibre by formation of a covalent bond. For years, the idea of achieving high wet fastness for dyed cotton by this method was recognized, but early attempts employed conditions so drastic that partial degradation of the fibres occurred. Studies at a Swiss dyeing company called Ciba in the 1920s gave promising results with wool using colorants having monochlorotriazine groups. (Triazines are heterocyclic rings containing three carbons and three nitrogens within the ring.) However, there was little incentive for further development because the available dyes were satisfactory. These new dyes, however, were sold as direct dyes for many years without recognition of their potential utility as dyes for cotton.

In 1953 British chemists Ian Rattee and William Stephen at ICI in London found that dyes with dichlorotriazinyl groups dyed cotton under mild alkaline conditions with no fibre degradation. Thus, a major breakthrough for the dye industry was made in 1956 when ICI introduced their Procion MX dyes—reactive dyes anchored to the fibre by covalent bonds—100 years after the discovery of the first commercial synthetic dye by Perkin. The generation and subsequent bonding of these three new dyes (a yellow, a red, and a blue) with fibres have a common basis, namely, the reactivity of chlorine on a triazine ring. It is readily displaced by the oxygen and nitrogen of ―OH and ―NH2 groups. Reaction of a dye bearing an amino group with cyanuryl chloride links the two through nitrogen to form the reactive dye. A second chlorine is displaced (in the dyeing step) by reaction with a hydroxyl group of cotton or an amino group in wool. A key feature of cyanuryl chloride is the relative reactivity of the chlorines: only one chlorine reacts at 0–5 °C (32–41 °F), the second reacts at 35–50 °C (95–122 °F), and the third reacts at 80–85 °C (176–185 °F). These differences were exploited in the development of series of related reactive dyes.

The introduction of the Procion MX dyes triggered vigorous activity at other companies. At the German company Hoechst Aktiengesellschaft, a different approach had been under study, and in 1958 they introduced their Remazol dyes. These dyes are the sulfate esters of hydroxyethylsulfonyl dyes, which, on treatment with mild base, generate the vinylsulfone group. This group, in turn, reacts with cellulose to form a unique dye-fibre bond.

In the Procion T series, marketed by ICI in 1979, particularly for dyeing cotton in polyester and cotton blends by the Thermosol process (see above Disperse dyeing), the reactive dye is bonded through a phosphonate ester. The introduction of reactive dyeing not only provided a technique to overcome inadequacies of the traditional methods for dyeing cotton but also vastly increased the array of colours and dye types that could be used for cotton, since almost any chromogen can be converted to a reactive dye.

Classifications of dyes

Different dyes are required to colour the five major types of fibres, but the fact that thousands of dyes are in use may seem excessive. Other factors beyond the basic differences in the five types of fibre structures contribute to problems a dyer encounters. Fabrics made from blends of different fibres are common (65/35 and 50/50 PET/cotton, 40/40/20 PET/rayon/wool, etc.), and there is enormous diversity in the intended end uses of the dyed fabrics.

Dyes can be classified by chemical structure or by area and method of application because the chemical class does not generally restrict a given dye to a single coloristic group. Commercial colorants include both dyes and pigments, groupings distinguishable by their mode of application. In contrast to dyes, pigments are practically insoluble in the application medium and have no affinity for the materials to which they are applied. The distinction between dyes and pigments is somewhat hazy, however, since organic pigments are closely related structurally to dyes, and there are dyes that become pigments after application (e.g., vat dyes).

The vast array of commercial colorants is classified in terms of structure, method of application, and colour in the Colour Index (C.I.), which is edited by the Society of Dyers and Colourists and by the American Association of Textile Chemists and Colorists. The third edition of the index lists more than 8,000 colorants used on a large scale for fibres, plastics, printing inks, paints, and liquids. In part 1, colorants are listed by generic name in classes (e.g., acidic, basic, mordant, disperse, direct, etc.) and are subdivided by colour. Information on application methods, usage, and other technical data such as fastness properties are included. Part 2 provides the chemical structures and methods of manufacture, and part 3 lists manufacturers’ names and an index of the generic and commercial names. Another edition of the Colour Index, Fourth Edition Online, contains information on pigments and solvent dyes (11,000 products under 800 C.I. classifications) not published in other parts of the Colour Index.

The Colour Index provides a valuable aid with which to penetrate the nomenclature jungle. Hundreds of dyes were well known before the first edition of the Colour Index was published in 1924, and their original or classical names are still in wide use. The classical and commercial names for a specific colorant are included in the Colour Index. Each C.I. generic name covers all colorants with the same structure, but these are not necessarily identical products in terms of crystal structure, particle size, or additive or impurity content. For specific applications, crystal structure can be important for pigments, while particle size is significant for pigments, disperse dyes, and vat dyes. While there are thousands of C.I. generic names, each manufacturer can invent a trade name for a given colorant, and, consequently, there are more than 50,000 names of commercial colorants.

Standardization tests and identification of dyes

Colourfastness tests are published by the International Organization for Standardization. For identification purposes, the results of systematic reaction sequences and solubility properties permit determination of the class of dye, which, in many cases, may be all that is required. With modern instrumentation, however, a variety of chromatographic and spectroscopic methods can be utilized to establish the full chemical structure of the dye, information that may be essential to identifying coloured material present in very small amounts.

Development of synthetic dyes:

Triphenylmethane dyes

Perkin’s accidental discovery of mauve as a product of dichromate oxidation of impure aniline motivated chemists to examine oxidations of aniline with an array of reagents. Sometime between 1858 and 1859, French chemist François-Emmanuel Verguin found that reaction of aniline with stannic chloride gave a fuchsia, or rose-coloured, dye, which he named fuchsine. It was the first of the triphenylmethane dyes and triggered the second phase of the synthetic dye industry. Other reagents were found to give better yields, leading to vigorous patent activity and several legal disputes. Inadvertent addition of excess aniline in a fuchsine preparation resulted in the discovery of aniline blue, a promising new dye, although it had poor water solubility. From the molecular formulas of these dyes, Hofmann showed that aniline blue was fuchsine with three more phenyl groups (―C6H5), but the chemical structures were still unknown. In a careful study, the British chemist Edward Chambers Nicholson showed that pure aniline produced no dye, a fact also discovered at a Ciba plant in Basel, Switzerland, that was forced to close because the aniline imported from France no longer gave satisfactory yields. Hofmann showed that toluidine (CH3C6H4NH2) must be present to produce these dyes. All these dyes, including mauve, were prepared from aniline containing unknown amounts of toluidine.

Furthermore, all the dyes were found to be mixtures of two major components. The triphenylmethane structures were established in 1878 by German chemist Emil Fischer, who showed that the methyl carbon of p-toluidine becomes the central carbon bonded to three aryl groups. Fuchsine was found to be a mixture of pararosaniline, C.I. Basic Red 9, and a homolog having a methyl group (―CH3) ortho to one of the amino groups (―NH2); its classical name is magenta (C.I. Basic Violet 14). Each nitrogen in aniline blue bears a phenyl group and each in crystal violet is dimethylated. Malachite green differs from crystal violet by having one unsubstituted aryl ring. It is not surprising that some of these early synthetic dyes had several different names. For example, malachite green was also known as aniline green, China green, and benzaldehyde green; it is C.I. Basic Green 4 (C.I. 42000) and has more than a dozen other trade names.

Nicholson had independently discovered aniline blue and found that treatment with sulfuric acid greatly increases its water solubility. This process, in which a sulfonic acid group (―SO3H) is added onto an aryl ring, was found to be applicable to many dyes and became a standard method for enhancing water solubility. Most of the few hundred triarylmethane dyes listed in the Colour Index were synthesized before 1900. In some, one phenyl ring is replaced with a naphthyl group, whose substituents include NH2, OH, SO3Na, COOH, NO2, Cl, and alkyl groups. While most substituents act as auxochromes, sulfonates are present only to increase the solubility of the dye, which is also improved by amino groups, hydrochlorides thereof, and hydroxyl groups. Many vat dyes have quinonoid groups that are reduced to soluble, colourless hydroquinones in the vatting operation and then oxidized back to the original dye. Similar reactions are utilized in the developing process in colour photography.

Anthraquinone dyes

The recognition of carbon’s tetravalency (1858) and the structure for benzene (1865) proposed by the German chemist Friedrich August Kekulé led to the structural elucidation of aromatic compounds and the rational development of the dye industry. The first example was the elucidation of the alizarin structure in 1868 (see above History of dyes: Natural dyes), followed a year later by its synthesis. Preparations of derivatives gave a host of anthraquinone dyes that today constitute the second largest group of commercial colorants. After 1893 sulfonated anthraquinones provided a group of bright, fast dyes for wool; the unsulfonated analogs are disperse dyes for synthetic fibres. In 1901 German chemist Rene Bohn obtained a brilliant blue vat dye with high fastness properties from experiments expected to produce a new substituted indigo. BASF, the leading manufacturer of vat dyes, marketed Bohn’s dye as Indanthren Blue RS; it was later given the chemical name indanthrone. Related compounds, used primarily as pigments, span the colour range from blue to yellow.

Xanthene and related dyes

In 1871 the German chemist Adolph von Baeyer discovered a new dye class closely related to the triphenylmethane series and also without natural counterparts. Heating phthalic anhydride with resorcinol (1,3-dihydroxybenzene) produced a yellow compound he named fluorescein, because aqueous solutions show an intense fluorescence. Although not useful as a dye, its value as a marker for accidents at sea and as a tracer of underground water flow is well established. Phthalic anhydride and phenol react to give phenolphthalein, which is similar in structure to fluorescein but lacks the oxygen linking two of the aryl rings. Since phenolphthalein is colourless in acid and intensely red in base, it is commonly used as a pH indicator in titrations and also as the active ingredient in mild laxatives, a property said to have been discovered after it was used to enhance the colour of wine. While these compounds lack fastness, some derivatives are useful dyes. Tetrabromofluorescein, or eosin, is a red dye used for paper, inks, and cosmetics; its tetraiodo analog, erythrosine, is a red food dye (see below Food dyes).

Parent compounds of heterocyclic dye classes related to the xanthene class.
Many other useful dyes related to these xanthenes also were prepared in the late 1880s. Initially, oxazines and thiazines were used for dyeing silk, but a lack of good lightfastness led to their disappearance from the market. In the 1950s, however, it was found that their lightfastness on acrylic fibres is surprisingly high, and further studies also revealed that triphenylmethane dyes such as malachite green behave similarly. This accidental discovery led to their return as industrial products. Methylene blue is widely used as a biological stain, as first noted by German medical scientist Paul Ehrlich. Its derivative with a nitro group ortho to sulfur is methylene green, which has excellent lightfastness on acrylics. Some thiazines—namely, those with X = NR but lacking the ―N(CH3)2 groups—are antihistamines. A number of oxazines and acridines are good leather dyes. Mauve is an azine but is of only historical interest; only one example of this class, Safranine T, is used.

The oldest, most commonly used acid-base indicator, litmus, is a mixture of several oxazine derivatives, obtained by treating various species of lichens with ammonia, potash, and lime. Archil, orchil, and orseille are similar mixtures of dyes, obtained from lichens by different methods; cudbear is the common name for the lichen Ochrolechia tartarea and the dye therefrom.

Azo dyes

Nitrous acid (HONO) was one of the reagents tried in the early experiments with aniline, and in 1858 the German chemist Johann Peter Griess obtained a yellow compound with dye properties. Although used only briefly commercially, this dye sparked interest in the reaction that became the most important process in the synthetic dye industry. The reaction between nitrous acid and an arylamine yields a highly reactive intermediate; the reaction of this intermediate with phenols and aryl amines is the key step in the synthesis of more than 50 percent of the commercial dyes produced today.

The chemistry involved in these reactions was unclear until 1866, when Kekulé proposed correctly that the products have aryl rings linked through a ―N=N― unit, called an azo group; hence, the dyes containing this functional group are termed the azo dyes. The reaction of nitrous acid with Ar―NH2 (where Ar represents an aryl group) gives Ar―N2+, an aryldiazonium ion, which readily couples with anilines or phenols to furnish azo compounds. An early commercial success was chrysoidine, which had been synthesized by coupling aniline to m-phenylenediamine; it was the first azo dye for wool and has been in use since 1875.

Diazotization of both amino groups of m-phenylenediamine followed by coupling with more of the diamine gives Bismarck brown, a major component in the first successful disazo dye—i.e., a dye with two azo groups. In 1884 a conjugated disazo dye, Congo red, made by coupling 4-sulfo-1-naphthylamine with bisdiazotized benzidine, was found to dye cotton by simple immersion of the fabric in a hot aqueous bath of the dye. Congo red was the first dye to be known as a direct dye; today it is used as a pH indicator.

Until the 1970s derivatives from methyl- or methoxyl-substituted benzidines constituted the major group of disazo dyes, but they are no longer produced in many countries because they are carcinogens.

The discovery of the azo dyes led to the development of ingrain dyeing, whereby the dye is synthesized within the fabric (see above Dyeing techniques: Azo dyeing techniques). Since the process was done at ice temperature, some dyes were called ice colours. In 1912 it was found that 2-hydroxy-3-naphthanilide (Naphtol AS, from the German Naphtol Anilid Säure) forms a water-soluble anion with affinity for cotton, a major step in the development of the ingrain dyes. Its reaction with unsulfonated azoic diazo components on the fabric gives insoluble dyes with good wetfastness; with Diazo Component 13, Fast Scarlet R is formed, a member of the Naphtol AS series.

Many arylamides have been employed as coupling components, but Naphtol AS is the most important. Since the dye is formed in the dyeing process, the coupler and the diazonium component—as a free base or diazonium salt—are supplied to dyers. More than 100 of each are listed in the Colour Index, so the number of possible combinations is great, but the number of those known to give useful colorants with adequate fastness is much smaller. Many are insoluble in water and can be utilized as pigments.

The ―OH and ―NH2 groups direct the coupling to the ortho and para sites, and the directive effects are pH-dependent. In alkaline media, coupling is directed by the ―OH groups, whereas ―NH2 groups direct in weakly acidic media. H-acid (8-amino-1-naphthol-3,6-disulfonic acid) has both functional groups and can be selectively coupled to two diazo components in a two-step process. C.I. Acid Black 1 is formed by coupling first to diazotized p-nitroaniline in weakly acidic solution and then to diazotized aniline in alkaline solution.

Azo dyes became the most important commercial colorants because of their wide colour range, good fastness properties, and tinctorial strength (colour density), which is twice that of the anthraquinones, the second most important group of dyes. Azo dyes are easily prepared from many readily available, inexpensive compounds and meet the demands of a wide range of end uses. Cost advantages tend to offset the fact that these are less brilliant and less lightfast than the anthraquinones.

Reactive dyes

The first examples of reactive dyes utilized monoazo systems for bright yellow and red shades. Coupling aniline to H-acid gave the azo dye used in the first Procion Red (C.I. Reactive Red 1), and anthraquinone dyes were used to obtain bright blue shades. An early example in the Remazol series is Remazol Brilliant Blue R (C.I. Reactive Blue 19).

Dichlorotriazinyl dyes are produced by more than 30 dye manufacturers, since the early patents on these dyes have expired. Replacement of one of the chlorines in a dichloro-s-triazinyl dye (e.g., C.I. Reactive Red 1) with a noncoloured group results in dye series (Procion H and Procion P) that can be applied at 80 °C (176 °F). These are analogous to the direct dyes Ciba produced in the 1920s and reintroduced in the late 1950s as Cibacron reactive dyes. Alternatively, the second chlorine can be replaced with another dye. In such cases, the triazinyl grouping acts as a chromophoric block, a feature that Ciba utilized in the 1920s to produce direct green dyes by the sequential attachment of blue and yellow chromogens.

In practice, not all of the dye is transferred to the fabric. Reaction with water (hydrolysis) in the dyebath competes with the dyeing reaction to reduce the level of fixation (transfer of the dye to the fabric), which can vary from 30 to 90 percent. Considerable effort has been directed toward achieving 100 percent fixation, which has led to the introduction of dyes having two reactive groups—for example, Procion Red H-E3B (C.I. Reactive Red 120), Remazol Black B (C.I. Reactive Black 5), and Remazol Brilliant Red FG (C.I. Reactive Red 227). The azo-dye moiety in each is derived from H-acid.
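The competition between fixation and hydrolysis can be pictured with a simple competing first-order rate model. This Python sketch is purely illustrative—the rate constants are assumed values, not measured dyebath kinetics:

```python
def fixation_fraction(k_fix: float, k_hyd: float) -> float:
    """Fraction of reactive dye bonded to fibre when fixation and
    hydrolysis are treated as competing first-order reactions.
    k_fix and k_hyd are illustrative rate constants, not measured values."""
    return k_fix / (k_fix + k_hyd)

# A dye reacting with cellulose nine times as fast as with water
# would fix 90 percent; equal rates would fix only 50 percent.
print(fixation_fraction(9.0, 1.0))  # → 0.9
print(fixation_fraction(1.0, 1.0))  # → 0.5
```

Under this picture, the 30–90 percent fixation range quoted above corresponds to dyeing reactions running from roughly half as fast to nine times as fast as hydrolysis, which is why dyes with two reactive groups (two chances to bond) push fixation higher.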

Although azo chromogens are most commonly used (about 80 percent of the time), reactive dyes can contain almost any chromogen; thus, a vast array of colours is available. With the introduction of reactive dyes, cotton could finally be dyed in bright shades with monoazo dyes for yellows to reds, anthraquinones for blues, and copper phthalocyanines for bright turquoise colours.

Phthalocyanine compounds

Phthalocyanines, the most important chromogens developed in the 20th century, were introduced in 1934. They are analogs of two natural porphyrins: chlorophyll and hemoglobin. Phthalocyanine was discovered in 1907 and its copper salt in 1927, but their potential as colorants was not immediately recognized. Identification of a brilliant blue impurity in an industrial preparation of phthalimide by ICI awakened interest among chemists. Phthalocyanines became commercially available in the 1930s, with the parent compound and its copper complex marketed as Monastral Fast Blue B and Monastral Fast Blue G, respectively.

Of several known metal complexes, copper phthalocyanine (CuPc) is the most important. Although it is used mainly as a pigment, it can be formed directly on cotton. Although not useful for PET and acrylics, some complexes are utilized with nylon. Halogenation of the benzene rings alters the shade to bluish-green and green. In an important phthalocyanine, Monastral Fast Green G (C.I. Pigment Green 7), all 16 hydrogens on the four benzo rings are replaced with chlorine. Water-soluble analogs for use as dyes were developed later by the introduction of sulfonic acid groups. Disulfonation of the copper complex gave a direct dye for cotton, Chlorantine Fast Turquoise Blue GLL (C.I. Direct Blue 86), the first commercial phthalocyanine dye. Reaction of sulfonyl derivatives with amines yields organic-soluble dyes in wide use in lacquers and inks. Solubilized phthalocyanine reactive dyes are used for bright turquoise shades that cannot be obtained with either azo or anthraquinone chromogens. After treatment of the tetrasulfonyl derivative with one equivalent of a diamine, the residual sulfonyl groups are hydrolyzed and the reactive group (e.g., cyanuryl chloride) added. Condensation of some of the chlorosulfonyl groups with ammonia before hydrolysis yields dyes with brighter hues (e.g., C.I. Reactive Blue 7).

These colorants display strong, bright blue to green shades with remarkable chemical stability. Copper phthalocyanine sublimes unchanged at 580 °C (1,076 °F) and dissolves in concentrated sulfuric acid without change. These compounds exhibit excellent lightfastness, and their properties are in striking contrast to those of natural pigments (i.e., hemoglobin and chlorophyll) that are destroyed by intense light or heat and mild chemical reagents. The high stability, strength, and brightness of the phthalocyanines render them cost-effective, illustrated by the wide use of blue and green labels on many products.

Quinacridone compounds

A second group of pigments developed in the 20th century was the quinacridone compounds. Quinacridone itself was introduced in 1958. Its seven crystalline forms range in colour from yellowish-red to violet; the violet β and red γ forms are used as pigments, both classified as C.I. Pigment Violet 19.

The dichloro and dimethyl analogs, substituted on each outer ring, are commercial pink and bluish-red pigments.

Fluorescent brighteners

Raw natural fibres, paper, and plastics tend to appear yellowish because of weak light absorption near 400 nm by certain peptides and natural pigments in wool and silk, by natural flavonoid dyes in cellulose, and by minor decomposition products in plastics. Although bleaching can reduce this tinting, it must be mild to avoid degradation of the material. A bluing agent can mask the yellowish tint to make the material appear whiter, or the material can be treated with a fluorescent compound that absorbs ultraviolet light and weakly emits blue visible light. These compounds, also called “optical brighteners,” are not dyes in the usual sense and, in fact, were introduced in 1927 by banknote printers to protect against forgery. Today, however, the major industrial applications are as textile finishers, pulp and paper brighteners, and additives for detergents and synthetic polymers. Many of these fluorescent brighteners contain triazinyl units and water-solubilizing groups, as, for example, Blankophor B.

Food dyes

Upon their discovery, synthetic dyes rapidly replaced many metallic compounds used to colour foods. The advantages of synthetic dyes over natural colorants—such as brightness, stability, colour range, and lower cost—were quickly appreciated, but the recognition of some potentially hazardous effects was slower. Opinion remains widely divided on this issue, since few countries agree on which dyes are safe. For example, no food dyes are used in Norway and Sweden, whereas 16 are approved in the United Kingdom, although some of these dyes have been linked with adverse health effects. Dozens were used in the United States prior to 1906, when a limit of seven was set. This number had increased to 15 by 1938—with certification of purity required by law—and to 19 in 1950. Today, seven are certified, including erythrosine (tetraiodofluorescein), indigotine (5,5′-disulfonatoindigo), two triphenylmethanes (Fast Green FCF and Brilliant Blue FCF), and three azo dyes (Sunset Yellow FCF, Allura Red, and Tartrazine).

The azo dye amaranth was banned in 1976 after a long court battle but is still approved in many countries—including Canada, whose list includes one other azo dye, Ponceau SX, which is banned in the United States.

Dye-industry research

Since the 1970s the primary aim of colorant research has shifted from the development of new dye structures to optimizing the manufacture of existing dyes, devising more economical application methods, and developing new areas of application, such as liquid crystal displays, lasers, solar cells, and optical data discs, as well as imaging and other data-recording systems.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1942 2023-10-26 00:11:36

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1946) Campus


A campus is an area of land that contains the main buildings of a university or college.


A campus is by tradition the land on which a college or university and related institutional buildings are situated. Usually a college campus includes libraries, lecture halls, residence halls, student centers or dining halls, and park-like settings.

A modern campus is a collection of buildings and grounds that belong to a given institution, either academic or non-academic. Examples include the Googleplex and the Apple Campus.


The word derives from a Latin word for "field" and was first used to describe the large field adjacent to Nassau Hall of the College of New Jersey (now Princeton University) in 1774. The field separated Princeton from the small nearby town.

Some other American colleges later adopted the word to describe individual fields at their own institutions, but "campus" did not yet describe the whole university property. A school might have one space called a campus, another called a field, and still another called a yard.


The tradition of a campus began with the medieval European universities where the students and teachers lived and worked together in a cloistered environment. The notion of the importance of the setting to academic life later migrated to America, and early colonial educational institutions were based on the Scottish and English collegiate system.

The campus evolved from the cloistered model in Europe to a diverse set of independent styles in the United States. Early colonial colleges were all built in proprietary styles, with some contained in single buildings, such as the campus of Princeton University or arranged in a version of the cloister reflecting American values, such as Harvard's. Both the campus designs and the architecture of colleges throughout the country have evolved in response to trends in the broader world, with most representing several different contemporary and historical styles and arrangements.


The meaning expanded to include the whole institutional property during the 20th century, with the old meaning persisting into the 1950s in some places.

Office buildings

Sometimes the lands on which company office buildings sit, along with the buildings, are called campuses. The Microsoft Campus in Redmond, Washington, is a good example of this usage. Hospitals and even airports sometimes use the term to describe the territory of their respective facilities.


The word campus has also been applied to European universities, although some such institutions (in particular, "ancient" universities such as Bologna, Padua, Oxford and Cambridge) are characterized by ownership of individual buildings in university town-like urban settings rather than sprawling park-like lawns in which buildings are placed.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1943 2023-10-27 00:23:10

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1947) Cafeteria


A cafeteria is a restaurant, especially one for staff or workers, where people collect their meals themselves and carry them to their tables.


A Cafeteria is a self-service restaurant in which customers select various dishes from an open-counter display. The food is usually placed on a tray, paid for at a cashier’s station, and carried to a dining table by the customer. The modern cafeteria, designed to facilitate a smooth flow of patrons, is particularly well adapted to the needs of institutions—schools, hospitals, corporations—attempting to serve large numbers of people efficiently and inexpensively. In addition to providing quick service, the cafeteria requires fewer service personnel than most other commercial eating establishments.

Early versions of self-service restaurants began to appear in the late 19th century in the United States. In 1891 the Young Women’s Christian Association (YWCA) of Kansas City, Missouri, established what some food-industry historians consider the first cafeteria. This institution, founded to provide low-cost meals for working women, was patterned after a Chicago luncheon club for women where some aspects of self-service were already in practice. Cafeterias catering to the public opened in several U.S. cities in the 1890s, but cafeteria service did not become widespread until shortly after the turn of the century, when it became the accepted method of providing food for employees of factories and other large businesses.


A cafeteria, sometimes called a canteen outside the U.S. and Canada, is a type of food service location in which there is little or no waiting staff table service, whether a restaurant or within an institution such as a large office building or school; a school dining location is also referred to as a dining hall or lunchroom (in American English). Cafeterias are different from coffeehouses, although the English term came from the Spanish term cafetería, which carries the same meaning.

Instead of table service, there are food-serving counters/stalls or booths, either in a line or allowing arbitrary walking paths. Customers take the food that they desire as they walk along, placing it on a tray. In addition, there are often stations where customers order food, particularly items such as hamburgers or tacos which must be served hot and can be immediately prepared with little waiting. Alternatively, the patron is given a number and the item is brought to their table. For some food items and drinks, such as sodas, water, or the like, customers collect an empty container, pay at check-out, and fill the container after check-out. Free unlimited second servings are often allowed under this system. For legal reasons (and because of customers' consumption patterns), this system is rarely, if ever, used for alcoholic drinks in the United States.

Customers are either charged a flat rate for admission (as in a buffet) or pay at check-out for each item. Some self-service cafeterias charge by the weight of items on a patron's plate. In universities and colleges, some students pay for three meals a day by making a single large payment for the entire semester.

As cafeterias require few employees, they are often found within a larger institution, catering to the employees or clientele of that institution. For example, schools, colleges and their residence halls, department stores, hospitals, museums, places of worship, amusement parks, military bases, prisons, factories, and office buildings often have cafeterias. Although some of such institutions self-operate their cafeterias, many outsource their cafeterias to a food service management company or lease space to independent businesses to operate food service facilities. The three largest food service management companies servicing institutions are Aramark, Compass Group, and Sodexo.

At one time, upscale cafeteria-style restaurants dominated the culture of the Southern United States, and to a lesser extent the Midwest. There were numerous prominent chains of them: Bickford's, Morrison's Cafeteria, Piccadilly Cafeteria, S&W Cafeteria, Apple House, Luby's, K&W, Britling, Wyatt's Cafeteria, and Blue Boar among them. Currently, two Midwestern chains still exist, Sloppy Jo's Lunchroom and Manny's, which are both located in Illinois. There were also several smaller chains, usually located in and around a single city. These institutions, except K&W, went into a decline in the 1960s with the rise of fast food and were largely finished off in the 1980s by the rise of all-you-can-eat buffets and other casual dining establishments. A few chains—particularly Luby's and Piccadilly Cafeterias (which took over the Morrison's chain in 1998)—continue to fill some of the gap left by the decline of the older chains. Some of the smaller Midwestern chains, such as MCL Cafeterias centered in Indianapolis, are still in business.


Perhaps the first self-service restaurant (not necessarily a cafeteria) in the U.S. was the Exchange Buffet in New York City, which opened September 4, 1885, and catered to an exclusively male clientele. Food was purchased at a counter and patrons ate standing up. This represents the predecessor of two formats: the cafeteria, described below, and the automat.

During the 1893 World's Columbian Exposition in Chicago, entrepreneur John Kruger built an American version of the smörgåsbords he had seen while traveling in Sweden. Emphasizing the simplicity and light fare, he called it the 'Cafeteria' - Spanish for 'coffee shop'. The exposition attracted over 27 million visitors (half the U.S. population at the time) in six months, and it was because of Kruger's operation that the United States first heard the term and experienced the self-service dining format.

Meanwhile, in the mid-scale United States, the chain of Childs Restaurants quickly grew from about 10 locations in New York City in 1890 to hundreds across the U.S. and Canada by 1920. Childs is credited with the innovation of adding trays and a "tray line" to the self-service format, introduced in 1898 at their 130 Broadway location. Childs did not change its format of sit-down dining, however. This soon became the standard design for most Childs Restaurants and ultimately the dominant method for cafeterias.

It has been conjectured that the 'cafeteria craze' started in May 1905, when Helen Mosher opened a downtown L.A. restaurant where people chose their food at a long counter and carried their trays to their tables. California has a long history in the cafeteria format - notably the Boos Brothers Cafeterias, Clifton's, and Schaber's. The earliest cafeterias in California were opened at least 12 years after Kruger's Cafeteria, and Childs already had many locations around the country. Horn & Hardart, an automat format chain (different from cafeterias), was well established in the mid-Atlantic region before 1900.

Between 1960 and 1981, the popularity of cafeterias was overcome by fast food restaurants and fast casual restaurant formats.

Outside the United States, the development of cafeterias can be observed in France as early as 1881 with the passing of the Ferry Law. This law mandated that public school education be available to all children. Accordingly, the government also encouraged schools to provide meals for students in need, thus resulting in the conception of cafeterias or cantine (in French). According to Abramson, before the creation of cafeterias, only some students could bring home-cooked meals and be properly fed in schools.

As cafeterias in France became more popular, their use spread beyond schools and into the workforce. Thus, due to pressure from workers and eventually new labor laws, sizable businesses had to, at minimum, provide established eating areas for their workers. Support for this practice was also reinforced by the effects of World War II when the importance of national health and nutrition came under great attention.

Other names

A cafeteria in a U.S. military installation is known as a chow hall, a mess hall, a galley, a mess deck, or, more formally, a dining facility, often abbreviated to DFAC, whereas in common British Armed Forces parlance, it is known as a cookhouse or mess. Students in the United States often refer to cafeterias as lunchrooms, which also often serve school breakfast. Some school cafeterias in the U.S. and Canada have stages and movable seating that allow use as auditoriums. These rooms are known as cafetoriums or All Purpose Rooms. In some older facilities, a school's gymnasium is also often used as a cafeteria, with the kitchen facility being hidden behind a rolling partition outside non-meal hours. Newer rooms which also act as the school's grand entrance hall for crowd control and are used for multiple purposes are often called the commons.

Cafeterias serving university dormitories are sometimes called dining halls or dining commons. A food court is a type of cafeteria found in many shopping malls and airports featuring multiple food vendors or concessions. However, a food court could equally be styled as a type of restaurant, being more aligned with public, rather than institutionalized, dining. Some institutions, especially schools, have food courts with stations offering different types of food served by the institution itself (self-operation) or a single contract management company, rather than leasing space to numerous businesses. Some monasteries, boarding schools, and older universities refer to their cafeteria as a refectory. Modern-day British cathedrals and abbeys, notably in the Church of England, often use the phrase refectory to describe a cafeteria open to the public. Historically, the refectory was generally only used by monks and priests. For example, although the original 800-year-old refectory at Gloucester Cathedral (the stage setting for dining scenes in the Harry Potter movies) is now mostly used as a choir practice area, the relatively modern 300-year-old extension, now used as a cafeteria by staff and public alike, is today referred to as the refectory.

A cafeteria located within a movie or TV studio complex is often called a commissary.

College cafeteria

In American English, a college cafeteria is a cafeteria intended for college students. In British English, it is often called the refectory. These cafeterias can be a part of a residence hall or in a separate building. Many of these colleges employ their students to work in the cafeteria. The number of meals served to students varies from school to school but is normally around 21 meals per week. As in ordinary cafeterias, a person takes a tray and selects the food they want, but at some campuses, instead of paying money at each meal, students pay beforehand by purchasing a meal plan.

The method of payment for college cafeterias is commonly in the form of a meal plan, whereby the patron pays a certain amount at the start of the semester and details of the plan are stored on a computer system. Student ID cards are then used to access the meal plan. Meal plans can vary widely in their details and are often not necessary to eat at a college cafeteria. Typically, the college tracks students' plan usage by counting the number of predefined meal servings, points, dollars, or buffet dinners. The plan may give the student a certain number of any of the above per week or semester and they may or may not roll over to the next week or semester.

Many schools offer several different options for using their meal plans. The main cafeteria is usually where most of the meal plan is used but smaller cafeterias, cafés, restaurants, bars, or even fast food chains located on campus, on nearby streets, or in the surrounding town or city may accept meal plans. A college cafeteria system often has a virtual monopoly on the students due to an isolated location or a requirement that residence contracts include a full meal plan.
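The plan mechanics described above (a prepaid allowance of servings, with or without week-to-week rollover) can be sketched in a few lines of Python; the class name, numbers, and rules here are hypothetical, since real campus systems vary widely:

```python
class MealPlan:
    """Toy model of a weekly meal plan (hypothetical rules)."""

    def __init__(self, meals_per_week, rollover=False):
        self.meals_per_week = meals_per_week
        self.rollover = rollover
        self.balance = meals_per_week  # servings left this week

    def swipe(self):
        """Use one meal serving; return True if the swipe was accepted."""
        if self.balance > 0:
            self.balance -= 1
            return True
        return False

    def new_week(self):
        """Start a new week; unused servings carry over only if allowed."""
        if self.rollover:
            self.balance += self.meals_per_week
        else:
            self.balance = self.meals_per_week

plan = MealPlan(meals_per_week=14)
plan.swipe()
print(plan.balance)  # 13 servings remain
```

Points- or dollars-based plans work the same way, with the balance decremented by a price instead of a fixed serving.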

Taiwanese cafeteria

There are many self-service bento shops in Taiwan. The store will put the dishes in the self-service area for the customers to pick them up by themselves. After the customers choose, they will go to the cashier to check out; many stores will use the staff to visually check the amount of food when assessing the price, and some stores will use the method of weighing.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1944 2023-10-28 00:16:09

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1948) Magnitude


Magnitude is used in stating the size or extent of something such as a star, earthquake, or explosion.


Magnitude, in astronomy, is the measure of the brightness of a star or other celestial body. The brighter the object, the lower the number assigned as a magnitude. In ancient times, stars were ranked in six magnitude classes, the first magnitude class containing the brightest stars. In 1850 the English astronomer Norman Robert Pogson proposed the system presently in use. One magnitude is defined as a ratio of brightness of 2.512 times; e.g., a star of magnitude 5.0 is 2.512 times as bright as one of magnitude 6.0. Thus, a difference of five magnitudes corresponds to a brightness ratio of 100 to 1. After standardization and assignment of the zero point, the brightest class was found to contain too great a range of luminosities, and negative magnitudes were introduced to spread the range.

Apparent magnitude is the brightness of an object as it appears to an observer on Earth. The Sun’s apparent magnitude is −26.7, that of the full Moon is about −11, and that of the bright star Sirius, −1.5. The faintest objects visible through the Hubble Space Telescope are of (approximately) apparent magnitude 30. Absolute magnitude is the brightness an object would exhibit if viewed from a distance of 10 parsecs (32.6 light-years). The Sun’s absolute magnitude is 4.8.
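Because each magnitude step is a fixed factor of about 2.512, the figures above translate directly into brightness ratios. A short Python sketch, using the magnitudes quoted above:

```python
def brightness_ratio(m_fainter, m_brighter):
    """Factor by which the brighter object outshines the fainter one,
    given their apparent magnitudes (lower magnitude = brighter):
    ratio = 100 ** (difference / 5)."""
    return 100 ** ((m_fainter - m_brighter) / 5)

# Sirius (-1.5) versus the Sun (-26.7):
ratio = brightness_ratio(-1.5, -26.7)
print(f"{ratio:.3g}")  # ~1.2e+10: the Sun appears about 12 billion times brighter
```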

Bolometric magnitude is that measured by including a star’s entire radiation, not just the portion visible as light. Monochromatic magnitude is that measured only in some very narrow segment of the spectrum. Narrow-band magnitudes are based on slightly wider segments of the spectrum and broad-band magnitudes on areas wider still. Visual magnitude may be called yellow magnitude because the eye is most sensitive to light of that colour.


Absolute magnitude

In astronomy, absolute magnitude (M) is a measure of the luminosity of a celestial object on an inverse logarithmic astronomical magnitude scale. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs (32.6 light-years), without extinction (or dimming) of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared among each other on a magnitude scale. For Solar System bodies that shine in reflected light, a different definition of absolute magnitude (H) is used, based on a standard reference distance of one astronomical unit.

Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter).

The more luminous an object, the smaller the numerical value of its absolute magnitude. A difference of 5 magnitudes between the absolute magnitudes of two objects corresponds to a ratio of 100 in their luminosities, and a difference of n magnitudes in absolute magnitude corresponds to a luminosity ratio of {100}^{n/5}. For example, a star of absolute magnitude MV = 3.0 would be 100 times as luminous as a star of absolute magnitude MV = 8.0 as measured in the V filter band. The Sun has absolute magnitude MV = +4.83. Highly luminous objects can have negative absolute magnitudes: for example, the Milky Way galaxy has an absolute B magnitude of about −20.8.

As with all astronomical magnitudes, the absolute magnitude can be specified for different wavelength ranges corresponding to specified filter bands or passbands; for stars a commonly quoted absolute magnitude is the absolute visual magnitude, which uses the visual (V) band of the spectrum (in the UBV photometric system). Absolute magnitudes are denoted by a capital M, with a subscript representing the filter band used for measurement, such as MV for absolute magnitude in the V band.

An object's absolute bolometric magnitude (Mbol) represents its total luminosity over all wavelengths, rather than in a single filter band, as expressed on a logarithmic magnitude scale. To convert from an absolute magnitude in a specific filter band to absolute bolometric magnitude, a bolometric correction (BC) is applied.

Apparent Magnitude

Apparent magnitude (m) is a measure of the brightness of a star or other astronomical object. An object's apparent magnitude depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.

The word magnitude in astronomy, unless stated otherwise, usually refers to a celestial object's apparent magnitude. The magnitude scale dates back to the ancient Roman astronomer Claudius Ptolemy, whose star catalog listed stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined in a way to closely match this historical system.

The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of {100}^{1/5}, or about 2.512. For example, a star of magnitude 2.0 is 2.512 times as bright as a star of magnitude 3.0, 6.31 times as bright as a star of magnitude 4.0, and 100 times as bright as one of magnitude 7.0.

Differences in astronomical magnitudes can also be related to another logarithmic ratio scale, the decibel: an increase of one astronomical magnitude is exactly equal to a decrease of 4 decibels (dB).
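That correspondence follows from the definitions: 5 magnitudes is a factor of 100 in intensity, and a factor of 100 in power is 20 dB, so 1 magnitude is exactly 4 dB. A quick check in Python:

```python
import math

def magnitudes_to_db(delta_m):
    """Convert a magnitude difference to decibels.
    5 magnitudes = factor of 100 in intensity = 20 dB, so 1 mag = 4 dB."""
    intensity_ratio = 100 ** (delta_m / 5)
    return 10 * math.log10(intensity_ratio)

print(round(magnitudes_to_db(1), 6))  # 4.0
print(round(magnitudes_to_db(5), 6))  # 20.0
```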

The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5.

The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system.

Absolute magnitude is a measure of the intrinsic luminosity of a celestial object, rather than its apparent brightness, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (33 light-years; 3.1×{10}^{14} kilometres; 1.9×{10}^{14} miles). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, unqualified references to "magnitude" are understood to mean apparent magnitude.

Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution.

Apparent magnitude is really a measure of illuminance, which can also be measured in photometric units such as lux.

Additional Information

The brightness of a celestial body on a numerical scale for which brighter objects have smaller values. Differences in magnitude are based on a logarithmic scale that matches the response of the human eye to differences in brightness so that a decrease of one magnitude represents an increase in apparent brightness by a factor of 2.512.

The magnitude system began harmlessly enough. Scientists like to classify objects, and the Greek astronomer Hipparchus (160-127 B.C.) grouped the visible stars into six classes based on their apparent brightness. The brightest stars he called first-class stars, the next faintest he called second-class stars, and so on (Seeds 1997). The brightness classes are now known as apparent magnitudes, and are denoted by a lowercase m. The system is counterintuitive in that we naturally associate larger numbers with increasing brightness. The magnitude system uses the reverse philosophy -- a first magnitude star is brighter than a sixth magnitude star. Although the magnitude system can be awkward, it is so deeply ingrained in astronomical literature (and databases) that it would be difficult to abandon the system now.

There were no binoculars or telescopes in the time of Hipparchus. The original classification system was based on naked-eye observations, and was fairly simple. Under the best observing conditions, the average person can discern stars as faint as sixth magnitude. The light pollution in Chicagoland limits our view of the night sky at Northwestern. On a typical clear night in Evanston, you can see 3rd to 4th magnitude stars. On the very best nights, you might be able to see 5th magnitude stars. Although the loss of one or two magnitudes does not seem like much, consider that most of the stars in the sky are dimmer than 5th magnitude. The difference between a light-polluted sky and a dark sky is astonishing.

As instruments were developed which could measure light levels more accurately than the human eye, and telescopes revealed successively dimmer stars, the magnitude system was refined. A subscript may be added to the apparent magnitude to signify how the magnitude was obtained. The most common magnitude is the V magnitude, denoted mV, which is obtained instrumentally using an astronomical V filter. The V filter allows only light near the wavelength of 550 nanometers to pass through it (approximately 505-595 nm). V magnitudes are very close to those perceived by the human eye. (To what range of wavelengths is the human eye sensitive?) The following chart lists the apparent V magnitudes of a few common celestial objects.

While you may perceive one star to be only a few times brighter than another, the intensity of the two stars may differ by orders of magnitude. (Light intensity is defined as the amount of light energy striking each square cm of surface per second.) The eye is a logarithmic detector. While the eye is perceiving linear steps in brightness, the light intensity is changing by multiplicative factors. This is fortunate; if the eye responded linearly instead of logarithmically to light intensity, you would be able to distinguish objects in bright sunlight, but would be nearly blind in the shade! If logarithms are a faint memory, you should peruse a refresher on logs and logarithmic scales before continuing.

Given that the eye is a logarithmic detector, and the magnitude system is based on the response of the human eye, it follows that the magnitude system is a logarithmic scale. In the original magnitude system, a difference of 5 magnitudes corresponded to a factor of roughly 100 in light intensity. The magnitude system was formalized to assume that a factor of 100 in intensity corresponds exactly to a difference of 5 magnitudes. Since a logarithmic scale is based on multiplicative factors, each magnitude corresponds to a factor of the 5th root of 100, or 2.512, in intensity. The magnitude scale is thus a logarithmic scale in base {100}^{1/5} = 2.512.
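This relationship is often written as Pogson's relation, m1 − m2 = −2.5 log10(I1/I2). A small Python sketch showing the conversion in both directions:

```python
import math

def magnitude_difference(i1, i2):
    """Pogson's relation: magnitude difference m1 - m2 from an intensity
    ratio; the brighter object (larger intensity) gets the smaller magnitude."""
    return -2.5 * math.log10(i1 / i2)

def intensity_ratio(m1, m2):
    """Inverse relation: intensity ratio I1/I2 from a magnitude difference."""
    return 10 ** (-0.4 * (m1 - m2))

# A factor of 100 in intensity is exactly 5 magnitudes:
print(round(magnitude_difference(100, 1), 6))  # -5.0 (brighter star is 5 mag lower)
print(round(intensity_ratio(1.0, 6.0), 3))     # 100.0
```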

Absolute magnitude

Apparent magnitudes describe how bright stars appear to be. However, they tell us nothing about the intrinsic brightness of the stars. Why? The apparent brightness of a star depends on two factors: the intrinsic brightness of the star, and the distance to the star. The Sun is not particularly bright as stars go, but it appears spectacularly bright to us because we live so close to it.

If we want to compare the intrinsic brightness of stars using the magnitude system, we have to level the playing field. We will have to imagine that all the stars are at the same distance, and then measure their apparent brightness. We define the absolute magnitude of an object as the apparent magnitude one would measure if the object were viewed from a distance of 10 parsecs (10 pc, where 1 pc = 3.26 light years). We denote absolute magnitude by an upper case M. As before, we denote such magnitudes measured through a V filter by the subscript V. The absolute magnitude is thus a measure of the intrinsic brightness of the object.
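The definition above is equivalent to the standard distance-modulus formula, M = m − 5 log10(d / 10 pc), for a distance d in parsecs (ignoring interstellar extinction). A short Python sketch using the Sun's figures quoted earlier:

```python
import math

def absolute_magnitude(apparent_m, distance_pc):
    """Absolute magnitude from apparent magnitude and distance in parsecs,
    ignoring extinction: M = m - 5 * log10(d / 10)."""
    return apparent_m - 5 * math.log10(distance_pc / 10)

# The Sun: apparent magnitude -26.7 at 1 AU, and 1 pc = 206265 AU.
sun_distance_pc = 1 / 206265
print(round(absolute_magnitude(-26.7, sun_distance_pc), 1))
# about 4.9, close to the +4.8 quoted above given the rounded inputs
```

Note how a star placed exactly at 10 pc has equal apparent and absolute magnitudes, as the definition requires.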


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1945 2023-10-29 00:14:33

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1949) Printing Technology


What is Printing Technology? Printing is a process for reproducing text and images, typically with ink on paper using a printing press. It is often carried out as a large-scale industrial process and is an essential part of publishing and transaction printing.


Printing is a process for mass reproducing text and images using a master form or template. The earliest non-paper products involving printing include cylinder seals and objects such as the Cyrus Cylinder and the Cylinders of Nabonidus. The earliest known form of printing as applied to paper was woodblock printing, which appeared in China before 220 AD for cloth printing. However, it would not be applied to paper until the seventh century. Later developments in printing technology include the movable type invented by Bi Sheng around 1040 AD and the printing press invented by Johannes Gutenberg in the 15th century. The technology of printing played a key role in the development of the Renaissance and the Scientific Revolution and laid the material basis for the modern knowledge-based economy and the spread of learning to the masses.

Conventional printing technology

All printing processes are concerned with two kinds of areas on the final output:

* Image area (printing areas)
* Non-image area (non-printing areas)

After the information has been prepared for production (the prepress step), each printing process has definitive means of separating the image from the non-image areas.

Conventional printing has four types of process:

* Planographics, in which the printing and non-printing areas are on the same plane surface and the difference between them is maintained chemically or by physical properties; examples: offset lithography, collotype, and screenless printing.
* Relief, in which the printing areas are on a plane surface and the non-printing areas are below the surface; examples: flexography and letterpress.
* Intaglio, in which the non-printing areas are on a plane surface and the printing areas are etched or engraved below the surface; examples: steel die engraving, gravure, etching, collagraph.
* Porous or Stencil, in which the printing areas are on fine mesh screens through which ink can penetrate, and the non-printing areas are a stencil over the screen to block the flow of ink in those areas; examples: screen printing, stencil duplicator, risograph.


Printing, traditionally, is a technique for applying under pressure a certain quantity of colouring agent onto a specified surface to form a body of text or an illustration. Certain modern processes for reproducing texts and illustrations, however, are no longer dependent on the mechanical concept of pressure or even on the material concept of colouring agent. Because these processes represent an important development that may ultimately replace the other processes, printing should probably now be defined as any of several techniques for reproducing texts and illustrations, in black and in colour, on a durable surface and in a desired number of identical copies. There is no reason why this broad definition should not be retained, for the whole history of printing is a progression away from those things that originally characterized it: lead, ink, and the press.

It is also true that, after five centuries during which printing has maintained a quasi-monopoly of the transmission or storage of information, this role is being seriously challenged by new audiovisual and information media. Printing, by the very magnitude of its contribution to the multiplication of knowledge, has helped engender radio, television, film, microfilm, tape recording, and other rival techniques. Nevertheless, its own field remains immense. Printing is used not merely for books and newspapers but also for textiles, plates, wallpaper, packaging, and billboards. It has even been used to manufacture miniature electronic circuits.

The invention of printing at the dawn of the age of the great discoveries was in part a response and in part a stimulus to the movement that, by transforming the economic, social, and ideological relations of civilization, would usher in the modern world. The economic world was marked by the high level of production and exchange attained by the Italian republics, as well as by the commercial upsurge of the Hanseatic League and the Flemish cities; social relations were marked by the decline of the landed aristocracy and the rise of the urban mercantile bourgeoisie; and the world of ideas reflected the aspirations of this bourgeoisie for a political role that would allow it to fulfill its economic ambitions. Ideas were affected by the religious crisis that would lead to the Protestant Reformation.

The first major role of the printed book was to spread literacy and then general knowledge among the new economic powers of society. In the beginning it was scorned by the princes. It is significant that the contents of the first books were often devoted to literary and scientific works as well as to religious texts, though printing was used to ensure the broad dissemination of religious material, first Catholic and, shortly, Protestant.

There is a material explanation for the fact that printing developed in Europe in the 15th century rather than in the Far East, even though the principle on which it is based had been known in the Orient long before. European writing was based on an alphabet composed of a limited number of abstract symbols. This simplified the problems involved in developing techniques for the use of movable type manufactured in series. Chinese handwriting, with its vast number of ideograms requiring some 80,000 symbols, lends itself only poorly to the requirements of typography. Partly for this reason, the unquestionably advanced Oriental civilization, of which the richness of its writing was evidence, underwent a slowing of its evolution in comparison with the formerly more backward Western civilizations.

Printing participated in and gave impetus to the growth and accumulation of knowledge. In each succeeding era there were more people who were able to assimilate the knowledge handed to them and to augment it with their own contribution. From Diderot’s encyclopaedia to the present profusion of publications printed throughout the world, there has been a constant acceleration of change, a process highlighted by the Industrial Revolution at the beginning of the 19th century and the scientific and technical revolution of the 20th.

At the same time, printing has facilitated the spread of ideas that have helped to shape alterations in social relations made possible by industrial development and economic transformations. By means of books, pamphlets, and the press, information of all kinds has reached all levels of society in most countries.

In view of the contemporary competition over some of its traditional functions, it has been suggested by some observers that printing is destined to disappear. On the other hand, this point of view has been condemned as unrealistic by those who argue that information in printed form offers particular advantages different from those of other audio or visual media. Radio scripts and television pictures report facts immediately but only fleetingly, while printed texts and documents, though they require a longer time to be produced, are permanently available and so permit reflection. Though films, microfilms, punch cards, punch tapes, tape recordings, holograms, and other devices preserve a large volume of information in small space, the information on them is available to human senses only through apparatus such as enlargers, readers, and amplifiers. Print, on the other hand, is directly accessible, a fact that may explain why the most common accessory to electronic calculators is a mechanism to print out the results of their operations in plain language. Far from being fated to disappear, printing seems more likely to experience an evolution marked by its increasingly close association with these various other means by which information is placed at the disposal of humankind.

Additional Information

There are many different types of printing methods available and they're continuing to evolve. Each type is suited to a different need, meaning that businesses can choose a printing technique that best highlights their products or service. So what are the different types of printing and how do they vary from each other?

Printing is something that's been around since before AD 220. The oldest known printing technique, woodcut, involves carving an image onto a wooden surface.

Printing has evolved a lot since then - instead of manual wood carving, you can choose from a wide range of technologically advanced methods. Here are six of the most well-known and commonly used types:

* Offset Lithography
* Flexography
* Digital Printing
* Large Format
* Screen Printing
* 3D Printing


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1946 2023-10-30 00:07:25

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1950) Pharmacy


Pharmacy is the art, practice, or profession of preparing, preserving, compounding, and dispensing medical drugs. A pharmacy is also a place where medicines are compounded or dispensed.


Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications, aiming to ensure the safe, effective, and affordable use of medicines. It is a multidisciplinary science, as it links health sciences with pharmaceutical sciences and natural sciences. The professional practice is becoming more clinically oriented, as most drugs are now manufactured by the pharmaceutical industry. Based on the setting, pharmacy practice is classified as either community or institutional pharmacy. Providing direct patient care in community or institutional pharmacies is considered clinical pharmacy.

The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care including clinical services, reviewing medications for safety and efficacy, and providing drug information. Pharmacists, therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients.

An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (the more common term in the United States) or a chemist's (more common in Great Britain, though pharmacy is also used). In the United States and Canada, drugstores commonly sell medicines as well as miscellaneous items such as confectionery, cosmetics, office supplies, toys, hair care products and magazines, and occasionally refreshments and groceries.

In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology, prior to the formulation of the scientific method.


Pharmacy is the science and art concerned with the preparation and standardization of drugs. Its scope includes the cultivation of plants that are used as drugs, the synthesis of chemical compounds of medicinal value, and the analysis of medicinal agents. Pharmacists are responsible for the preparation of the dosage forms of drugs, such as tablets, capsules, and sterile solutions for injection. They compound physicians’, dentists’, and veterinarians’ prescriptions for drugs. The science that embraces knowledge of drugs with special reference to the mechanism of their action in the treatment of disease is pharmacology.

History of pharmacy

The beginnings of pharmacy are ancient. When the first person expressed juice from a succulent leaf to apply to a wound, this art was being practiced. In the Greek legend, Asclepius, the god of the healing art, delegated to Hygieia the duty of compounding his remedies. She was his apothecary or pharmacist. The physician-priests of Egypt were divided into two classes: those who visited the sick and those who remained in the temple and prepared remedies for the patients.

In ancient Greece and Rome and during the Middle Ages in Europe, the art of healing recognized a separation between the duties of the physician and those of the herbalist, who supplied the physician with the raw materials from which to make medicines. The Arabian influence in Europe during the 8th century AD, however, brought about the practice of separate duties for the pharmacist and physician. The trend toward specialization was later reinforced by a law enacted by the city council of Bruges in 1683, forbidding physicians to prepare medications for their patients. In America, Benjamin Franklin took a pivotal step in keeping the two professions separate when he appointed an apothecary to the Pennsylvania Hospital.

The development of the pharmaceutical industry since World War II led to the discovery and use of new and effective drug substances. It also changed the role of the pharmacist. The scope for extemporaneous compounding of medicines was much diminished and with it the need for the manipulative skills that were previously applied by the pharmacist to the preparation of bougies, cachets, pills, plasters, and potions. The pharmacist continues, however, to fulfill the prescriber’s intentions by providing advice and information; by formulating, storing, and providing correct dosage forms; and by assuring the efficacy and quality of the dispensed or supplied medicinal product.

The practice of pharmacy:


The history of pharmaceutical education has closely followed that of medical education. As the training of the physician underwent changes from the apprenticeship system to formal educational courses, so did the training of the pharmacist. The first college of pharmacy was founded in the United States in 1821 and is now known as the Philadelphia College of Pharmacy and Science. Other institutes and colleges were established soon after in the United States, Great Britain, and continental Europe. Colleges of pharmacy as independent organizations or as schools of universities now operate in most developed countries of the world.

The course of instruction leading to a bachelor of science in pharmacy extends at least five years. The first and frequently the second year of training, embracing general education subjects, are often provided by a school of arts and sciences. Many institutions also offer graduate courses in pharmacy and cognate sciences leading to the degrees of master of science and doctor of philosophy in pharmacy, pharmacology, or related disciplines. These advanced courses are intended especially for those who are preparing for careers in research, manufacturing, or teaching in the field of pharmacy.

Since the treatment of the sick with drugs encompasses a wide field of knowledge in the biological and physical sciences, an understanding of these sciences is necessary for adequate pharmaceutical training. The basic five-year curriculum in the colleges of pharmacy of the United States, for example, embraces physics, chemistry, biology, bacteriology, physiology, pharmacology, and many other specialized courses. As the pharmacist is engaged in a business as well as a profession, special training is provided in merchandising, accounting, computer techniques, and pharmaceutical jurisprudence.

Licensing and regulation

To practice pharmacy in those countries in which a license is required, an applicant must be qualified by graduation from a recognized college of pharmacy, meet specific requirements for experience, and pass an examination conducted by a board of pharmacy appointed by the government.

Pharmacy laws generally include the regulations for the practice of pharmacy, the sale of poisons, the dispensing of narcotics, and the labeling and sale of dangerous drugs. The pharmacist sells and dispenses drugs within the provisions of the food and drug laws of the country in which he practices. These laws recognize the national pharmacopoeia (which defines products used in medicine, their purity, dosages, and other pertinent data) as the standard for drugs. The World Health Organization of the United Nations began publishing the Pharmacopoeia Internationalis in the early 1950s. Its purpose is to standardize drugs internationally and to supply standards, strengths, and nomenclature for those countries that have no national pharmacopoeia.


Pharmaceutical research, in schools of pharmacy and in the laboratories of the pharmaceutical manufacturing houses, embraces the organic chemical synthesis of new chemical agents for use as drugs and is also concerned with the isolation and purification of plant constituents that might be useful as drugs. Research in pharmacy also includes formulation of dosage forms of medicaments and study of their stability, methods of assay, and standardization.

Another facet of pharmaceutical research that has attracted wide medical attention is the “availability” to the body (bioavailability) of various dosage forms of drugs. Exact methods of determining levels of drugs in blood and organs have revealed that slight changes in the mode of manufacture or the incorporation of a small amount of inert ingredient in a tablet may diminish or completely prevent its absorption from the gastrointestinal tract, thus nullifying the action of the drug. Ingenious methods have been devised to test the bioavailability of dosage forms. Although such in vitro, or test-tube, methods are useful and indicative, the ultimate test of bioavailability is the patient’s response to the dosage form of the drug.

Licensing systems for new medicinal products in Europe and North America demand extensive and increasingly costly investigation and testing in the laboratory and in clinical trials to establish the efficacy and safety of new products in relation to the claims to be made for their use. Proprietary rights for innovation by the grant of patents and by the registration of trademarks have become increasingly important in the growth of the pharmaceutical industry and its development internationally.

The results of research in pharmacy are usually published in such journals as the Journal of Pharmacy and Pharmacology (London), the Journal of the American Pharmaceutical Association and the Journal of Pharmaceutical Sciences (Washington, D.C.), the American Journal of Pharmacy and the American Journal of Hospital Pharmacy (Philadelphia), and the Pharmaceutica Acta Helvetiae (Zürich).


There are numerous national and international organizations of pharmacists. The Pharmaceutical Society of Great Britain, established in 1841, is typical of pharmaceutical organizations. In the United States the American Pharmaceutical Association, established in 1852, is a society that embraces all pharmaceutical interests. Among the international societies is the Fédération Internationale Pharmaceutique, founded in 1910 and supported by some 50 national societies, for the advancement of the professional and scientific interests of pharmacy on a worldwide basis. The Pan American Pharmaceutical and Biochemical Federation includes the pharmaceutical societies in the various countries in the Western Hemisphere.

There are also other international societies in which history, teaching, and the military aspects of pharmacy are given special emphasis.

Additional Information

Pharmacists are healthcare professionals who specialize in the right way to use, store, preserve, and provide medicine. They can guide you on how to use medications, and let you know about any potential adverse effects of what you take. They fill prescriptions issued by doctors and other healthcare professionals.

Pharmacists also contribute to research and testing of new drugs. They work in pharmacies, medical clinics, hospitals, universities, and government institutions.

What Does a Pharmacist Do?

People have been using plants and other natural substances as medicine for thousands of years. However, the practice of professional pharmacy became its own separate professional field in the mid-nineteenth century.

Pharmacists distribute prescription drugs to individuals. They also provide advice to patients and other health professionals on how to use or take medication, the correct dose of a drug, and potential side effects. Plus, they can make sure that a drug won’t interact badly with other medications you take or health conditions you have.

They can also provide information about general health topics like diet and exercise, as well as advice on products like home healthcare supplies and medical equipment.

Compounding (the mixing of ingredients to form medications) is a very small part of a modern pharmacist's practice. Nowadays, pharmaceutical companies produce medicines and provide them to pharmacies, where pharmacists measure out the right dosage amounts for patients.

Education and Training

In order to become a pharmacist in the U.S., a person needs a Doctor of Pharmacy (PharmD) degree from an institution that is accredited by the Accreditation Council for Pharmacy Education (ACPE).

Even though admissions requirements vary depending on the university, all PharmD programs require students to take postsecondary courses in chemistry, biology, and physics. Additionally, pharmacy programs require at least 2 years of undergraduate study, with most requiring a bachelor’s degree. Students must also take the Pharmacy College Admissions Test (PCAT).

PharmD programs take about 4 years to finish. Additional coursework for a degree in this field includes courses in pharmacology and medical ethics. Students also complete internships in hospitals, clinics, or retail pharmacies to gain real-life experience.

Pharmacists must also take continuing education courses to keep up with the latest advances in pharmacological science.

Reasons to See a Pharmacist

Pharmacists are among the most easily accessible health care professionals. Every pharmacy has a licensed pharmacist, and you can speak to one without making an appointment. Some of the reasons to see a pharmacist include:

Answering Medical and Drug-Related Questions

Pharmacists are qualified to answer most medical or drug-related questions you may have. They can explain what each medication you’re taking is for, how you are supposed to take it, and what you can expect while on the medication.

Filling Your Prescriptions

Once you have a prescription from your doctor, you can take it to the pharmacy where the pharmacist will fill the order. If you get all of your prescriptions filled at the same pharmacy, they can better track your medicinal history and provide you with a written history if needed.

Safely Disposing of Unwanted Medicines

If you have any unused or unwanted medicines, it’s best to get rid of them so they don’t fall into the wrong hands. Taking them to the pharmacy is the best and safest way to dispose of them.

Simple Health Checks

Pharmacists are qualified to perform simple healthcare procedures like taking your blood pressure and temperature, testing your blood sugar levels, and checking your cholesterol. They can also diagnose everyday ailments like colds, flu, aches, pains, cuts, and rashes, just to name a few. They'll then be able to recommend the right treatment or let you know if you should see a doctor.


Vaccinations

You can get your annual flu shot and, in most states, other vaccines, too, at the pharmacy. Most of the time you do not need an appointment, and the whole process takes only a few minutes.

What to Expect at the Pharmacist

When visiting the pharmacist, you can expect that your personal and medical information will be protected and kept private. If you don’t want other customers to overhear your conversation or questions, you can ask the pharmacist to speak with you in a quiet, private area. You should feel comfortable asking them any questions you have, and they should be able to provide all the information you need regarding any medication you’re taking.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1947 2023-10-31 00:06:37

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1951) Notebook


A notebook is a small book for writing notes in.


A notebook (also known as a notepad, writing pad, drawing pad, or legal pad) is a book or stack of paper pages that are often ruled and used for purposes such as note-taking, journaling or other writing, drawing, or scrapbooking.


The earliest form of notebook was the wax tablet, which was used as a reusable and portable writing surface in classical antiquity and throughout the Middle Ages. As paper became more readily available in European countries from the 11th century onwards, wax tablets gradually fell out of use, although they remained relatively common in England, which did not possess a commercially successful paper mill until the late 16th century. While paper was cheaper than wax, its cost was sufficiently high to ensure the popularity of erasable notebooks, made of specially-treated paper that could be wiped clean and used again. These were commonly known as table-books, and are frequently referenced in Renaissance literature, most famously in Shakespeare's Hamlet: "My tables,—meet it is I set it down, That one may smile, and smile, and be a villain."

Despite the apparent ubiquity of such table-books in Shakespeare's time, very few examples have survived, and little is known about their exact nature, use, or history of production. The earliest extant edition, bound together with a printed almanac, was made in Antwerp, Belgium, in 1527. By the end of that decade, table-books were being imported into England, and they were being printed in London from the 1570s. At this time, however, it appears that the concept of an erasable notebook was still something of a novelty to the British public, as the printed instructions included with some books were headed: "To make clean your Tables when they be written on, which to some as yet is unknown." The leaves of some table-books were made of donkey skin; others had leaves of ivory or simple pasteboard. The coating was made from a mixture of glue and gesso, and modern-day experiments have shown that ink, graphite and silverpoint writing can be easily erased from the treated pages with the application of a wet sponge or fingertip. Other types of notebook may also have been in circulation during this time; 17th-century writer Samuel Hartlib describes a table-book made of slate, which did "not need such tedious wiping out by spunges or cloutes".

The leaves of a table-book could be written upon with a stylus, which added to their convenience, as it meant that impromptu notes could be taken without the need for an inkwell (graphite pencils were not in common use until the late 17th century). Table-books were owned by all classes of people, from merchants to nobles, and were employed for a variety of purposes:

Surviving copies suggest that at least some owners (and/or their children) used table-books as suitable places in which to learn how to write. Tables were also used for collecting pieces of poetry, noteworthy epigrams, and new words; recording sermons, legal proceedings, or parliamentary debates; jotting down conversations, recipes, cures, and jokes; keeping financial records; recalling addresses and meetings; and collecting notes on foreign customs while traveling.

Their use in some contexts was seen as pretentious; Joseph Hall, writing in 1608, describes "the hypocrite" as one who, "in the midst of the sermon pulls out his tables in haste, as if he feared to lose that note". The practice of making notes during sermons was a common subject of ridicule, and led to table-books becoming increasingly associated with Puritanism during the 17th century.

By the early 19th century, there was far less demand for erasable notebooks, due to the mass-production of fountain pens and the development of cheaper methods for manufacturing paper. Ordinary paper notebooks became the norm. During the Enlightenment, British schoolchildren were commonly taught how to make their own notebooks out of loose sheets of paper, a process that involved folding, piercing, gathering, sewing and/or binding the sheets.

Legal pad

According to legend, Thomas W. Holley of Holyoke, Massachusetts, invented the legal pad around 1888, when he hit upon the idea of collecting sortings - sub-standard paper scraps of various sorts from various factories - and stitching them together in order to sell them as pads at an affordable and fair price. In about 1900, these pads evolved into the modern, traditionally yellow legal pad when a local judge requested that a margin be drawn on the left side of the paper. This was the first legal pad. The only technical requirement for this type of stationery to be considered a true "legal pad" is that it must have a margin of 1.25 inches (3.175 centimeters) from the left edge. This margin, also known as the down line, is room used to write notes or comments. Legal pads usually have a gum binding at the top instead of a spiral or stitched binding.
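As a quick sanity check on the margin figure above, the inch-to-centimetre conversion takes only a couple of lines (a minimal sketch; the factor 2.54 cm per inch is exact by the international definition of the inch):

```python
# Convert the legal-pad margin width from inches to centimetres.
CM_PER_INCH = 2.54  # exact by definition

margin_in = 1.25
margin_cm = margin_in * CM_PER_INCH
print(f"{margin_in} in = {margin_cm} cm")  # 1.25 in = 3.175 cm
```

The same two lines convert any of the other paper measurements mentioned in these posts.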

In 1902, J.A. Birchall of Birchalls, a Launceston, Tasmania, Australia-based stationery shop, decided that the cumbersome method of selling writing paper in folded stacks of "quires" (four sheets of paper or parchment folded to form eight leaves) was inefficient. As a solution, he glued together a stack of halved sheets of paper, supported by a sheet of cardboard, creating what he called the "Silver City Writing Tablet".

Binding and cover

Principal types of binding are padding, perfect, spiral, comb, sewn, clasp, disc, and pressure, some of which can be combined. Binding methods can affect whether a notebook can lie flat when open and whether the pages are likely to remain attached. The cover material is usually distinct from the writing surface material, more durable, more decorative, and more firmly attached. It also is stiffer than the pages, even taken together. Cover materials should not contribute to damage or discomfort. It is frequently cheaper to purchase notebooks that are spiral-bound, meaning that a spiral of wire is looped through large perforations at the top or side of the page. Other bound notebooks are available that use glue to hold the pages together; this process is "padding." Today, it is common for pages in such notebooks to include a thin line of perforations that make it easier to tear out the page. Spiral-bound pages can be torn out, but frequently leave thin scraggly strips from the small amount of paper that is within the spiral, as well as an uneven rip along the top of the torn-out page. Hard-bound notebooks include a sewn spine, and the pages are not easily removed. Some styles of sewn bindings allow pages to open flat, while others cause the pages to drape.

Variations of notebooks that allow pages to be added, removed, and replaced are bound by rings, rods, or discs. In each of these systems, the pages are modified with perforations that facilitate the specific binding mechanism's ability to secure them. Ring-bound and rod-bound notebooks secure their contents by threading perforated pages around straight or curved prongs. In the open position, the pages can be removed and rearranged. In the closed position, the pages are kept in order. Disc-bound notebooks remove the open or closed operation by modifying the pages themselves. A page perforated for a disc-bound binding system contains a row of teeth along the side edge of the page that grip onto the outside raised perimeter of individual discs.


Notebooks used for drawing and scrapbooking are usually blank. Notebooks for writing usually have some kind of printing on the writing material, if only lines to align writing or facilitate certain kinds of drawing. Inventor's notebooks have page numbers preprinted to support priority claims. They may be considered as grey literature. Many notebooks have graphic decorations. Personal organizers can have various kinds of preprinted pages.


Artists often use large notebooks, which include wide expanses of blank paper appropriate for drawing. They may also use thicker paper if painting or using a variety of mediums in their work. Although large, artists' notebooks are usually also quite lightweight, because artists tend to take their notebooks with them everywhere to draw scenery. Similarly, composers use notebooks for writing their lyrics. Lawyers use rather large notebooks known as legal pads that contain lined paper (often yellow) and are appropriate for use on tables and desks. These horizontal lines or "rules" are sometimes classified according to their spacing, with "wide rule" the farthest apart, "college rule" closer, "legal rule" slightly closer and "narrow rule" closest, allowing more lines of text per page. When sewn into a pasteboard backing, these may be called composition books, or in smaller signatures may be called "blue books" or exam books and used for essay exams.

Various notebooks are popular among students for taking notes. The types of notebooks used for school work include single line, double line, four line, and square grid. These notebooks are also used by students for school assignments (homework) and writing projects.

In contrast, journalists prefer small, hand-held notebooks for portability (reporters' notebooks), and sometimes use shorthand when taking notes. Scientists and other researchers use lab notebooks to document their experiments. The pages in lab notebooks are sometimes graph paper to plot data. Police officers are required to write notes on what they observe, using a police notebook. Land surveyors commonly record field notes in durable, hard-bound notebooks called "field books."

Coloring enthusiasts use coloring notebooks for stress relief. The pages in coloring notebooks contain different adult coloring pages. Students take notes in notebooks, and studies suggest that the act of writing (as opposed to typing) improves learning.

Notebook pages can be recycled via standard paper recycling. Recycled notebooks are available, differing in recycled percentage and paper quality.

Electronic successors

Since the late 20th century, many attempts have been made to integrate the simplicity of a notebook with the editing, searching, and communication capacities of computers through the development of note-taking software. Laptop computers began to be called notebooks when they reached a small size in the mid-1990s, but they did not have any special note-taking ability. Personal digital assistants (PDAs) came next, integrating small liquid crystal displays with a touch-sensitive layer to input graphics and written text. Later on, this role was taken over by smartphones and tablets.

Digital paper combines the simplicity of a traditional pen and notebook with digital storage and interactivity. By printing an invisible dot pattern on the notebook paper and using a pen with a built-in infrared camera, the written text can be transferred to a laptop, mobile phone or back office for storage and processing.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1948 2023-11-01 00:11:52

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1952) Textbook


A textbook is a book used in the study of a subject: one containing a presentation of the principles of a subject, or a literary work relevant to the study of a subject. Put another way, a textbook is a book about a particular subject that is used in the study of that subject, especially in a school.


A textbook is a book containing a comprehensive compilation of content in a branch of study with the intention of explaining it. Textbooks are produced to meet the needs of educators, usually at educational institutions. Schoolbooks are textbooks and other books used in schools. Today, many textbooks are published in both print and digital formats.


The history of textbooks dates back to ancient civilizations. For example, Ancient Greeks wrote educational texts. The modern textbook has its roots in the mass production made possible by the printing press. Johannes Gutenberg himself may have printed editions of Ars Minor, a schoolbook on Latin grammar by Aelius Donatus. Early textbooks were used by tutors and teachers (e.g. alphabet books), as well as by individuals who taught themselves.

The Greek philosopher Socrates lamented the loss of knowledge because the media of transmission were changing. Before the invention of the Greek alphabet 2,500 years ago, knowledge and stories were recited aloud, much like Homer's epic poems. The new technology of writing meant stories no longer needed to be memorized, a development Socrates feared would weaken the Greeks' mental capacities for memorizing and retelling. (Ironically, we know about Socrates' concerns only because they were written down by his student Plato in his famous Dialogues.)

The next revolution in the field of books came with the 15th-century invention of printing with changeable type. The invention is attributed to German metalsmith Johannes Gutenberg, who cast type in molds using a melted metal alloy and constructed a wooden-screw printing press to transfer the image onto paper.

Gutenberg's first and only large-scale printing effort was the now iconic Gutenberg Bible in the 1450s – a Latin translation from the Hebrew Old Testament and the Greek New Testament. Gutenberg's invention made mass production of texts possible for the first time. Although the Gutenberg Bible itself was expensive, printed books began to spread widely over European trade routes during the next 50 years, and by the 16th century, printed books had become more widely accessible and less costly.

While many textbooks were already in use, compulsory education and the resulting growth of schooling in Europe led to the printing of many more textbooks for children. Textbooks have been the primary teaching instrument for most children since the 19th century. Two textbooks of historical significance in United States schooling were the 18th century New England Primer and the 19th century McGuffey Readers.

Recent technological advances have changed the way people interact with textbooks. Online and digital materials are making it increasingly easy for students to access materials other than the traditional print textbook. Students now have access to electronic books ("e-books"), online tutoring systems and video lectures. An example of an e-book is Principles of Biology from Nature Publishing.

Most notably, an increasing number of authors are avoiding commercial publishers and instead offering their textbooks under a Creative Commons or other open license.


As in many industries, the number of providers has declined in recent years (there are just a handful of major textbook companies in the United States). Also, elasticity of demand is fairly low. The term "broken market" appeared in the economist James Koch's analysis of the market commissioned by the Advisory Committee on Student Financial Assistance.

In the United States, the largest textbook publishers are Pearson Education, Cengage, McGraw-Hill Education, and Wiley. Together they control 90% of market revenue. Another textbook publisher is Houghton Mifflin Harcourt.

The market for textbooks doesn't reflect classic supply and demand because of agency problems: the instructor who selects a textbook is not the student who must pay for it.

Some students save money by buying used copies of textbooks, which tend to be less expensive and are available from many college bookstores in the US, which buy them back from students at the end of a term. Books that are not being reused at the school are often purchased by an off-campus wholesaler for 0–30% of the new cost, for distribution to other bookstores. Some textbook companies have countered this by encouraging teachers to assign homework that must be done on the publisher's website. Students with a new textbook can use the passcode in the book to register on the site; otherwise they must pay the publisher to access the website and complete assigned homework.

Students who look beyond the campus bookstore can typically find lower prices. With the ISBN or title, author and edition, most textbooks can be located through online used booksellers or retailers.

Most leading textbook companies publish a new edition every 3 or 4 years, more frequently in math and science. Harvard economics chair James K. Stock has stated that new editions are often not about significant improvements to the content. "New editions are to a considerable extent simply another tool used by publishers and textbook authors to maintain their revenue stream, that is, to keep up prices." A study conducted by The Student PIRGs found that a new edition costs 12% more than a new copy of the previous edition (not surprising if the old version is obsolete), and 58% more than a used copy of the previous edition. Textbook publishers maintain these new editions are driven by demand from teachers. That study found that 76% of teachers said new editions were justified "half of the time or less" and 40% said they were justified "rarely" or "never". The PIRG study has been criticized by publishers, who argue that the report contains factual inaccuracies regarding the annual average cost of textbooks per student.
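The percentages from the PIRG study can be turned into concrete price comparisons. The sketch below is illustrative arithmetic only: the 12% and 58% figures come from the study as quoted above, but the $100 baseline price is a hypothetical round number, not data from the study.

```python
def edition_prices(new_edition_price):
    """Back out the comparison prices implied by the study's percentages."""
    # A new edition costs 12% more than a new copy of the previous edition...
    prev_edition_new = new_edition_price / 1.12
    # ...and 58% more than a used copy of the previous edition.
    prev_edition_used = new_edition_price / 1.58
    return round(prev_edition_new, 2), round(prev_edition_used, 2)

# Hypothetical $100 new edition: the previous edition would run about
# $89 new and $63 used.
print(edition_prices(100.00))  # (89.29, 63.29)
```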

The Student PIRGs also point out that recent emphasis on e-textbooks does not always save students money. Even though the book costs less up-front, the student will not recover any of the cost through resale.

Bundling in the United States

Another publishing industry practice that has been highly criticized is "bundling", or shrink-wrapping supplemental items into a textbook. Supplemental items range from CD-ROMs and workbooks to online passcodes and bonus material. Students often cannot buy these things separately, and often the one-time-use supplements destroy the resale value of the textbook.

According to the Student PIRGs, the typical bundled textbook costs 10%–50% more than an unbundled textbook, and 65% of professors said they "rarely" or "never" use the bundled items in their courses.

A 2005 Government Accountability Office (GAO) Report in the United States found that the production of these supplemental items was the primary cause of rapidly increasing prices:

While publishers, retailers, and wholesalers all play a role in textbook pricing, the primary factor contributing to increases in the price of textbooks has been the increased investment publishers have made in new products to enhance instruction and learning...While wholesalers, retailers, and others do not question the quality of these materials, they have expressed concern that the publishers' practice of packaging supplements with a textbook to sell as one unit limits the opportunity students have to purchase less expensive used books....If publishers continue to increase these investments, particularly in technology, the cost to produce a textbook is likely to continue to increase in the future.

Bundling has also been used to segment the used book market. Each combination of a textbook and supplemental items receives a separate ISBN. A single textbook could therefore have dozens of ISBNs that denote different combinations of supplements packaged with that particular book. When a bookstore attempts to track down used copies of textbooks, they will search for the ISBN the course instructor orders, which will locate only a subset of the copies of the textbook.
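The effect described above can be sketched as a simple lookup. Every ISBN and bundle name here is invented for the example; the point is only that the same underlying textbook hides behind several ISBNs, so a search for the one ISBN the instructor ordered finds only a subset of the used copies.

```python
# Hypothetical inventory: one textbook sold under three bundle ISBNs.
inventory = {
    "978-0-00-000001-1": "Textbook alone",
    "978-0-00-000002-8": "Textbook + CD-ROM",
    "978-0-00-000003-5": "Textbook + online passcode",
}

# A bookstore searching by the single ISBN the instructor ordered
# misses the other two combinations of the same book.
ordered_isbn = "978-0-00-000002-8"
matches = [isbn for isbn in inventory if isbn == ordered_isbn]
print(len(matches), "of", len(inventory), "copies found")  # 1 of 3 copies found
```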

Legislation at state and federal levels seeks to limit the practice of bundling by requiring publishers to offer all components separately. Publishers have testified in favor of bills including this provision, but only when the provision exempts the loosely defined category of "integrated textbooks". The federal bill exempts only third-party materials in integrated textbooks; however, publisher lobbyists have attempted to create a loophole through this definition in state bills.

Price disclosure

Given that the problem of high textbook prices is linked to the "broken" economics of the market, requiring publishers to disclose textbook prices to faculty is a solution pursued by a number of legislatures. By inserting price into sales interactions, this regulation will supposedly make the economic forces operate more normally.

So far, no data suggest that this is in fact true. The Student PIRGs have found that publishers actively withhold pricing information from faculty, making it difficult to obtain. Their most recent study found that 77% of faculty say publisher sales representatives do not volunteer prices, and only 40% got an answer when they asked directly. Furthermore, the study found that only 23% of faculty rated publisher websites as "informative and easy to use", and less than half said the sites typically listed the price.

The US Congress passed a law in the 2008 Higher Education Opportunity Act that would require price disclosure. Legislation requiring price disclosure has passed in Connecticut, Washington, Minnesota, Oregon, Arizona, Oklahoma, and Colorado. Publishers are currently supporting price disclosure mandates, though they insist that the "suggested retail price" should be disclosed, rather than the actual price the publisher would get for the book.

Used textbook market

Once a textbook is purchased from a retailer for the first time, there are several ways a student can sell their textbooks back at the end of the semester or later. Students can sell to 1) the college/university bookstore; 2) fellow students; 3) numerous online websites; or 4) a student swap service.

Campus buyback

As for buyback on a specific campus, faculty decisions largely determine how much a student receives. If a professor chooses to use the same book the following semester, even if it is a custom text, designed specifically for an individual instructor, bookstores often buy the book back. The GAO report found that, generally, if a book is in good condition and will be used on the campus again the next term, bookstores will pay students 50 percent of the original price paid. If the bookstore has not received a faculty order for the book at the end of the term and the edition is still current, they may offer students the wholesale price of the book, which could range from 5 to 35 percent of the new retail price, according to the GAO report.

When students resell their textbooks during campus "buyback" periods, these textbooks are often sold into the national used textbook distribution chain. If a textbook is not going to be used on campus for the next semester of courses, the college bookstore will often sell that book to a national used book company. The used book company then resells the book to another college bookstore. Finally, that book is sold as used to a student at another college at a price that is typically 75% of the new book price. At each step, a markup is applied to the book to enable the respective companies to continue to operate.
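The chain above can be summarized with a little arithmetic. In this sketch, only the 75%-of-new final resale price comes from the text; the 30% wholesale buyback figure is a hypothetical placeholder drawn from the 0–30% and 5–35% ranges mentioned elsewhere in this post, not a reported average.

```python
def used_book_chain(new_price, wholesale_buyback_pct=0.30, final_resale_pct=0.75):
    """Trace one book through the buyback-and-resale chain."""
    student_receives = new_price * wholesale_buyback_pct   # wholesaler buys from first student
    next_student_pays = new_price * final_resale_pct       # resold used at another campus
    chain_margin = next_student_pays - student_receives    # spread shared along the chain
    return (round(student_receives, 2),
            round(next_student_pays, 2),
            round(chain_margin, 2))

# On a hypothetical $100 book, the first student gets $30, the next
# student pays $75, and $45 is split among the intermediaries.
print(used_book_chain(100.0))  # (30.0, 75.0, 45.0)
```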

Student to student sales

Students can also sell or trade textbooks among themselves. After completing a course, sellers will often seek out members of the next enrolling class, people who are likely to be interested in purchasing the required books. This may be done by posting flyers to advertise the sale of the books or simply soliciting individuals who are shopping in the college bookstore for the same titles. Many larger schools have independent websites set up for the purpose of facilitating such trade. These often operate much like digital classified ads, enabling students to list their items for sale and browse for those they wish to acquire. Also, at the US Air Force Academy, it is possible to e-mail entire specific classes, allowing for an extensive network of textbook sales to exist.

Student online marketplaces

Online marketplaces are one of the two major types of online websites students can use to sell used textbooks. Online marketplaces may have an online auction format or may allow the student to list their books for a fixed price. In either case, the student must create the listing for each book themselves and wait for a buyer to order, making the use of marketplaces a more passive way of selling used textbooks. Unlike campus buyback and online book buyers, students are unlikely to sell all their books to one buyer using online marketplaces, and will likely have to send out multiple books individually.

Online book buyers

Online book buyers buy textbooks, and sometimes other types of books, with the aim of reselling them for a profit. Like online marketplaces, online book buyers operate year-round, giving students the opportunity to sell their books even when campus "buyback" periods are not in effect. Online book buyers, who are often online book sellers as well, will sometimes disclose whether or not a book can be sold back prior to purchase. Students enter the ISBNs of the books they wish to sell and receive a price quote or offer. These online book buyers often offer "free shipping" (which in actuality is built into the offer for the book), and allow students to sell multiple books to the same source. Because online book buyers are buying books for resale, the prices they offer may be lower than students can get on online marketplaces. However, their prices are competitive, and they tend to focus on the convenience of their service. Some even claim that buying used textbooks online and selling them to online book buyers has a lower total cost than even textbook rental services.

Textbook exchanges

In response to escalating textbook prices, limited competition, and to provide a more efficient system to connect buyers and sellers together, online textbook exchanges were developed. Most of today's sites handle buyer and seller payments, and usually deduct a small commission only after the sale is completed.

According to textbook author Henry L. Roediger (and Wadsworth Publishing Company senior editor Vicki Knight), the used textbook market is illegitimate, and entirely to blame for the rising costs of textbooks. As methods of "dealing with this problem", he recommends making previous editions of textbooks obsolete, binding the textbook with other materials, and passing laws to prevent the sale of used books. The concept is not unlike the limited licensing approach for computer software, which places rigid restrictions on resale and reproduction. The intent is to make users understand that the content of any textbook is the intellectual property of the author and/or the publisher, and that, as such, it is subject to copyright. Obviously, this idea is completely opposed to the millennia-old tradition of the sale of used books, and would make that entire industry illegal.


Another way to save money while obtaining required materials is the e-textbook. The article "E books rewrite the rules of education" states that, instead of spending a lot of money on textbooks, you can purchase an e-textbook at a fraction of the cost. With the growth of digital applications for the iPhone, and gadgets like the Amazon Kindle, e-textbooks are not an innovation, but have been "gaining momentum". According to the article "Are textbooks obsolete?", publishers and editors are concerned about the issue of expensive textbooks. "The expense of textbooks is a concern for students, and e-textbooks address the face of the issue," Williams says. "As publishers we understand the high cost of these materials, and the electronic format permits us to diminish the general expense of our content to the market." E-textbook applications facilitate experiences similar to physical textbooks by allowing the user to highlight and take notes in-page. These applications also extend textbook learning by providing quick definitions, reading the text aloud, and search functionality.

(PIRG: Public Interest Research Groups (PIRGs) are a federation of U.S. and Canadian non-profit organizations that employ grassroots organizing and direct advocacy on issues such as consumer protection, public health and transportation. The PIRGs are closely affiliated with the Fund for the Public Interest, which conducts fundraising and canvassing on their behalf.)


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1949 2023-11-02 00:12:00

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1953) Notebook Computer


What is a notebook computer?

A notebook computer is a battery- or AC-powered personal computer generally smaller than a briefcase that can easily be transported and conveniently used in temporary spaces, such as airplanes, libraries, temporary offices and meetings. A notebook computer is more often called a laptop and typically weighs less than 5 pounds and is 3 inches or less in thickness.

Among the best-known makers of notebook and laptop computers are Acer, Apple, Asus, Dell, Hewlett-Packard, Lenovo and Microsoft. Some former top notebook makers have left the business, such as IBM and Toshiba, or gone out of business, like Compaq.

Characteristics of a notebook computer

Notebook computers generally cost more than desktop computers with the same capabilities because these portable computers are more difficult to design and manufacture. A notebook can effectively be turned into a desktop computer with a docking station, which is a hardware frame that supplies connections for peripheral input/output devices, such as printers and monitors. Port replicators can also be used to connect a notebook to peripherals through a single plug.

Notebooks include the following components:

* Embedded central processing unit, such as ones made by Intel or AMD.
* Motherboard.
* Operating system, most often a version of Microsoft Windows.
* Processors to handle peripherals and video displays.
* Keyboard.
* Cursor control device, typically a touchpad.
* Internal storage such as a hard disk drive or, more commonly, a solid-state drive.
* Battery-based power supply used when external power is unavailable.

Notebook displays typically use thin-screen technology. Thin film transistor screens are active matrix liquid crystal displays that are brighter and easier to view at different angles than a super-twisted nematic or dual-scan screen.

Contemporary notebooks usually come with high-definition screens with resolutions of 1920 x 1200 or better. Some laptops have touch screens, so on-screen elements can be manipulated similar to a tablet or smartphone.

Notebooks have traditionally used several approaches for integrating cursor control into the keyboard, including the touchpad, the trackball and the pointing stick. The touchpad is generally the most popular device for cursor control today. A serial port -- typically a USB port -- enables a mouse and other devices, such as memory sticks and larger external displays, to be attached.

Most notebooks include a slot where an SD card can be inserted and mounted; adapters enable microSD cards to be read in the same slot. Older notebooks included a PC card interface for additional hardware such as a modem or network interface card. DVD drives are sometimes built into notebooks, but they're more likely to be external devices attached via a USB port.

Benefits of notebooks

The advantages of using a notebook computer include the following:

* Portability and convenience. Notebooks can perform most of the same functions as a desktop workstation, but their portability, light weight and compact size make them more versatile than a desk-bound computer. Traveling employees can take their notebook with them, along with any application and data access they need.
* Workplace flexibility. Mobile computing devices, like notebooks, enable workers to do their jobs wherever they're needed. During the COVID-19 pandemic, notebook computers enabled many people to work remotely when unable to come into their offices or when they were needed at home to care for children.
* Cost savings. Though more expensive upfront, notebooks can save money when organizations don't need to buy employees multiple computing devices.

Drawbacks of notebooks

* Limited advanced capabilities. Notebooks might not have the processing power needed for performance-intensive applications, such as when advanced graphics and design capabilities are needed or for compute-intensive applications such as analyzing large volumes of data.
* Cost. Depending on the manufacturer and device configuration, notebooks can often be more expensive than traditional desktop systems.
* Battery issues. Battery life can limit a notebook's usefulness in some circumstances.
* Size. Notebooks are smaller than desktops, but they're still larger than slimmer, lighter netbook-style laptops, as well as tablets and smartphones. Their smaller screen size can also be a disadvantage for some applications.

Additional Information

A laptop computer or notebook computer, also known as a laptop or notebook for short, is a small, portable personal computer (PC). Laptops typically have a clamshell form factor with a flat panel screen (usually 11–17 in or 280–430 mm in diagonal size) on the inside of the upper lid and an alphanumeric keyboard and pointing device (such as a trackpad and/or trackpoint) on the inside of the lower lid, although 2-in-1 PCs with a detachable keyboard are often marketed as laptops or as having a "laptop mode". Most of the computer's internal hardware is fitted inside the lower lid enclosure under the keyboard, although many laptops have a built-in webcam at the top of the screen and some modern ones even feature a touch-screen display. In most cases, unlike tablet computers which run on mobile operating systems, laptops tend to run on desktop operating systems which have been traditionally associated with desktop computers.

Laptops run on both an AC power supply and a rechargeable battery pack and can be folded shut for convenient storage and transportation, making them suitable for mobile use. Today, laptops are used in a variety of settings, such as at work (especially on business trips), in education, for playing games, web browsing, for personal multimedia, and for general home computer use.

The names "laptop" and "notebook" refer to the fact that the computer can be practically placed on (or on top of) the user's lap and can be used similarly to a notebook. As of 2022, in American English, the terms "laptop" and "notebook" are used interchangeably; in other dialects of English, one or the other may be preferred. Although the term "notebook" originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the term has come to mean the same thing and no longer refers to any specific size.

Laptops combine many of the input/output components and capabilities of a desktop computer into a single unit, including a display screen, small speakers, a keyboard, and a pointing device (such as a touch pad or pointing stick). Most modern laptops include a built-in webcam and microphone, and many also have a touchscreen. Laptops can be powered by an internal battery or an external power supply by using an AC adapter. Hardware specifications may vary significantly between different types, models, and price points.

Design elements, form factors, and construction can also vary significantly between models depending on the intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low-production-cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or traveling sales representatives. As portable computers evolved into modern laptops, they became widely used for a variety of purposes.


As the personal computer (PC) became feasible in 1971, the idea of a portable personal computer soon followed. A "personal, portable information manipulator" was imagined by Alan Kay at Xerox PARC in 1968, and described in his 1972 paper as the "Dynabook". The IBM Special Computer APL Machine Portable (SCAMP) was demonstrated in 1973. This prototype was based on the IBM PALM processor. The IBM 5100, the first commercially available portable computer, appeared in September 1975, and was based on the SCAMP prototype.

As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The first "laptop-sized notebook computer" was the Epson HX-20, invented (patented) by Suwa Seikosha's Yukio Yokozawa in July 1980, introduced at the COMDEX computer show in Las Vegas by Japanese company Seiko Epson in 1981, and released in July 1982. It had an LCD screen, a rechargeable battery, and a calculator-size printer, in a 1.6 kg (3.5 lb) chassis, the size of an A4 notebook. It was described as a "laptop" and "notebook" computer in its patent.

The portable microcomputer Portal, of the French company R2E Micral CCMC, officially appeared in September 1980 at the Sicob show in Paris. It was a portable microcomputer designed and marketed by the studies and developments department of R2E Micral at the request of the company CCMC, which specialized in payroll and accounting. It was based on an 8-bit Intel 8085 processor clocked at 2 MHz. It was equipped with 64 KB of central RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk drive with a capacity of 140,000 characters, a thermal printer with a speed of 28 characters per second, an asynchronous channel, and a 220 V power supply. It weighed 12 kg and its dimensions were 45 × 45 × 15 cm. It provided total mobility. Its operating system was aptly named Prologue.

The Osborne 1, released in 1981, was a luggable computer that used the Zilog Z80 CPU and weighed 24.5 pounds (11.1 kg). It had no battery, a 5 in (13 cm) cathode-ray tube (CRT) screen, and dual 5.25 in (13.3 cm) single-density floppy drives. Both Tandy/RadioShack and Hewlett-Packard (HP) also produced portable computers of varying designs during this period. The first laptops using the flip form factor appeared in the early 1980s. The Dulmont Magnum was released in Australia in 1981–82, but was not marketed internationally until 1984–85. The US$8,150 (equivalent to $24,710 in 2022) GRiD Compass 1101, released in 1982, was used at NASA and by the military, among others. The Sharp PC-5000, Ampere, and Gavilan SC were released in 1983. The Gavilan SC was described as a "laptop" by its manufacturer, while the Ampere had a modern clamshell design. The Toshiba T1100 won acceptance not only among PC experts but also in the mass market as a way to have PC portability.

From 1983 onward, several new input techniques were developed and included in laptops, including the touch pad (Gavilan SC, 1983), the pointing stick (IBM ThinkPad 700, 1992), and handwriting recognition (Linus Write-Top, 1987). Some CPUs, such as the 1990 Intel i386SL, were designed to use minimum power to increase battery life of portable computers and were supported by dynamic power management features such as Intel SpeedStep and AMD PowerNow! in some designs.

Displays reached 640x480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991, with increases in resolution and screen size occurring frequently until the introduction of 17" screen laptops in 2003. Hard drives started to be used in portables, encouraged by the introduction of 3.5" drives in the late 1980s, and became common in laptops starting with the introduction of 2.5" and smaller drives around 1990; capacities have typically lagged behind physically larger desktop drives.

Common resolutions of laptop webcams are 720p (HD), and in lower-end laptops 480p. The earliest known laptops with 1080p (Full HD) webcams like the Samsung 700G7C were released in the early 2010s.

Optical disc drives became common in full-size laptops around 1997; this initially consisted of CD-ROM drives, which were supplanted by CD-R, DVD, and Blu-ray drives with writing capability over time. Starting around 2011, the trend shifted against internal optical drives, and as of 2022, they have largely disappeared, but are still readily available as external peripherals.


While the terms laptop and notebook are used interchangeably today, there are some questions as to the original etymology and specificity of either term. The term laptop was coined in 1983 to describe a mobile computer which could be used on one's lap, and to distinguish these devices from earlier and much heavier portable computers (informally called "luggables"). The term notebook appears to have gained currency somewhat later as manufacturers started producing even smaller portable devices, further reducing their weight and size and incorporating a display roughly the size of A4 paper; these were marketed as notebooks to distinguish them from bulkier mainstream or desktop replacement laptops.

Types of laptops

Since the introduction of portable computers during the late 1970s, their form has changed significantly, spawning a variety of visually and technologically differing subclasses. Except where there is a distinct legal trademark around a term (notably, Ultrabook), there are rarely hard distinctions between these classes and their usage has varied over time and between different sources. Since the late 2010s, the use of more specific terms has become less common, with sizes distinguished largely by the size of the screen.

Smaller and larger laptops

There were in the past a number of marketing categories for smaller and larger laptop computers; these included "subnotebook" models, low-cost "netbooks", "ultra-mobile PCs" whose size class overlapped with devices like smartphones and handheld tablets, and "desktop replacement" laptops for machines notably larger and heavier than typical, built to operate more powerful processors or graphics hardware. All of these terms have fallen out of favor as the size of mainstream laptops has gone down and their capabilities have gone up; except for niche models, laptop sizes tend to be distinguished by the size of the screen, and for more powerful models, by any specialized purpose the machine is intended for, such as a "gaming laptop" or a "mobile workstation" for professional use.

Convertible, hybrid, 2-in-1

The latest trend of technological convergence in the portable computer industry spawned a broad range of devices, which combined features of several previously separate device types. The hybrids, convertibles, and 2-in-1s emerged as crossover devices, which share traits of both tablets and laptops. All such devices have a touchscreen display designed to allow users to work in a tablet mode, using either multi-touch gestures or a stylus/digital pen.

Convertibles are devices with the ability to conceal a hardware keyboard. Keyboards on such devices can be flipped, rotated, or slid behind the back of the chassis, thus transforming from a laptop into a tablet. Hybrids have a keyboard detachment mechanism, and due to this feature, all critical components are situated in the part with the display. 2-in-1s can have a hybrid or a convertible form, often dubbed 2-in-1 detachable and 2-in-1 convertibles respectively, but are distinguished by the ability to run a desktop OS, such as Windows 10. 2-in-1s are often marketed as laptop replacement tablets.

2-in-1s are often very thin, around 10 millimetres (0.39 in), and light devices with a long battery life. 2-in-1s are distinguished from mainstream tablets as they feature an x86-architecture CPU (typically a low- or ultra-low-voltage model), such as the Intel Core i5, run a full-featured desktop OS like Windows 10, and have a number of typical laptop I/O ports, such as USB 3 and Mini DisplayPort.

2-in-1s are designed to be used not only as a media consumption device but also as valid desktop or laptop replacements, due to their ability to run desktop applications, such as Adobe Photoshop. It is possible to connect multiple peripheral devices, such as a mouse, keyboard, and several external displays to a modern 2-in-1.

Microsoft Surface Pro-series devices and the Surface Book are examples of modern 2-in-1 detachables, whereas Lenovo Yoga-series computers are a variant of 2-in-1 convertibles. While the older Surface RT and Surface 2 have the same chassis design as the Surface Pro, their use of ARM processors and Windows RT does not classify them as 2-in-1s, but as hybrid tablets. Similarly, a number of hybrid laptops run a mobile operating system, such as Android. These include Asus's Transformer Pad devices, examples of hybrids with a detachable keyboard design, which do not fall into the category of 2-in-1s.

Rugged laptop

A rugged laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures, and wet or dusty environments. Rugged laptops are bulkier, heavier, and much more expensive than regular laptops, and thus are seldom seen in regular consumer use.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#1950 2023-11-03 00:11:19

Jai Ganesh
Registered: 2005-06-28
Posts: 47,159

Re: Miscellany

1954) Book


A book is a number of pieces of paper, usually with words printed on them, which are fastened together and fixed inside a cover of stronger paper or cardboard. Books contain information, stories, or poetry, for example.


A book is a published work of literature or scholarship; the term has been defined by UNESCO for statistical purposes as a “non-periodical printed publication of at least 49 pages excluding covers,” but no strict definition satisfactorily covers the variety of publications so identified.

Although the form, content, and provisions for making books have varied widely during their long history, some constant characteristics may be identified. The most obvious is that a book is designed to serve as an instrument of communication—the purpose of such diverse forms as the Babylonian clay tablet, the Egyptian papyrus roll, the medieval vellum or parchment codex, the printed paper codex (most familiar in modern times), microfilm, and various other media and combinations. The second characteristic of the book is its use of writing or some other system of visual symbols (such as pictures or musical notation) to convey meaning. A third distinguishing feature is publication for tangible circulation. A temple column with a message carved on it is not a book nor is a sign or placard, which, though it may be easy enough to transport, is made to attract the eye of the passerby from a fixed location. Nor are private documents considered books. A book may be defined, therefore, as a written (or printed) message of considerable length, meant for public circulation and recorded on materials that are light yet durable enough to afford comparatively easy portability. Its primary purpose is to announce, expound, preserve, and transmit knowledge and information between people, depending on the twin faculties of portability and permanence. Books have attended the preservation and dissemination of knowledge in every literate society.

The papyrus roll of ancient Egypt is more nearly the direct ancestor of the modern book than is the clay tablet of the ancient Sumerians, Babylonians, Assyrians, and Hittites; examples of both date from about 3000 BC.

The Chinese independently created an extensive scholarship based on books, though not so early as the Sumerians and the Egyptians. Primitive Chinese books were made of wood or bamboo strips bound together with cords. The emperor Shih Huang Ti attempted to blot out publishing by burning books in 213 BC, but the tradition of book scholarship was nurtured under the Han dynasty (206 BC to AD 220). The survival of Chinese texts was assured by continuous copying. In AD 175, Confucian texts began to be carved into stone tablets and preserved by rubbings. Lampblack ink was introduced in China in AD 400 and printing from wooden blocks in the 6th century.

The Greeks adopted the papyrus roll and passed it on to the Romans. The vellum or parchment codex, which had superseded the roll by AD 400, was a revolutionary change in the form of the book. The codex introduced several advantages: a series of pages could be opened to any point in the text, both sides of the leaf could carry the message, and longer texts could be bound in a single volume. The medieval vellum or parchment leaves were prepared from the skins of animals. By the 15th century paper manuscripts were common. During the Middle Ages, monasteries characteristically had libraries and scriptoria, places in which scribes copied books. The manuscript books of the Middle Ages, the models for the first printed books, were affected by the rise of Humanism and the growing interest in vernacular languages in the 14th and 15th centuries.

The spread of printing was rapid in the second half of the 15th century; the printed books of that period are known as incunabula. The book made possible a revolution in thought and scholarship that became evident by the 16th century: the sources lay in the capacity of the press to multiply copies, to complete editions, and to reproduce a uniform graphic design along new conventional patterns that made the printed volume differ in appearance from the handwritten book. Other aspects of the printing revolution—cultural change associated with concentration on visual communication as contrasted to the oral modes of earlier times—have been emphasized by Marshall McLuhan.

In the 17th century books were generally inferior in appearance to the best examples of the art of the book in the 16th. There was a great expansion in the reading public in the 17th and 18th centuries in the West, in part because of the increasing literacy of women. Type designs were advanced. The lithographic process of printing illustrations, discovered at the end of the 18th century, was significant because it became the basis for offset printing.

In the 19th century the mechanization of printing provided the means for meeting the increased demand for books in industrialized societies. William Morris, in an effort to renew a spirit of craftsmanship, started the private press movement late in the 19th century. In the 20th century the book maintained a role of cultural ascendancy, although challenged by new media for dissemination of knowledge and its storage and retrieval. The paperbound format proved successful not only for the mass marketing of books but also from the 1950s for books of less general appeal. After World War II, an increase in use of colour illustration, particularly in children’s books and textbooks, was an obvious trend, facilitated by the development of improved high-speed, offset printing.


A book is a medium for recording information in the form of writing or images, typically composed of many pages (made of papyrus, parchment, vellum, or paper) bound together and protected by a cover. It can also be a handwritten or printed work of fiction or nonfiction, usually on sheets of paper fastened or bound together within covers. The technical term for this physical arrangement is codex (plural, codices). In the history of hand-held physical supports for extended written compositions or records, the codex replaces its predecessor, the scroll. A single sheet in a codex is a leaf and each side of a leaf is a page.

As an intellectual object, a book is prototypically a composition of such great length that it takes a considerable investment of time to compose and a considerable investment of time to read. In a restricted sense, a book is a self-sufficient section or part of a longer composition, a usage reflecting the fact that, in antiquity, long works had to be written on several scrolls, and each scroll had to be identified by the book it contained. Each part of Aristotle's Physics is called a book. In an unrestricted sense, a book is the compositional whole of which such sections, whether called books, chapters, or parts, are parts.

The intellectual content in a physical book need not be a composition, nor even be called a book. Books can consist only of drawings, engravings or photographs, crossword puzzles or cut-out dolls. In a physical book, the pages can be left blank or can feature an abstract set of lines to support entries, such as in an account book, appointment book, autograph book, notebook, diary or sketchbook. Some physical books are made with pages thick and sturdy enough to support other physical objects, like a scrapbook or photograph album. Books may be distributed in electronic form as ebooks and other formats.

Although in ordinary academic parlance a monograph is understood to be a specialist academic work, rather than a reference work, on a scholarly subject, in library and information science monograph denotes more broadly any non-serial publication complete in one volume (book) or a finite number of volumes (even a novel like Proust's seven-volume In Search of Lost Time), in contrast to serial publications like a magazine, journal, or newspaper. An avid reader or collector of books is a bibliophile or, colloquially, a "bookworm". Books are traded at both regular stores and specialized bookstores, and people can read borrowed books, often for free, at libraries. Google has estimated that by 2010, approximately 130,000,000 titles had been published.

In some wealthier nations, the sale of printed books has decreased because of the increased usage of e-books. However, in most countries, printed books continue to outsell their digital counterparts due to many people still preferring to read in a traditional way. The 21st century has also seen a rapid rise in the popularity of audiobooks, which are recordings of books being read aloud.

The Latin word codex, meaning a book in the modern sense (bound and with separate leaves), originally meant 'block of wood'.



When writing systems were created in ancient civilizations, a variety of objects, such as stone, clay, tree bark, metal sheets, and bones, were used for writing; these are studied in epigraphy.


A tablet is a physically robust writing medium, suitable for casual transport and writing. Clay tablets were flattened and mostly dry pieces of clay that could be easily carried, and impressed with a stylus. They were used as a writing medium, especially for writing in cuneiform, throughout the Bronze Age and well into the Iron Age. Wax tablets were pieces of wood covered in a coating of wax thick enough to record the impressions of a stylus. They were the normal writing material in schools, in accounting, and for taking notes. They had the advantage of being reusable: the wax could be melted, and reformed into a blank.

The custom of binding several wax tablets together (Roman pugillares) is a possible precursor of modern bound (codex) books. The etymology of the word codex (block of wood) also suggests that it may have developed from wooden wax tablets.


Scrolls can be made from papyrus, a thick paper-like material made by weaving the stems of the papyrus plant, then pounding the woven sheet with a hammer-like tool until it is flattened. Papyrus was used for writing in Ancient Egypt, perhaps as early as the First Dynasty, although the first evidence is from the account books of King Neferirkare Kakai of the Fifth Dynasty (about 2400 BC). Papyrus sheets were glued together to form a scroll. Tree bark such as lime and other materials were also used.

According to Herodotus (History 5:58), the Phoenicians brought writing and papyrus to Greece around the 10th or 9th century BC. The Greek word for papyrus as writing material (biblion) and book (biblos) come from the Phoenician port town Byblos, through which papyrus was exported to Greece. From Greek we also derive the word tome, which originally meant a slice or piece and from there began to denote "a roll of papyrus". Tomus was used by the Latins with exactly the same meaning as volumen.

Whether made from papyrus, parchment, or paper, scrolls were the dominant form of book in the Hellenistic, Roman, Chinese, Hebrew, and Macedonian cultures. The Romans and Etruscans also made 'books' out of folded linen, called in Latin libri lintei, the only extant example of which is the Etruscan Liber Linteus. The more modern codex book format took over the Roman world by late antiquity, but the scroll format persisted much longer in Asia.


Isidore of Seville (died 636) explained the then-current relation between a codex, book, and scroll in his Etymologiae (VI.13): "A codex is composed of many books; a book is of one scroll. It is called codex by way of metaphor from the trunks (codex) of trees or vines, as if it were a wooden stock, because it contains in itself a multitude of books, as it were of branches". Modern usage differs.

A codex (in modern usage) is the first information repository that modern people would recognize as a "book": leaves of uniform size bound in some manner along one edge, and typically held between two covers made of some more robust material. The first written mention of the codex as a form of book is from Martial, in his Apophoreta CLXXXIV at the end of the first century, where he praises its compactness. However, the codex never gained much popularity in the pagan Hellenistic world, and only within the Christian community did it gain widespread use. This change happened gradually during the 3rd and 4th centuries, and the reasons for adopting the codex form of the book are several: the format is more economical, as both sides of the writing material can be used; and it is portable, searchable, and easy to conceal. A codex is much easier to read, to find a desired page in, and to flip through; a scroll is more awkward to use. The Christian authors may also have wanted to distinguish their writings from the pagan and Judaic texts written on scrolls. In addition, some metal books were made, which required smaller pages of metal instead of an impossibly long, unbending scroll of metal. A book can also be easily stored in more compact places, or side by side in a tight library or shelf space.


The fall of the Roman Empire in the 5th century AD saw the decline of the culture of ancient Rome. Papyrus became difficult to obtain due to lack of contact with Egypt, and parchment, which had been used for centuries, became the main writing material. Parchment is a material made from processed animal skin and used, mainly in the past, for writing on, especially in the Middle Ages. Parchment is most commonly made of calfskin, sheepskin, or goatskin. It was historically used for writing documents, notes, or the pages of a book, and first came into use around the 200s BC. Parchment is limed, scraped and dried under tension. It is not tanned, and is thus different from leather. This makes it more suitable for writing on, but leaves it very reactive to changes in relative humidity and makes it revert to rawhide if overly wet.

Monasteries carried on the Latin writing tradition in the Western Roman Empire. Cassiodorus, in the monastery of Vivarium (established around 540), stressed the importance of copying texts. St. Benedict of Nursia, in his Rule of Saint Benedict (completed around the middle of the 6th century) later also promoted reading. The Rule of Saint Benedict (Ch. XLVIII), which set aside certain times for reading, greatly influenced the monastic culture of the Middle Ages and is one of the reasons why the clergy were the predominant readers of books. The tradition and style of the Roman Empire still dominated, but slowly the peculiar medieval book culture emerged.

Before the invention and adoption of the printing press, almost all books were copied by hand, which made books expensive and comparatively rare. Smaller monasteries usually had only a few dozen books, medium-sized ones perhaps a few hundred. By the 9th century, larger collections held around 500 volumes, and even at the end of the Middle Ages the papal library in Avignon and the Paris library of the Sorbonne held only around 2,000 volumes.

The scriptorium of the monastery was usually located over the chapter house. Artificial light was forbidden for fear it might damage the manuscripts. There were five types of scribes:

* Calligraphers, who dealt in fine book production
* Copyists, who dealt with basic production and correspondence
* Correctors, who collated and compared a finished book with the manuscript from which it had been produced
* Illuminators, who painted illustrations
* Rubricators, who painted in the red letters

The bookmaking process was long and laborious. The parchment had to be prepared, then the unbound pages were planned and ruled with a blunt tool or lead, after which the text was written by the scribe, who usually left blank areas for illustration and rubrication. Finally, the book was bound by the bookbinder.

Different types of ink were known in antiquity, usually prepared from soot and gum, and later also from gall nuts and iron vitriol. This gave writing a brownish black color, but black or brown were not the only colors used. There are texts written in red or even gold, and different colors were used for illumination. For very luxurious manuscripts the whole parchment was colored purple, and the text was written on it with gold or silver (for example, Codex Argenteus).

Irish monks introduced spacing between words in the 7th century. This facilitated reading, as these monks tended to be less familiar with Latin. However, the use of spaces between words did not become commonplace before the 12th century. It has been argued that the use of spacing between words shows the transition from semi-vocalized reading into silent reading.

The first books used parchment or vellum (calfskin) for the pages. The book covers were made of wood and covered with leather. Because dried parchment tends to assume the form it had before processing, the books were fitted with clasps or straps. During the later Middle Ages, when public libraries appeared, up to the 18th century, books were often chained to a bookshelf or a desk to prevent theft. These chained books are called libri catenati.

At first, books were copied mostly in monasteries, one at a time. With the rise of universities in the 13th century, the manuscript culture of the time led to an increase in the demand for books, and a new system for copying books appeared. The books were divided into unbound leaves (pecia), which were lent out to different copyists, so the speed of book production was considerably increased. The system was maintained by secular stationers' guilds, which produced both religious and non-religious material.

Judaism has kept the art of the scribe alive up to the present. According to Jewish tradition, the Torah scroll placed in a synagogue must be written by hand on parchment and a printed book would not do, though the congregation may use printed prayer books and printed copies of the Scriptures are used for study outside the synagogue. A sofer "scribe" is a highly respected member of many Jewish communities.

Middle East

People of various religious (Jews, Christians, Zoroastrians, Muslims) and ethnic backgrounds (Syriac, Coptic, Persian, Arab, etc.) in the Middle East also produced and bound books in the Islamic Golden Age (mid 8th century to 1258), developing advanced techniques in Islamic calligraphy, miniatures, and bookbinding. A number of cities in the medieval Islamic world had book production centers and book markets. Yaqubi (died 897) says that in his time Baghdad had over a hundred booksellers. Book shops were often situated around the town's principal mosque, as in Marrakesh, Morocco, which has a street named Kutubiyyin ('booksellers' in English); the famous Koutoubia Mosque is so named because of its location on this street.

The medieval Muslim world also used a method of reproducing reliable copies of a book in large quantities known as check reading, in contrast to the traditional method of a single scribe producing only a single copy of a single manuscript. In the check reading method, only "authors could authorize copies, and this was done in public sessions in which the copyist read the copy aloud in the presence of the author, who then certified it as accurate." With this check-reading system, "an author might produce a dozen or more copies from a single reading," and with two or more readings, "more than one hundred copies of a single book could easily be produced." By using as writing material the relatively cheap paper instead of parchment or papyrus the Muslims, in the words of Pedersen "accomplished a feat of crucial significance not only to the history of the Islamic book, but also to the whole world of books".

Wood block printing

In woodblock printing, a relief image of an entire page was carved into blocks of wood, inked, and used to print copies of that page. This method originated in China, in the Han dynasty (before 220 AD), as a method of printing on textiles and later paper, and was widely used throughout East Asia. The oldest dated book printed by this method is The Diamond Sutra (868 AD). The method (called woodcut when used in art) arrived in Europe in the early 14th century. Books (known as block-books), as well as playing-cards and religious pictures, began to be produced by this method. Creating an entire book was a painstaking process, requiring a hand-carved block for each page, and the wood blocks tended to crack if stored for long. The monks or craftsmen who produced them were highly paid.

Movable type and incunabula

The Chinese inventor Bi Sheng made movable type of earthenware c. 1045, but there are no known surviving examples of his printing. Around 1450, in what is commonly regarded as an independent invention, Johannes Gutenberg invented movable type in Europe, along with innovations in casting the type based on a matrix and hand mould. This invention gradually made books less expensive to produce and more widely available.

Early printed books, single sheets and images which were created before 1501 in Europe are known as incunables or incunabula. "A man born in 1453, the year of the fall of Constantinople, could look back from his fiftieth year on a lifetime in which about eight million books had been printed, more perhaps than all the scribes of Europe had produced since Constantine founded his city in AD 330."

19th to 21st centuries

Steam-powered printing presses became popular in the early 19th century. These machines could print 1,100 sheets per hour, but workers could only set 2,000 letters per hour. Monotype and Linotype typesetting machines were introduced in the late 19th century. They could set more than 6,000 letters per hour and an entire line of type at once. There have been numerous improvements in the printing press. As well, the conditions for freedom of the press have been improved through the gradual relaxation of restrictive censorship laws. See also intellectual property, public domain, copyright. In the mid-20th century, European book production had risen to over 200,000 titles per year.
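The throughput figures quoted above make the 19th-century bottleneck easy to see in a quick back-of-the-envelope calculation: presswork had mechanized, but hand composition had not. The sketch below uses the letters-per-hour figures from the text; the ~2,500 letters per printed page is an illustrative assumption, not a figure from the source.

```python
# Comparing hand composition with machine typesetting, using the
# rates quoted above (letters set per hour).
HAND_RATE = 2_000         # letters/hour, hand composition
MACHINE_RATE = 6_000      # letters/hour, Monotype/Linotype
LETTERS_PER_PAGE = 2_500  # assumed average page, for illustration only

def hours_to_set(pages, rate):
    """Hours of typesetting labour needed for a book of `pages` pages."""
    return pages * LETTERS_PER_PAGE / rate

# A 300-page book:
print(hours_to_set(300, HAND_RATE))     # 375.0 hours by hand
print(hours_to_set(300, MACHINE_RATE))  # 125.0 hours by machine
```

Under these assumptions, machine composition cuts the typesetting labour for a 300-page book to a third, which is why the late-19th-century machines mattered as much as the faster presses themselves.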

Throughout the 20th century, libraries have faced an ever-increasing rate of publishing, sometimes called an information explosion. The advent of electronic publishing and the internet means that much new information is not printed in paper books, but is made available online through a digital library, on CD-ROM, in the form of ebooks or other online media. An on-line book is an ebook that is available online through the internet. Though many books are produced digitally, most digital versions are not available to the public, and there is no decline in the rate of paper publishing. There is an effort, however, to convert books that are in the public domain into a digital medium for unlimited redistribution and infinite availability. This effort is spearheaded by Project Gutenberg combined with Distributed Proofreaders. There have also been new developments in the process of publishing books. Technologies such as POD or "print on demand", which make it possible to print as few as one book at a time, have made self-publishing (and vanity publishing) much easier and more affordable. On-demand publishing has allowed publishers, by avoiding the high costs of warehousing, to keep low-selling books in print rather than declaring them out of print.

Indian manuscripts

A Goddess Saraswati image dated 132 AD, excavated from Kankali tila, depicts her holding a manuscript in her left hand, represented as a bound and tied palm leaf or birch bark manuscript. In India, bound manuscripts made of birch bark or palm leaf have existed side by side since antiquity. The text in palm leaf manuscripts was inscribed with a knife pen on rectangular cut and cured palm leaf sheets; colouring was then applied to the surface and wiped off, leaving the ink in the incised grooves. Each sheet typically had a hole through which a string could pass, and the sheets were tied together with a string to bind them like a book.

Mesoamerican codices

The codices of pre-Columbian Mesoamerica (Mexico and Central America) had the same form as the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century. Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the local amatl paper.

Modern manufacturing

The spine of the book is an important aspect in book design, especially in the cover design. When books are stacked up or stored on a shelf, the spine is often the only visible surface that contains information about the book. In stores, it is the details on the spine that attract a prospective buyer's attention first.

The methods used for the printing and binding of books continued fundamentally unchanged from the 15th century into the early 20th century. While there was more mechanization, a book printer in 1900 had much in common with Gutenberg. Gutenberg's invention was the use of movable metal types, assembled into words, lines, and pages and then printed by letterpress to create multiple copies. Modern paper books are printed on papers designed specifically for printed books. Traditionally, book papers are off-white or low-white papers (easier to read), are opaque to minimize the show-through of text from one side of the page to the other and are (usually) made to tighter caliper or thickness specifications, particularly for case-bound books. Different paper qualities are used depending on the type of book: Machine finished coated papers, woodfree uncoated papers, coated fine papers and special fine papers are common paper grades.

Today, the majority of books are printed by offset lithography. When a book is printed, the pages are laid out on the plate so that after the printed sheet is folded the pages will be in the correct sequence. Books tend to be manufactured nowadays in a few standard sizes. The sizes of books are usually specified as "trim size": the size of the page after the sheet has been folded and trimmed. The standard sizes result from sheet sizes (therefore machine sizes) which became popular 200 or 300 years ago, and have come to dominate the industry. British conventions in this regard prevail throughout the English-speaking world, except for the US. The European book manufacturing industry works to a completely different set of standards.
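The page layout described above, arranging pages on the plate so that they fall into the correct sequence after folding, is known as imposition. For the simplest case, a single-fold saddle-stitched booklet where each sheet carries four pages, the ordering can be sketched as below; this is an illustrative sketch of the general idea, not the scheme of any particular press or signature size.

```python
def booklet_imposition(n):
    """Return the page layout for a saddle-stitched booklet of n pages.

    Each sheet carries four pages: two on the front (outer) side and two
    on the back (inner) side. When the printed sheets are folded and
    nested, the pages read in the correct sequence 1..n.
    """
    if n % 4 != 0:
        raise ValueError("page count must be a multiple of 4")
    sheets = []
    for i in range(n // 4):
        front = (n - 2 * i, 1 + 2 * i)     # outer side: last-first pairing
        back = (2 + 2 * i, n - 1 - 2 * i)  # inner side: the facing pages
        sheets.append((front, back))
    return sheets

# An 8-page booklet needs two sheets:
print(booklet_imposition(8))
# [((8, 1), (2, 7)), ((6, 3), (4, 5))]
```

Folding the second sheet inside the first yields pages 1 through 8 in reading order, which is exactly the property the text describes: the printed sheet's layout is chosen so the fold, not the press order, produces the sequence.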



