Math Is Fun Forum

  Discussion about math, puzzles, games and fun.

#1901 2023-09-15 00:59:56

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1905) Clown

Gist

1) A clown is a fool, jester, or comedian in an entertainment (such as a play); specifically: (i) a grotesquely dressed comedy performer in a circus; (ii) a person who habitually jokes and plays the buffoon.

2) An entertainer who wears funny clothes, has a painted face, and makes people laugh by performing tricks and behaving in a silly way.

Summary

A clown is a familiar comic character of pantomime and circus, known by distinctive makeup and costume, ludicrous antics, and buffoonery, whose purpose is to induce hearty laughter. The clown, unlike the traditional fool or court jester, usually performs a set routine characterized by broad, graphic humour, absurd situations, and vigorous physical action.

The earliest ancestors of the clown flourished in ancient Greece—bald-headed, padded buffoons who performed as secondary figures in farces and mime, parodying the actions of more serious characters and sometimes pelting the spectators with nuts. The same clown appeared in the Roman mime, wearing a pointed hat and a motley patchwork robe and serving as the butt for all the tricks and abuse of his fellow actors.

Clowning was a general feature of the acts of medieval minstrels and jugglers, but the clown did not emerge as a professional comic actor until the late Middle Ages, when traveling entertainers sought to imitate the antics of the court jesters and the amateur fool societies, such as the Enfants sans Souci, who specialized in comic drama at festival times. The traveling companies of the Italian commedia dell’arte developed one of the most famous and durable clowns of all time, the Arlecchino, or Harlequin, some time in the latter half of the 16th century, spreading his fame throughout Europe. The Harlequin began as a comic valet, or zany, but soon developed into an acrobatic trickster, wearing a black domino mask and carrying a bat or noisy slapstick, with which he frequently belaboured the posteriors of his victims.

The English clown was descended from the Vice character of the medieval mystery plays, a buffoon and prankster who could sometimes deceive even the Devil. Among the first professional stage clowns were the famous William Kempe and Robert Armin, both of whom were connected with Shakespeare’s company. Traveling English actors of the 17th century were responsible for the introduction of stage clowns to Germany, among them such popular characters as Pickelherring, who remained a German favourite until the 19th century. Pickelherring and his confederates wore clown costumes that have hardly changed to this day: oversized shoes, waistcoats, and hats, with giant ruffs around their necks.

The traditional whiteface makeup of the clown is said to have been introduced with the character of Pierrot (or Pedrolino), the French clown with a bald head and flour-whitened face who first appeared during the latter part of the 17th century. First created as a butt for Harlequin, Pierrot was gradually softened and sentimentalized. The pantomimist Jean-Baptiste-Gaspard Deburau took on the character in the early 19th century and created the famous lovesick, pathetic clown, whose melancholy has since remained part of the clown tradition.

The earliest of the true circus clowns was Joseph Grimaldi, who first appeared in England in 1805. Grimaldi’s clown, affectionately called “Joey,” specialized in the classic physical tricks, tumbling, pratfalls, and slapstick beatings. In the 1860s a low-comedy buffoon appeared under the name of Auguste, who had a big nose, baggy clothes, large shoes, and untidy manners. He worked with a whiteface clown and always spoiled the latter’s trick by appearing at an inappropriate time to foul things up.

Grock (Adrien Wettach) was a famous whiteface pantomimist. His elaborate melancholy resembled that of Emmett Kelly, the American vagabond clown. Bill Irwin maintained the tradition in performances billed as “new vaudeville,” while Dario Fo, an Italian political playwright, carried the torch in a more dramatic context, through both his plays and his personal appearances.

The clown figure in motion pictures culminated in the immortal “little tramp” character of Charlie Chaplin, with his ill-fitting clothes, flat-footed walk, and winsome mannerisms.

Details

A clown is a person who performs physical comedy and arts in a state of open-mindedness, typically while wearing distinctive makeup or costuming and playfully inverting everyday social norms.

History

The most ancient clowns have been found in the Fifth Dynasty of Egypt, around 2400 BC. Unlike court jesters, clowns have traditionally served a socio-religious and psychological role, and traditionally the roles of priest and clown have been held by the same persons. Peter Berger writes, "It seems plausible that folly and fools, like religion and magic, meet some deeply rooted needs in human society." For this reason, clowning is often considered an important part of training as a physical performance discipline, partly because tricky subject matter can be dealt with, but also because it requires a high level of risk and play in the performer.

In anthropology, the term clown has been extended to comparable jester or fool characters in non-Western cultures. Societies in which such clowns have an important position are termed clown societies, and a clown character involved in a religious or ritual capacity is known as a ritual clown.

A Heyoka is an individual in Lakota and Dakota cultures who lives outside the constraints of normal cultural roles, playing the role of a backwards clown by doing everything in reverse. The Heyoka role is sometimes best filled by a Winkte.

Many native tribes have a history of clowning. The Canadian clowning method developed by Richard Pochinko and furthered by his former apprentice, Sue Morrison, combines European and Native American clowning techniques. In this tradition, masks are made of clay while the creator's eyes are closed. A mask is made for each direction of the medicine wheel. During this process, the clown creates a personal mythology that explores their personal experiences.

"Grimaldi was the first recognizable ancestor of the modern clown, sort of the Homo erectus of clown evolution. Before him, a clown may have worn make-up, but it was usually just a bit of rouge on the cheeks to heighten the sense of them being florid, funny drunks or rustic yokels. Grimaldi, however, suited up in bizarre, colorful costumes, stark white face paint punctuated by spots of bright red on his cheeks and topped with a blue mohawk. He was a master of physical comedy—he leapt in the air, stood on his head, fought himself in hilarious fisticuffs that had audiences rolling in the aisles—as well as of satire lampooning the absurd fashions of the day, comic impressions, and ribald songs."

—The History and Psychology of Clowns Being Scary, Smithsonian.

The circus clown tradition developed out of earlier comedic roles in theatre or Varieté shows during the 19th to mid 20th centuries. This recognizable character features outlandish costumes, distinctive makeup, colorful wigs, exaggerated footwear, and colorful clothing, with the style generally being designed to entertain large audiences.

The first mainstream clown role was portrayed by Joseph Grimaldi (who also created the traditional whiteface make-up design). In the early 1800s, he expanded the role of Clown in the harlequinade that formed part of British pantomimes, notably at the Theatre Royal, Drury Lane and the Sadler's Wells and Covent Garden theatres. He became so dominant on the London comic stage that harlequinade Clowns became known as "Joey", and both the nickname and Grimaldi's whiteface make-up design are still used by other clowns.

The comedy that clowns perform is usually in the role of a fool whose everyday actions and tasks become extraordinary—and for whom the ridiculous, for a short while, becomes ordinary. This style of comedy has a long history in many countries and cultures across the world. Some writers have argued that, given the widespread use of such comedy and its long history, it meets a need that is part of the human condition.

The modern clowning school of comedy in the 21st century diverged from white-face clown tradition, with more of an emphasis on personal vulnerability.

Origin

The clown character developed out of the zanni rustic fool characters of the early modern commedia dell'arte, which were themselves directly based on the rustic fool characters of ancient Greek and Roman theatre. Rustic buffoon characters in Classical Greek theatre were known as sklêro-paiktês (from paizein, to play (like a child)) or deikeliktas, besides other generic terms for rustic or peasant. In Roman theatre, a term for clown was fossor, literally 'digger' or 'labourer'.

The English word clown was first recorded c. 1560 (as clowne, cloyne) in the generic meaning rustic, boor, peasant. The origin of the word is uncertain, perhaps from a Scandinavian word cognate with clumsy. It is in this sense that Clown is used as the name of fool characters in Shakespeare's Othello and The Winter's Tale. The sense of clown as referring to a professional or habitual fool or jester developed soon after 1600, based on Elizabethan rustic fool characters such as Shakespeare's.

The harlequinade developed in England in the 17th century, inspired by Arlecchino and the commedia dell'arte. It was here that Clown came into use as the given name of a stock character. Originally a foil for Harlequin's slyness and adroit nature, Clown was a buffoon or bumpkin fool who resembled less a jester than a comical idiot. He was a lower class character dressed in tattered servants' garb.

The now-classical features of the clown character were developed in the early 1800s by Joseph Grimaldi, who played Clown in Charles Dibdin's 1800 pantomime Peter Wilkins: or Harlequin in the Flying World at Sadler's Wells Theatre, where Grimaldi built the character up into the central figure of the harlequinade.

Modern circuses

The circus clown developed in the 19th century. The modern circus derives from Philip Astley's London riding school, which opened in 1768. Astley added a clown to his shows to amuse the spectators between equestrian sequences. American comedian George L. Fox became known for his clown role, directly inspired by Grimaldi, in the 1860s. Tom Belling senior (1843–1900) developed the red clown or Auguste (Dummer August) character c. 1870, acting as a foil for the more sophisticated white clown. Belling worked for Circus Renz in Vienna. Belling's costume became the template for the modern stock character of circus or children's clown, based on a lower class or hobo character, with red nose, white makeup around the eyes and mouth, and oversized clothes and shoes. The clown character as developed by the late 19th century is reflected in Ruggero Leoncavallo's 1892 opera Pagliacci (Clowns). Belling's Auguste character was further popularized by Nicolai Poliakoff's Coco in the 1920s to 1930s.

The English word clown was borrowed, along with the circus clown act, by many other languages, such as French clown, Russian клоун (and similar forms in other Slavic languages), Greek κλόουν, Danish/Norwegian klovn, Romanian clovn, etc.

Italian retains Pagliaccio, a commedia dell'arte zanni character, and derivations of the Italian term are found in other languages, such as French Paillasse, Spanish payaso, Catalan/Galician pallasso, Portuguese palhaço, Greek παλιάτσος, Turkish palyaço, German Pajass (via French), Yiddish payats, Russian payats (паяц), and Romanian paiață.

20th-century North America

In the early 20th century, with the disappearance of the rustic simpleton or village idiot character of everyday experience, North American circuses developed characters such as the tramp or hobo. Examples include Marceline Orbes, who performed at the Hippodrome Theater (1905), Charlie Chaplin's The Tramp (1914), and Emmett Kelly's Weary Willie, based on hobos of the Depression era. Another influential tramp character was played by Otto Griebling during the 1930s to 1950s. Red Skelton's Dodo the Clown in The Clown (1953) depicts the circus clown as a tragicomic stock character, "a funny man with a drinking problem".

In the United States, Bozo the Clown has been an influential Auguste character since the late 1950s. The Bozo Show premiered in 1960 and appeared nationally on cable television in 1978. McDonald's derived its mascot clown, Ronald McDonald, from the Bozo character in the 1960s. Willard Scott, who had played Bozo during 1959–1962, performed as the mascot in 1963 television spots. The McDonald's trademark application for the character dates to 1967.

Based on the Bozo template, the US custom of the birthday clown, a private contractor who offers to perform as a clown at children's parties, developed in the 1960s to 1970s. The strong association of the (Bozo-derived) clown character with children's entertainment as it has developed since the 1960s also gave rise to Clown Care, or hospital clowning, in children's hospitals by the mid-1980s. Clowns of America International (established 1984) and the World Clown Association (established 1987) are associations of semi-professional and professional performers.

The shift of the Auguste or red clown character from his role as a foil for the white clown in circus or pantomime shows to a Bozo-derived standalone character in children's entertainment by the 1980s also gave rise to the evil clown character, playing on the fundamentally threatening or frightening quality that clowns can hold for small children. The fear of clowns, particularly circus clowns, has become known by the term "coulrophobia."



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1902 2023-09-16 00:02:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1906) Welding

Gist

Welding is a fabrication process whereby two or more parts are fused together by means of heat, pressure, or both, forming a join as the parts cool. Welding is usually used on metals and thermoplastics, but can also be used on wood. The completed welded joint may be referred to as a weldment.

Summary

Welding is a fabrication process that joins materials, usually metals or thermoplastics, by using high heat to melt the parts together and allowing them to cool, causing fusion. Welding is distinct from lower temperature techniques such as brazing and soldering, which do not melt the base metal (parent metal).

In addition to melting the base metal, a filler material is typically added to the joint to form a pool of molten material (the weld pool) that cools to form a joint that, based on weld configuration (butt, full penetration, fillet, etc.), can be stronger than the base material. Pressure may also be used in conjunction with heat or by itself to produce a weld. Welding also requires a form of shield to protect the filler metals or melted metals from being contaminated or oxidized.

Many different energy sources can be used for welding, including a gas flame (chemical), an electric arc (electrical), a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding may be performed in many different environments, including in open air, under water, and in outer space. Welding is a hazardous undertaking and precautions are required to avoid burns, electric shock, vision damage, inhalation of poisonous gases and fumes, and exposure to intense ultraviolet radiation.

Until the end of the 19th century, the only welding process was forge welding, which blacksmiths had used for millennia to join iron and steel by heating and hammering. Arc welding and oxy-fuel welding were among the first processes to develop late in the century, and electric resistance welding followed soon after. Welding technology advanced quickly during the early 20th century as world wars drove the demand for reliable and inexpensive joining methods. Following the wars, several modern welding techniques were developed, including manual methods like shielded metal arc welding, now one of the most popular welding methods, as well as semi-automatic and automatic processes such as gas metal arc welding, submerged arc welding, flux-cored arc welding and electroslag welding. Developments continued with the invention of laser beam welding, electron beam welding, magnetic pulse welding, and friction stir welding in the latter half of the century. Today, as the science continues to advance, robot welding is commonplace in industrial settings, and researchers continue to develop new welding methods and gain greater understanding of weld quality.

History

The history of joining metals goes back several millennia. The earliest examples of this come from the Bronze and Iron Ages in Europe and the Middle East. The ancient Greek historian Herodotus states in The Histories of the 5th century BC that Glaucus of Chios "was the man who single-handedly invented iron welding". Welding was used in the construction of the Iron pillar of Delhi, erected in Delhi, India about 310 AD and weighing 5.4 metric tons.

The Middle Ages brought advances in forge welding, in which blacksmiths pounded heated metal repeatedly until bonding occurred. In 1540, Vannoccio Biringuccio published De la pirotechnia, which includes descriptions of the forging operation. Renaissance craftsmen were skilled in the process, and the industry continued to grow during the following centuries.

In 1800, Sir Humphry Davy discovered the short-pulse electrical arc and presented his results in 1801. In 1802, Russian scientist Vasily Petrov created the continuous electric arc, and subsequently published "News of Galvanic-Voltaic Experiments" in 1803, in which he described experiments carried out in 1802. Of great importance in this work was the description of a stable arc discharge and the indication of its possible use for many applications, one being melting metals. In 1808, Davy, who was unaware of Petrov's work, rediscovered the continuous electric arc. In 1881–82 inventors Nikolai Benardos (Russian) and Stanisław Olszewski (Polish) created the first electric arc welding method known as carbon arc welding using carbon electrodes. The advances in arc welding continued with the invention of metal electrodes in the late 1800s by a Russian, Nikolai Slavyanov (1888), and an American, C. L. Coffin (1890). Around 1900, A. P. Strohmenger released a coated metal electrode in Britain, which gave a more stable arc. In 1905, Russian scientist Vladimir Mitkevich proposed using a three-phase electric arc for welding. Alternating current welding was invented by C. J. Holslag in 1919, but did not become popular for another decade.

Resistance welding was also developed during the final decades of the 19th century, with the first patents going to Elihu Thomson in 1885, who produced further advances over the next 15 years. Thermite welding was invented in 1893, and around that time another process, oxyfuel welding, became well established. Acetylene was discovered in 1836 by Edmund Davy, but its use was not practical in welding until about 1900, when a suitable torch was developed. At first, oxyfuel welding was one of the more popular welding methods due to its portability and relatively low cost. As the 20th century progressed, however, it fell out of favor for industrial applications. It was largely replaced with arc welding, as advances in metal coverings (known as flux) were made. Flux covering the electrode primarily shields the base material from impurities, but also stabilizes the arc and can add alloying components to the weld metal.

World War I caused a major surge in the use of welding, with the various military powers attempting to determine which of the several new welding processes would be best. The British primarily used arc welding, even constructing a ship, the Fullagar, with an entirely welded hull. Arc welding was first applied to aircraft during the war as well, as some German airplane fuselages were constructed using the process. Also noteworthy is the first welded road bridge in the world, the Maurzyce Bridge in Poland (1928).

During the 1920s, significant advances were made in welding technology, including the introduction of automatic welding in 1920, in which electrode wire was fed continuously. Shielding gas became a subject receiving much attention, as scientists attempted to protect welds from the effects of oxygen and nitrogen in the atmosphere. Porosity and brittleness were the primary problems, and the solutions that developed included the use of hydrogen, argon, and helium as welding atmospheres. During the following decade, further advances allowed for the welding of reactive metals like aluminum and magnesium. This, in conjunction with developments in automatic welding, alternating current, and fluxes, fed a major expansion of arc welding during the 1930s and then during World War II. In 1930, the first all-welded merchant vessel, M/S Carolinian, was launched.

During the middle of the century, many new welding methods were invented. In 1930, Kyle Taylor was responsible for the release of stud welding, which soon became popular in shipbuilding and construction. Submerged arc welding was invented the same year and continues to be popular today. In 1932, the Russian Konstantin Khrenov implemented the first underwater electric arc welding. Gas tungsten arc welding, after decades of development, was finally perfected in 1941, and gas metal arc welding followed in 1948, allowing for fast welding of non-ferrous materials but requiring expensive shielding gases. Shielded metal arc welding was developed during the 1950s, using a flux-coated consumable electrode, and it quickly became the most popular metal arc welding process. In 1957, the flux-cored arc welding process debuted, in which the self-shielded wire electrode could be used with automatic equipment, resulting in greatly increased welding speeds, and that same year, plasma arc welding was invented by Robert Gage. Electroslag welding was introduced in 1958, and it was followed by its cousin, electrogas welding, in 1961. In 1953, the Soviet scientist N. F. Kazakov proposed the diffusion bonding method.

Other recent developments in welding include the 1958 breakthrough of electron beam welding, making deep and narrow welding possible through the concentrated heat source. Following the invention of the laser in 1960, laser beam welding debuted several decades later, and has proved to be especially useful in high-speed, automated welding. Magnetic pulse welding (MPW) has been industrially used since 1967. Friction stir welding was invented in 1991 by Wayne Thomas at The Welding Institute (TWI, UK) and has found high-quality applications all over the world. All four of these newer processes continue to be quite expensive due to the high cost of the necessary equipment, and this has limited their applications.

Details

Welding is a technique used for joining metallic parts usually through the application of heat. This technique was discovered during efforts to manipulate iron into useful shapes. Welded blades were developed in the 1st millennium CE, the most famous being those produced by Arab armourers at Damascus, Syria. The process of carburization of iron to produce hard steel was known at this time, but the resultant steel was very brittle. The welding technique—which involved interlayering relatively soft and tough iron with high-carbon material, followed by hammer forging—produced a strong, tough blade.

In modern times the improvement in iron-making techniques, especially the introduction of cast iron, restricted welding to the blacksmith and the jeweler. Other joining techniques, such as fastening by bolts or rivets, were widely applied to new products, from bridges and railway engines to kitchen utensils.

Modern fusion welding processes are an outgrowth of the need to obtain a continuous joint on large steel plates. Riveting had been shown to have disadvantages, especially for an enclosed container such as a boiler. Gas welding, arc welding, and resistance welding all appeared at the end of the 19th century. The first real attempt to adopt welding processes on a wide scale was made during World War I. By 1916 the oxyacetylene process was well developed, and the welding techniques employed then are still used. The main improvements since then have been in equipment and safety. Arc welding, using a consumable electrode, was also introduced in this period, but the bare wires initially used produced brittle welds. A solution was found by wrapping the bare wire with asbestos and an entwined aluminum wire. The modern electrode, introduced in 1907, consists of a bare wire with a complex coating of minerals and metals. Arc welding was not universally used until World War II, when the urgent need for rapid means of construction for shipping, power plants, transportation, and structures spurred the necessary development work.

Resistance welding, invented in 1877 by Elihu Thomson, was accepted long before arc welding for spot and seam joining of sheet. Butt welding for chain making and joining bars and rods was developed during the 1920s. In the 1940s the tungsten-inert gas process, using a nonconsumable tungsten electrode to perform fusion welds, was introduced. In 1948 a new gas-shielded process utilized a wire electrode that was consumed in the weld. More recently, electron-beam welding, laser welding, and several solid-phase processes such as diffusion bonding, friction welding, and ultrasonic joining have been developed.

Basic principles of welding

A weld can be defined as a coalescence of metals produced by heating to a suitable temperature with or without the application of pressure, and with or without the use of a filler material.

In fusion welding a heat source generates sufficient heat to create and maintain a molten pool of metal of the required size. The heat may be supplied by electricity or by a gas flame. Electric resistance welding can be considered fusion welding because some molten metal is formed.

Solid-phase processes produce welds without melting the base material and without the addition of a filler metal. Pressure is always employed, and generally some heat is provided. Frictional heat is developed in ultrasonic and friction joining, and furnace heating is usually employed in diffusion bonding.

The electric arc used in welding is a high-current, low-voltage discharge generally in the range 10–2,000 amperes at 10–50 volts. An arc column is complex but, broadly speaking, consists of a cathode that emits electrons, a gas plasma for current conduction, and an anode region that becomes comparatively hotter than the cathode due to electron bombardment. A direct current (DC) arc is usually used, but alternating current (AC) arcs can be employed.

Total energy input in all welding processes exceeds that which is required to produce a joint, because not all the heat generated can be effectively utilized. Efficiencies vary from 60 to 90 percent, depending on the process; some special processes deviate widely from this figure. Heat is lost by conduction through the base metal and by radiation to the surroundings.
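
As a rough illustration of these figures, here is a minimal sketch (in Python, not from the text) computing arc power and the effective heat input per unit length of weld from the standard relation: heat input = (volts × amperes × efficiency) ÷ travel speed. The voltage, current, efficiency, and travel speed below are hypothetical example values.

    # Minimal sketch: arc power and effective heat input per millimetre of weld.
    # All numbers below are hypothetical; "efficiency" is the 60-90% fraction of
    # arc energy that actually reaches the joint (the rest is lost as described).

    def heat_input_kj_per_mm(volts, amps, efficiency, travel_mm_per_s):
        arc_power_w = volts * amps              # electrical power of the arc, in watts
        effective_w = arc_power_w * efficiency  # power actually delivered to the work
        return effective_w / travel_mm_per_s / 1000.0

    # A hypothetical manual arc weld: 24 V, 150 A, 75% efficiency, 4 mm/s travel
    print(heat_input_kj_per_mm(24, 150, 0.75, 4.0))  # -> 0.675 kJ/mm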

Most metals, when heated, react with the atmosphere or other nearby metals. These reactions can be extremely detrimental to the properties of a welded joint. Most metals, for example, rapidly oxidize when molten. A layer of oxide can prevent proper bonding of the metal. Molten-metal droplets coated with oxide become entrapped in the weld and make the joint brittle. Some valuable materials added for specific properties react so quickly on exposure to the air that the metal deposited does not have the same composition as it had initially. These problems have led to the use of fluxes and inert atmospheres.

In fusion welding the flux has a protective role in facilitating a controlled reaction of the metal and then preventing oxidation by forming a blanket over the molten material. Fluxes can be active and help in the process or inactive and simply protect the surfaces during joining.

Inert atmospheres play a protective role similar to that of fluxes. In gas-shielded metal-arc and gas-shielded tungsten-arc welding an inert gas—usually argon—flows from an annulus surrounding the torch in a continuous stream, displacing the air from around the arc. The gas does not chemically react with the metal but simply protects it from contact with the oxygen in the air.

The metallurgy of metal joining is important to the functional capabilities of the joint. The arc weld illustrates all the basic features of a joint. Three zones result from the passage of a welding arc: (1) the weld metal, or fusion zone, (2) the heat-affected zone, and (3) the unaffected zone. The weld metal is that portion of the joint that has been melted during welding. The heat-affected zone is a region adjacent to the weld metal that has not been welded but has undergone a change in microstructure or mechanical properties due to the heat of welding. The unaffected material is that which was not heated sufficiently to alter its properties.

Weld-metal composition and the conditions under which it freezes (solidifies) significantly affect the ability of the joint to meet service requirements. In arc welding, the weld metal comprises filler material plus the base metal that has melted. After the arc passes, rapid cooling of the weld metal occurs. A one-pass weld has a cast structure with columnar grains extending from the edge of the molten pool to the centre of the weld. In a multipass weld, this cast structure may be modified, depending on the particular metal that is being welded.

The base metal adjacent to the weld, or the heat-affected zone, is subjected to a range of temperature cycles, and its change in structure is directly related to the peak temperature at any given point, the time of exposure, and the cooling rates. The types of base metal are too numerous to discuss here, but they can be grouped in three classes: (1) materials unaffected by welding heat, (2) materials hardened by structural change, (3) materials hardened by precipitation processes.

Welding produces stresses in materials. These forces are induced by contraction of the weld metal and by expansion and then contraction of the heat-affected zone. The unheated metal imposes a restraint on the above, and as contraction predominates, the weld metal cannot contract freely, and a stress is built up in the joint. This is generally known as residual stress, and for some critical applications must be removed by heat treatment of the whole fabrication. Residual stress is unavoidable in all welded structures, and if it is not controlled bowing or distortion of the weldment will take place. Control is exercised by welding technique, jigs and fixtures, fabrication procedures, and final heat treatment.

There are a wide variety of welding processes. Several of the most important are discussed below.

Forge welding

This original fusion technique dates from the earliest uses of iron. The process was first employed to make small pieces of iron into larger useful pieces by joining them. The parts to be joined were first shaped, then heated to welding temperature in a forge and finally hammered or pressed together. The Damascus sword, for example, consisted of wrought-iron bars hammered until thin, doubled back on themselves, and then rehammered to produce a forged weld. The larger the number of times this process was repeated, the tougher the sword that was obtained. In the Middle Ages cannons were made by welding together several iron bands, and bolts tipped with steel fired from crossbows were fabricated by forge welding. Forge welding has mainly survived as a blacksmith’s craft and is still used to some extent in chain making.

Arc welding

Shielded metal-arc welding accounts for the largest total volume of welding today. In this process an electric arc is struck between the metallic electrode and the workpiece. Tiny globules of molten metal are transferred from the metal electrode to the weld joint. Since arc welding can be done with either alternating or direct current, some welding units accommodate both for wider application. A holder or clamping device with an insulated handle is used to conduct the welding current to the electrode. A return circuit to the power source is made by means of a clamp to the workpiece.

Gas-shielded arc welding, in which the arc is shielded from the air by an inert gas such as argon or helium, has become increasingly important because it can deposit more material at a higher efficiency and can be readily automated. The tungsten electrode version finds its major applications in highly alloyed sheet materials. Either direct or alternating current is used, and filler metal is added either hot or cold into the arc. Consumable electrode gas-metal arc welding with a carbon dioxide shielding gas is widely used for steel welding. Two processes known as spray arc and short-circuiting arc are utilized. Metal transfer is rapid, and the gas protection ensures a tough weld deposit.

Submerged arc welding is similar to the above except that the gas shield is replaced with a granulated mineral material as a flux, which is mounded around the electrode so that no arc is visible.

Plasma welding is an arc process in which a hot plasma is the source of heat. It has some similarity to gas-shielded tungsten-arc welding, the main advantages being greater energy concentration, improved arc stability, and easier operator control. Better arc stability means less sensitivity to joint alignment and arc length variation. In most plasma welding equipment, a secondary arc must first be struck to create an ionized gas stream and permit the main arc to be started. This secondary arc may utilize either a high-frequency or a direct contact start. Water cooling is used because of the high energies forced through a small orifice. The process is amenable to mechanization, and rapid production rates are possible.

Thermochemical processes

One such process is gas welding. It once ranked as equal in importance to the metal-arc welding processes but is now confined to a specialized area of sheet fabrication and is probably used as much by artists as in industry. Gas welding is a fusion process with heat supplied by burning acetylene in oxygen to provide an intense, closely controlled flame. Metal is added to the joint in the form of a cold filler wire. A neutral or reducing flame is generally desirable to prevent base-metal oxidation. By deft craftsmanship very good welds can be produced, but welding speeds are very low. Fluxes aid in preventing oxide contamination of the joint.

Another thermochemical process is aluminothermic (thermite) joining. It has been successfully used for both ferrous and nonferrous metals but is more frequently used for the former. A mixture of finely divided aluminum and iron oxide is ignited to produce a superheated liquid metal at about 2,800 °C (5,000 °F). The reaction is completed in 30 seconds to 2 minutes regardless of the size of the charge. The process is suited to joining sections with large, compact cross sections, such as rectangles and rounds. A mold is used to contain the liquid metal.
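
For reference (standard chemistry, not stated above): the underlying reaction with iron oxide is the classic thermite reaction, Fe2O3 + 2Al → 2Fe + Al2O3, which releases roughly 850 kJ of heat per mole of iron(III) oxide, enough to deliver the resulting iron as a superheated liquid.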

Resistance welding

Spot, seam, and projection welding are resistance welding processes in which the required heat for joining is generated at the interface by the electrical resistance of the joint. Welds are made in a relatively short time (typically 0.2 seconds) using a low-voltage, high-current power source with force applied to the joint through two electrodes, one on each side. Spot welds are made at regular intervals on sheet metal that has an overlap. Joint strength depends on the number and size of the welds. Seam welding is a continuous process wherein the electric current is successively pulsed into the joint to form a series of overlapping spots or a continuous seam. This process is used to weld containers or structures where spot welding is insufficient. A projection weld is formed when one of the parts to be welded in the resistance machine has been dimpled or pressed to form a protuberance that is melted down during the weld cycle. The process allows a number of predetermined spots to be welded at one time. All of these processes are capable of very high rates of production with continuous quality control. The most modern equipment in resistance welding includes complete feedback control systems to self-correct any weld that does not meet the desired specifications.
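
As a rough worked example of the resistance-heating principle (illustrative figures only, not taken from the text): the heat generated at the interface follows Joule's law, Q = I² × R × t. A hypothetical spot weld passing 10,000 amperes through a contact resistance of 100 microohms (0.0001 ohm) for 0.2 seconds develops Q = (10,000)² × 0.0001 × 0.2 = 2,000 joules, i.e. about 2 kJ concentrated at the joint in a fifth of a second.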

Flash welding is a resistance welding process where parts to be joined are clamped, the ends brought together slowly and then drawn apart to cause an arc or flash. Flashing or arcing is continued until the entire area of the joint is heated; the parts are then forced together and pressure maintained until the joint is formed and cooled.

Low- and high-frequency resistance welding is used for the manufacture of tubing. The longitudinal joint in a tube is formed from metal squeezed into shape with edges abutted. Welding heat is governed by the current passing through the work and the speed at which the tube goes through the rolls. Welding speeds of 60 metres (200 feet) per minute are possible in this process.

Electron-beam welding

In electron-beam welding, the workpiece is bombarded with a dense stream of high-velocity electrons. The energy of these electrons is converted to heat upon impact. A beam-focusing device is included, and the workpiece is usually placed in an evacuated chamber to allow uninterrupted electron travel. Heating is so intense that the beam almost instantaneously vaporizes a hole through the joint. Extremely narrow deep-penetration welds can be produced using very high voltages (up to 150 kilovolts); a weld in material 13 mm (0.5 inch) thick, for example, may be only 1 mm (0.04 inch) wide. Workpieces are positioned accurately by an automatic traverse device. Typical welding speeds are 125 to 250 cm (50 to 100 inches) per minute.

Cold welding

Cold welding, the joining of materials without the use of heat, can be accomplished simply by pressing them together. Surfaces have to be well prepared, and pressure sufficient to produce 35 to 90 percent deformation at the joint is necessary, depending on the material. Lapped joints in sheets and cold-butt welding of wires constitute the major applications of this technique. Pressure can be applied by punch presses, rolling stands, or pneumatic tooling. Pressures of 1,400,000 to 2,800,000 kilopascals (200,000 to 400,000 pounds per square inch) are needed to produce a joint in aluminum; almost all other metals need higher pressures.

Friction welding

In friction welding two workpieces are brought together under load with one part rapidly revolving. Frictional heat is developed at the interface until the material becomes plastic, at which time the rotation is stopped and the load is increased to consolidate the joint. A strong joint results with the plastic deformation, and in this sense the process may be considered a variation of pressure welding. The process is self-regulating, for, as the temperature at the joint rises, the friction coefficient is reduced and overheating cannot occur. The machines are almost like lathes in appearance. Speed, force, and time are the main variables. The process has been automated for the production of axle casings in the automotive industry.

Laser welding

Laser welding is accomplished when the light energy emitted from a laser source is focused upon a workpiece to fuse materials together. The limited availability of lasers of sufficient power for most welding purposes has so far restricted the use of the process in this area. Another difficulty is that the speed and the thickness that can be welded are controlled not so much by power but by the thermal conductivity of the metals and by the avoidance of metal vaporization at the surface. Particular applications of the process with very thin materials up to 0.5 mm (0.02 inch) have, however, been very successful. The process is useful in the joining of miniaturized electrical circuitry.

Diffusion bonding

This type of bonding relies on the effect of applied pressure at an elevated temperature for an appreciable period of time. Generally, the pressure applied must be less than that necessary to cause 5 percent deformation so that the process can be applied to finished machine parts. The process has been used most extensively in the aerospace industries for joining materials and shapes that otherwise could not be made—for example, multiple-finned channels and honeycomb construction. Steel can be diffusion bonded at above 1,000 °C (1,800 °F) in a few minutes.

Ultrasonic welding

Ultrasonic joining is achieved by clamping the two pieces to be welded between an anvil and a vibrating probe or sonotrode. The vibration raises the temperature at the interface and produces the weld. The main variables are the clamping force, power input, and welding time. A weld can be made in 0.005 second on thin wires and up to 1 second with material 1.3 mm (0.05 inch) thick. Spot welds and continuous seam welds are made with good reliability. Applications include extensive use on lead bonding to integrated circuitry, transistor canning, and aluminum can bodies.

Explosive welding

Explosive welding takes place when two plates are impacted together under an explosive force at high velocity. The lower plate is laid on a firm surface, such as a heavier steel plate. The upper plate is placed carefully at an angle of approximately 5° to the lower plate with a sheet of explosive material on top. The charge is detonated from the hinge of the two plates, and a weld takes place in microseconds by very rapid plastic deformation of the material at the interface. A completed weld has the appearance of waves at the joint caused by a jetting action of metal between the plates.

Weldability of metals

Carbon and low-alloy steels are by far the most widely used materials in welded construction. Carbon content largely determines the weldability of plain carbon steels; at above 0.3 percent carbon some precautions have to be taken to ensure a sound joint. Low-alloy steels are generally regarded as those having a total alloying content of less than 6 percent. There are many grades of steel available, and their relative weldability varies.
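
One common way to quantify this is the IIW carbon-equivalent formula, sketched below in Python as an illustration; the threshold and the example composition are typical guidance values I am supplying, not figures from the text.

    # IIW carbon equivalent for judging weldability of carbon/low-alloy steels.
    # Inputs are alloy contents in weight percent; all values here are illustrative.

    def carbon_equivalent(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
        return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

    # A hypothetical mild steel: 0.20% carbon, 1.20% manganese
    ce = carbon_equivalent(c=0.20, mn=1.20)
    print(round(ce, 2))  # -> 0.4; below about 0.40 usually welds without special precautions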

Aluminum and its alloys are also generally weldable. A very tenacious oxide film on aluminum tends to prevent good metal flow, however, and suitable fluxes are used for gas welding. Fusion welding is more effective with alternating current when using the gas-tungsten arc process to enable the oxide to be removed by the arc action.

Copper and its alloys are weldable, but the high thermal conductivity of copper makes welding difficult. Refractory metals such as zirconium, niobium, molybdenum, tantalum, and tungsten are usually welded by the gas-tungsten arc process. Nickel is among the most compatible materials for joining: it is weldable to itself and is extensively used in dissimilar-metal welding of steels, stainless steels, and copper alloys.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1903 2023-09-17 00:03:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1907) Call center

Gist

A call center is an office equipped to handle a large volume of telephone calls for an organization (such as a retailer, bank, or marketing firm), especially for taking orders or for providing customer service.

Details

A call center is a centralized department that handles inbound and outbound calls from current and potential customers. Call centers are located either within an organization or outsourced to another company that specializes in handling calls.

What is the difference between a call center and a contact center?

Call centers focus on one communication channel: the telephone. Contact centers provide support from additional channels, such as email, chat, websites and applications. A contact center may include one or more call centers.

Contact centers provide omnichannel support, assisting customers on whichever channel or device they use. Whether an organization chooses a call center or contact center depends on its products and services, the channels on which it provides customer support and the structures of the organization's support teams.

How do call centers work?

Online merchants, telemarketing companies, help desks, mail-order organizations, polling services, charities and any large organization that uses the telephone to sell products or offer services use call centers. These organizations also use call centers to enhance the customer experience (CX).

The three most common types of call centers are inbound, outbound and blended call centers.

Inbound call center. Typically, these call centers handle a considerable volume of calls simultaneously and then screen, forward and log the calls. An interactive voice response (IVR) system can answer calls and use speech recognition technology to address customer queries with an automated message or route calls to the appropriate call center agents or recipients through an automated call distributor (ACD).

Agents in an inbound call center may handle calls from current or potential customers regarding accounts management, scheduling, technical support, complaints, queries about products or services, or intent to purchase from the organization.

Outbound call center. In these call centers, an agent makes calls on behalf of the organization or client for tasks, including lead generation, telemarketing, customer retention, fundraising, surveying, collecting debts or scheduling appointments. To maximize efficiencies, an automated dialer can make the calls and then transfer them to an available agent using an IVR system after the caller connects. Outbound call centers must ensure compliance with the National Do Not Call Registry, a list to which citizens can add their phone numbers to avoid unwanted solicitation calls.

Blended call center. This type of call center handles both inbound and outbound calls.
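
The inbound routing idea described above (an IVR answers, then an ACD hands the call to a suitable free agent or queues it) can be sketched in a few lines of Python. This is a toy illustration, not any vendor's API; all class, field, and agent names are hypothetical.

    # Toy automated call distributor (ACD): route each call to an available
    # agent whose skills match the caller's need, otherwise queue it.
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        skills: set
        busy: bool = False

    class SimpleACD:
        def __init__(self, agents):
            self.agents = agents
            self.queue = deque()  # calls waiting for a matching agent

        def route(self, topic):
            for agent in self.agents:
                if not agent.busy and topic in agent.skills:
                    agent.busy = True
                    return f"call about '{topic}' -> {agent.name}"
            self.queue.append(topic)
            return f"call about '{topic}' queued (position {len(self.queue)})"

    acd = SimpleACD([Agent("Ana", {"billing", "orders"}), Agent("Ben", {"tech"})])
    print(acd.route("tech"))     # -> Ben answers
    print(acd.route("billing"))  # -> Ana answers
    print(acd.route("tech"))     # Ben is busy, so the call is queued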

Importance of call centers

Customers have high expectations for customer service. They want their issues addressed and handled quickly and efficiently. Organizations must have representatives available when customers call for service or support, and those with call centers can more effectively assist customers in need. Call centers can make an organization available 24/7 or during a time window that matches customer expectations.

Customer phone calls have value beyond customer service. With some products or services, phone calls are the only interactions organizations have with customers -- therefore, the only opportunity to personally connect with customers.

Types of call centers

Beyond inbound, outbound and blended, further classifications of call centers include the following:

* In-house call center. The organization owns and runs its call center and hires its agents.
* Outsourced call center. The organization hires a third party to handle calls on its behalf, generally to remove the burdens of hiring and training call center agents and investing in and updating call center technology, which can reduce operating costs.
* Offshore call center. The organization outsources its operations to a company in another country, often to save money on wages and provide services around the clock. Drawbacks to an offshore call center include reduced customer satisfaction due to language issues and a lack of knowledge about the organization, product or service due to distance.
* Virtual call center. The organization employs geographically dispersed agents who answer calls using cloud call center technology. Call center agents work either in smaller groups in different offices or in their own homes.

Call center teams and structure

Many different roles make up call center teams, including agents, team leaders and IT personnel.

Call center agents. Agents are the key point of contact between an organization and its customers, as agents talk directly with customers and handle their calls. Depending on the type of call center, agents may handle either incoming or outgoing calls. Call center agents typically have customer service skills, are knowledgeable about the organization and are creative problem-solvers.

Team leaders. Many call centers split agents into smaller groups for easier management. Team leaders help call center agents deescalate conversations, solve issues or answer questions from customers or the agents. In addition, team leaders should ensure call center agents are happy and fulfilled in their roles.

Call center directors. While team leaders run smaller teams, call center directors run operations and ensure everything runs smoothly. Directors, or managers, set the metrics and expectations for agent performance to ensure they meet the standards for customer expectations and keep the center running smoothly.

Quality assurance team. Quality assurance (QA) is a practice that ensures products or services meet specific requirements, and QA teams put this into practice. These teams can monitor and evaluate agent phone calls in call centers to ensure the call quality and CX are up to the center's standards. In some cases, call center directors run the QA checks.

IT personnel. IT professionals are critical to call centers -- especially those with remote operations. And while IT personnel aren't exclusive to call centers, they ensure agents' technology and tools are up to date to keep the call center running smoothly.

Call center technology

Call centers, at their cores, require two key pieces of technology: computers and headsets. Call center agents need access to computers and reliable headsets to make and receive calls, so their voices sound clear and easy for customers to understand.

Remote call center agents may also require enhanced internet access to access their organizations' call center software reliably, so organizations may want to invest in home networking equipment for remote agents.

Other critical call center technology and software include the following:

* call management software, including ACD technology;
* call monitoring software;
* speech analytics tools;
* workforce management software;
* customer relationship management software;
* IVR software;
* outbound dialers; and
* chatbot or virtual assistant technology.

Examples of call centers across industries

Call centers can benefit any industry that interacts with customers over the phone. Examples include the following:

* Airlines. Customers call airline toll-free numbers to engage with IVR menus or speak to customer service agents. Customers can check flight statuses, obtain flight details and check frequent flyer mileage balances. In addition, flyers can speak to customer service agents to re-book a flight. When weather conditions, such as a large winter storm, cause flight delays or cancellations, airlines can quickly respond to customer needs.
* Healthcare. Customers call healthcare providers to make, change or confirm appointments and to ask physicians questions. When a medical emergency arises off-hours, healthcare providers can use outsourced call centers to receive calls and route them to an on-call physician.
* Retail. Customers call retail businesses for assistance before, during or after purchases. Before or during purchase, a customer may ask a customer service agent about shipping details or the retailer's return policy. After a purchase, customers may call to report a missing item or request a return.

How is call center success measured?

Organizations should track key performance indicators (KPIs) to measure the success rates and efficiency of call centers and agents. The KPIs may vary depending on the center's function: An outbound call center may measure cost per call, revenue earned, total calls made and tasks completed, among other metrics. Inbound call center metrics may include first call resolution (FCR), average wait time and abandoned call rates.
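
A minimal sketch of how such inbound KPIs might be computed from raw call records follows; the record layout and the numbers are hypothetical, for illustration only.

    # Computing a few common inbound call-center KPIs from simple call records.
    calls = [
        # (wait_seconds, abandoned, resolved_on_first_call)
        (30, False, True),
        (95, False, False),
        (240, True, None),   # caller hung up before reaching an agent
        (12, False, True),
    ]

    answered = [c for c in calls if not c[1]]
    fcr = sum(1 for c in answered if c[2]) / len(answered)      # first call resolution
    avg_wait = sum(c[0] for c in calls) / len(calls)            # average wait time (s)
    abandon_rate = sum(1 for c in calls if c[1]) / len(calls)   # abandoned call rate

    print(f"FCR: {fcr:.0%}, avg wait: {avg_wait:.0f}s, abandoned: {abandon_rate:.0%}")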

In addition, organizations can use speech analytics software to monitor and analyze call center agent performance. It can identify areas that may require more knowledge and training, which can improve call handling times and FCR.

Additional Information

A call centre (Commonwealth spelling) or call center (American spelling; see spelling differences) is a managed capability, centralised or remote, that is used for receiving or transmitting a large volume of enquiries by telephone. An inbound call centre is operated by a company to administer incoming product or service support or information enquiries from consumers. Outbound call centres are usually operated for sales purposes such as telemarketing, for solicitation of charitable or political donations, debt collection, market research, emergency notifications, and urgent/critical-needs blood banks. A contact centre is a further extension of call centres' telephony-based capabilities: it administers centralised handling of individual communications, including letters, faxes, live support software, social media, instant messages, and email.

A call centre was previously seen as an open workspace for call centre agents, with workstations that each include a computer and display, connected to an inbound/outbound call management system, and one or more supervisor stations. It can be independently operated or networked with additional centres, often linked to a corporate computer network including mainframes, microcomputers/servers, and LANs.

The contact centre is a central point from which all customer contacts are managed. Through contact centres, valuable information can be routed to the appropriate people or systems, contacts can be tracked, and data may be gathered. It is generally a part of the company's customer relationship management infrastructure. The majority of large companies use contact centres as a means of managing their customer interactions. These centres can be operated either by an in-house department or by outsourcing customer interaction to a third-party agency (known as an outsourced call centre).

History

Answering services, as known from the 1960s through the 1980s (and somewhat earlier and later), involved a business that specifically provided the service. Primarily by the use of an off-premises extension (OPX) for each subscribing business, connected to a switchboard at the answering service business, the answering service would answer the otherwise unattended phones of the subscribing businesses with a live operator. The live operator could take messages or relay information, doing so with greater human interactivity than a mechanical answering machine. Although undoubtedly more costly (the human service, the cost of setting up and paying the phone company for the OPX on a monthly basis), it had the advantage of being more ready to respond to the unique needs of after-hours callers. The answering service operators also had the option of calling the client and alerting them to particularly important calls.

The origins of call centers date back to the 1960s with the UK-based Birmingham Press and Mail, which installed Private Automated Business Exchanges (PABX) to have rows of agents handling customer contacts. By 1973, call centers had received mainstream attention after Rockwell International patented its Galaxy Automatic Call Distributor (GACD) for a telephone booking system, and after the popularization of telephone headsets as seen on televised NASA Mission Control Center events.

During the late 1970s, call center technology expanded to include telephone sales, airline reservations, and banking systems. The term "call center" was first published and recognised by the Oxford English Dictionary in 1983. The 1980s saw the development of toll-free telephone numbers, which increased the efficiency of agents and overall call volume. Call centers multiplied with the deregulation of long-distance calling and growth in information-dependent industries.

As call centres expanded, workers in North America began to join unions such as the Communications Workers of America and the United Steelworkers. In Australia, the National Union of Workers represents unionised workers; their activities form part of the Australian labour movement. In Europe, UNI Global Union of Switzerland is involved in assisting unionisation in the call center industry, and in Germany Vereinte Dienstleistungsgewerkschaft represents call centre workers.

During the 1990s, call centres expanded internationally and developed into two additional subsets of communication: contact centres and outsourced bureau centres. A contact centre is a coordinated system of people, processes, technologies, and strategies that provides access to information, resources, and expertise through appropriate channels of communication, enabling interactions that create value for the customer and organization. In contrast to in-house management, outsourced bureau contact centres provide services on a "pay per use" basis. The overheads of the contact centre are shared by many clients, supporting a very cost-effective model, especially for low call volumes. The modern contact centre includes automated call blending of inbound and outbound calls as well as predictive dialling capabilities, dramatically increasing agents' productivity. New implementations of more complex systems require highly skilled operational and management staff who can use multichannel online and offline tools to improve customer interactions.

Technology:

Call centre technologies include speech recognition software, which allows Interactive Voice Response (IVR) systems to handle first-level customer support; text mining and natural language processing, for better customer handling; agent training via interactive scripting; automatic mining of best practices from past interactions; support automation; and many other technologies that improve agent productivity and customer satisfaction. Automatic lead selection or lead steering is also intended to improve efficiency, both for inbound and outbound campaigns. It allows inbound calls to be routed directly to the appropriate agent for the task, while minimising wait times and long lists of irrelevant options for people calling in.

For outbound calls, lead selection allows management to designate what type of leads go to which agent based on factors including skill, socioeconomic factors, past performance, and percentage likelihood of closing a sale per lead.

The universal queue standardises the processing of communications across multiple technologies such as fax, phone, and email. The virtual queue provides callers with an alternative to waiting on hold when no agents are available to handle inbound call demand.

Premises-based technology

Historically, call centres have been built on Private branch exchange (PBX) equipment owned, hosted, and maintained by the call centre operator. The PBX can provide functions such as automatic call distribution, interactive voice response, and skills-based routing.

Virtual call centre

In a virtual call centre model, the call centre operator (business) pays a monthly or annual fee to a vendor that hosts the call centre telephony and data equipment in its own facility or in the cloud. In this model, the operator does not own, operate or host the equipment on which the call centre runs. Agents connect to the vendor's equipment through traditional PSTN telephone lines or over voice over IP. Calls to and from prospects or contacts originate from or terminate at the vendor's data centre, rather than at the call centre operator's premises. The vendor's telephony equipment (and at times data servers) then connects the calls to the call centre operator's agents.

Virtual call centre technology allows people to work from home or any other location instead of in a traditional, centralised call centre location, which increasingly allows people 'on the go' or those with physical or other disabilities to work from their desired location without leaving their house. The only required equipment is Internet access, a workstation, and a softphone; if the virtual call centre software uses WebRTC, a softphone is not required to dial. Companies increasingly prefer virtual call centre services because of the cost advantage: they can start call centre operations immediately without installing basic infrastructure such as a dialler, ACD and IVRS.

Virtual call centres became increasingly used after the COVID-19 pandemic restricted businesses from operating with large groups of people working in close proximity.

Cloud computing

Through the use of application programming interfaces (APIs), hosted and on-demand call centres that are built on cloud-based software as a service (SaaS) platforms can integrate their functionality with cloud-based applications for customer relationship management (CRM), lead management and more.

Developers use APIs to enhance cloud-based call centre platform functionality—including Computer telephony integration (CTI) APIs which provide basic telephony controls and sophisticated call handling from a separate application, and configuration APIs which enable graphical user interface (GUI) controls of administrative functions.
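
As a rough illustration of the CTI pattern described above, the sketch below issues a click-to-dial request against a hypothetical REST endpoint. The vendor URL, route, field names and response shape are all invented for illustration and do not belong to any real platform.

    import requests

    BASE_URL = "https://api.example-ccaas.com/v1"  # hypothetical vendor endpoint

    def click_to_dial(agent_id, customer_number, api_key):
        """Ask the platform to originate a call from an agent's station to a
        customer - a typical CTI 'make call' control. All names here are
        illustrative assumptions, not a real vendor API."""
        response = requests.post(
            f"{BASE_URL}/calls",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"agent_id": agent_id, "to": customer_number},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["call_id"]  # assumed response field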

Call centres that use cloud computing run software that Gartner calls "Contact Center as a Service" (CCaaS for short), defined as "a software as a service (SaaS)-based application that enables customer service organizations to manage multichannel customer interactions holistically in terms of both customer experience and employee experience".

Outsourcing

Outsourced call centres are often located in developing countries, where wages are significantly lower. These include the call centre industries in the Philippines, Bangladesh, and India.

Companies that regularly utilise outsourced contact centre services include British Sky Broadcasting and Orange in the telecommunications industry, Adidas in the sports and leisure sector, Audi in car manufacturing and charities such as the RSPCA.

Industries:

Healthcare

The healthcare industry has used outbound call centre programmes for years to help manage billing, collections, and patient communication. The inbound call centre is a new and increasingly popular service for many types of healthcare facilities, including large hospitals. Inbound call centres can be outsourced or managed in-house.

These healthcare call centres are designed to help streamline communications, enhance patient retention and satisfaction, reduce expenses and improve operational efficiencies.

Hospitality

Many large hospitality companies such as the Hilton Hotels Corporation and Marriott International make use of call centres to manage reservations. These are known in the industry as "central reservations offices". Staff members at these call centres take calls from clients wishing to make reservations or other inquiries via a public number, usually a 1-800 number. These centres may operate up to 24 hours per day, seven days a week, depending on the call volume the chain receives.

Evaluation:

Mathematical theory

Queueing theory is a branch of mathematics in which models of service systems have been developed. A call centre can be seen as a queueing network, and results from queueing theory, such as the probability that an arriving customer must wait before starting service, are useful for provisioning capacity. (Erlang's C formula is such a result for an M/M/c queue, and approximations exist for an M/G/k queue.) Statistical analysis of call centre data has suggested that arrivals are governed by an inhomogeneous Poisson process and that jobs have a log-normal service time distribution. Simulation algorithms are increasingly being used to model call arrival, queueing and service levels.
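
To make this concrete, here is a minimal sketch of Erlang's C formula for an M/M/c queue - the probability that an arriving call must wait because all agents are busy. The traffic figures in the example are invented for illustration.

    from math import factorial

    def erlang_c(arrival_rate, service_rate, agents):
        """Probability that an arriving call waits in an M/M/c queue (Erlang C).

        Assumes Poisson arrivals, exponential service times, and a stable
        system (offered load a = arrival_rate / service_rate < agents)."""
        a = arrival_rate / service_rate  # offered load in erlangs
        if a >= agents:
            raise ValueError("unstable system: offered load must be below agent count")
        top = (a ** agents / factorial(agents)) * (agents / (agents - a))
        bottom = sum(a ** k / factorial(k) for k in range(agents)) + top
        return top / bottom

    # Illustrative numbers: 100 calls/hour, 12 calls/hour per agent, 10 agents
    print(erlang_c(100, 12, 10))  # ~0.49, so roughly half of arriving calls wait

Capacity planners typically run this in reverse, increasing the agent count until the waiting probability (or a service-level target derived from it) drops below a threshold.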

Beyond queueing, call centre operations are supported by other mathematical models, notably operations research, which considers a wide range of optimisation problems seeking to reduce waiting times while keeping server utilisation, and therefore efficiency, high.

Criticism

Call centres have received criticism for low pay rates and restrictive working practices for employees, which have been deemed a dehumanising environment. Other research illustrates how call centre workers develop ways to counter or resist this environment by integrating local cultural sensibilities or embracing a vision of a new life. Most call centres provide electronic reports that outline performance metrics, quarterly highlights and other information about the calls made and received. This has the benefit of helping the company to plan the workload and time of its employees. However, it has also been argued that such close monitoring breaches the human right to privacy.

Complaints are often logged by callers who find the staff do not have enough skill or authority to resolve problems, or who find them apathetic. These concerns stem from a business process with inherent variability: the experience a customer gets and the results a company achieves on a given call depend on the quality of the agent. Call centres are beginning to address this by using agent-assisted automation to standardise the process all agents use. More popular alternatives, however, are personality- and skill-based approaches. The various challenges encountered by call operators are discussed by several authors.

Media portrayals

Indian call centres have been the focus of several documentary films: the 2004 film Thomas L. Friedman Reporting: The Other Side of Outsourcing; the 2005 films John and Jane, Nalini by Day, Nancy by Night, and 1-800-India: Importing a White-Collar Economy; and the 2006 film Bombay Calling, among others. An Indian call centre is also the subject of the 2006 film Outsourced and a key location in the 2008 film Slumdog Millionaire. The 2014 BBC fly-on-the-wall documentary series The Call Centre gave an often distorted although humorous view of life in a Welsh call centre.

Appointment Setting

Appointment setting is a specialized function within call centres, where dedicated agents focus on facilitating and scheduling meetings between clients and businesses or sales representatives. This service is particularly prevalent in various industries such as financial services, healthcare, real estate, and B2B sales, where time-sensitive and personalized communications are essential for effective client engagement.

Lead Generation

Lead generation is a vital aspect of call center operations, encompassing strategies and activities aimed at identifying potential customers or clients for businesses or sales representatives. It involves gathering information and generating interest among individuals or organizations who may have a potential interest in the products or services offered.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1904 2023-09-18 00:11:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1908) Fashion designer

Gist

Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends, and has varied over time and place.

Details

Fashion designers use their technical knowledge and creative flair to work on designs for new and original clothing.

As a fashion designer, you'll research current fashion trends, forecasting what will be popular with consumers, and take inspiration from the world around you to create fresh and original designs.

You'll decide on fabrics, colours and patterns, produce sample designs and adjust them until you're happy with the final product.

You may work to your own brief or be given a brief to work towards, with specifications relating to colour, fabric and budget. In large companies, you're likely to work as part of a team of designers, headed by a creative director, whereas if working for a small company as sole designer or for yourself, you'll be responsible for all the designs.

You'll typically specialise in one area of design, such as sportswear, childrenswear, footwear or accessories.

Types of fashion designer

The main areas of work for fashion designers are:

* high street fashion - this is where the majority of designers work and where garments are mass manufactured (often in Europe or East Asia). Buying patterns, seasonal trends and celebrity catwalk influences play a key role in this design process. It's a commercial, highly media-led area to work in.
* ready-to-wear - established designers create ready-to-wear collections, produced in relatively small numbers.
* haute couture - requires large amounts of time spent on the production of one-off garments for the catwalk. Designs usually endorse the brand and create a 'look'.

Responsibilities

Tasks depend on the market you're working for, but you'll typically need to:

* create or visualise an idea and produce a design by hand or using programs like Adobe Illustrator
* create moodboards to show to clients
* keep up to date with emerging fashion trends as well as general trends relating to fabrics, colours and shapes
* plan and develop ranges, often based on a theme
* work with others in the design team, such as buyers and forecasters, to develop products to meet a brief
* liaise closely with sales, buying and production teams on an ongoing basis to ensure items suit the customer, market and price points
* understand design from a technical perspective, such as producing patterns and technical specifications for designs
* visit trade shows and manufacturers to source, select and buy fabrics, trims, fastenings and embellishments
* adapt existing designs for mass production
* develop a pattern that is cut and sewn into sample garments and supervise the creation of these, including fitting, detailing and adaptations
* oversee production
* negotiate with customers and suppliers
* showcase your designs at fashion and other trade shows
* work with models to try out your designs and also to wear them on the catwalk at fashion shows
* manage marketing, finances and other business activities, if working on a self-employed basis.

Experienced designers with larger companies may focus more on the design aspect, with pattern cutters and machinists preparing sample garments. In smaller companies these, and other tasks, may be part of the designer's role.

Salary

Starting salaries in the fashion industry are often low. Design assistants may start at around £16,000 to £18,000.
A junior designer can expect to earn approximately £25,000 a year.

Typical salaries at senior designer and creative director level range from around £42,000 to in excess of £85,000.
Salaries vary depending on your level of experience, geographical location and type of employer.

Income figures are intended as a guide only.

Working hours

Working hours typically include regular extra hours to meet project deadlines.

What to expect

* The working environment varies between companies, from a modern, purpose-built office to a small design studio. Freelance designers may work from home or in a rented studio.
* With the increase in online retailing, setting up in business or being self-employed is becoming more common, even straight after graduation. Extensive market research and business acumen are critical for any fashion business to succeed.
* The majority of opportunities are available in London and the South East and some large towns and cities in the North West and Scotland, with pockets of industry in the Midlands.
* Career success relies on a combination of creativity, perseverance, resilience and good communication and networking skills.
* There are opportunities to travel to meet suppliers, research new trends and to attend trade and fashion shows, either in the UK or abroad.

Qualifications

Fashion design is a very competitive industry, and you'll typically need a degree, HND or foundation degree in a subject that combines both technical and design skills. Relevant subjects include:

* art and design
* fashion and fashion design
* fashion business
* fashion buying, marketing and communication
* garment technology
* graphic design
* textiles and textile design.

As you research courses, carefully look at the subjects covered, links the department has with the fashion industry and opportunities available for work placements, showcasing your work and building your portfolio.

Although you don't need a postgraduate qualification, you may want to develop your skills in a particular area such as fashion design management, menswear or footwear.

If your degree is unrelated, you'll need to get experience in the industry or a related area, such as fashion retail. You may want to consider taking a postgraduate qualification in fashion or textile design. Search postgraduate courses in fashion and textile design.

Entry without a degree is sometimes possible if you have a background in fashion or art and design, but you'll need to get experience in the industry to develop your expertise.

Skills

You'll need to show:

* creativity, innovation and flair
* an eye for colour and a feel for fabrics and materials
* the ability to generate ideas and concepts, use your initiative and think outside the box
* design and visualisation skills, either by hand or through computer-aided design (CAD)
* technical skills, including pattern cutting and sewing
* garment technology skills and knowledge
* a proactive approach
* commercial awareness and business orientation
* self-promotion and confidence
* interpersonal, communication and networking skills
* the ability to negotiate and to influence others
* teamworking skills
* good organisation and time management.

Work experience

Getting work experience is vital and experience of any kind in a design studio will help you develop your skills and build up a network of contacts within the industry. Experience in retail can be useful too.

During your degree, take opportunities to develop your portfolio through work placements and internships, either in the UK or abroad. Some courses include a sandwich year in a fashion company. This type of placement can offer the opportunity to work on a more extensive project in industry.

Volunteer at fashion events or set up your own fashion show and make contact with photographers looking for fashion designers. Make the most of degree shows to showcase your work and visit fashion and trade shows, such as London Fashion Week, to pick up ideas and tips.

Employers will expect to see a portfolio that clearly demonstrates your ability to design and produce garments and accessories.

Employers

Most fashion and clothing designers work for high street stores and independent labels. They may be employed at an in-house design studio, based in either a manufacturing or retail organisation.

Others work in specialist design studios, serving the couture and designer ready-to-wear markets, and their work may include producing designs for several manufacturing or retailing companies.

However, the top design houses are a relatively small market compared with the high street fashion sector.

Some fashion designers find work overseas with designers based in Europe and the USA. A directory of fashion contact details, including companies and fashion organisations around the world, can be found at Apparel Search.

Opportunities exist for self-employment. Freelance fashion designers can market their work through trade fairs and via agents, or by making contact directly with buyers from larger businesses or niche clothing outlets.

A number of organisations offer specific training and support for setting up a fashion business. The British Fashion Council provides a range of initiatives, and courses and online resources on how to run your own creative business are offered by The Design Trust.

Competition for design jobs is intense throughout the industry, particularly in womenswear design.

Look for job vacancies at:

* Drapersjobs.com
* Fashion United
* Retail Choice
* The Business of Fashion (BoF) Careers

Employment opportunities are often secured via speculative applications and effective networking. It's important to try to build relationships with more established designers and companies, whether you're seeking permanent or freelance openings.

Recruitment agencies, specialist publications and fashion networks are an important source of contacts and vacancies. Specialist recruitment agencies that represent different market levels include:

* Denza International
* Fashion & Retail Personnel
* Indesign

Discover 5 ways to get into fashion design.

Professional development

The culture of the industry is very much that people learn on the job. However, self-development is important throughout your career, and you'll need to take responsibility for keeping your skills and knowledge up to date.

Initially, any training is likely to be related to learning about the practical processes that your employer uses and covering any relevant technological developments. Larger firms may provide business and computer training, which could include computer-aided design (CAD) or other specialist software, such as Photoshop and Illustrator.

Reading the trade press and fashion blogs, attending trade and fashion shows, and visiting suppliers are also important for keeping up to date with trends and fashions.

A range of specialist short courses and one-day workshops related to fashion are offered by the London College of Fashion, part of the University of the Arts London. It's also possible to take a Masters to develop your skills in a particular area of fashion, such as fashion management, pattern and garment technology, or womenswear.

Career prospects

How your career develops will depend on the specific area of design you trained in, the work experience you've built up and your professional reputation. Another influencing factor will be the type of company you work for and the opportunities for career development within it.

Progression may be slow, particularly at the start of your career. Being proactive and making contacts in the industry is essential, especially in a sector where people frequently move jobs in order to progress their career and where there is a lot of pressure to produce new ideas that are commercially viable.

Typically, you'll begin your career as an assistant.

Progression is then to a role with more creative input, involving proposing concepts and design ideas, although you're unlikely to have much influence on major decisions.

With several years' design experience, progression is possible through senior designer roles to the position of head designer or creative director. At this level, you'll have considerable responsibility for overall design decisions and influences for the range, but as this is a management position others will do the actual design work.

Technical director and quality management positions represent alternative progression routes. Other career options in the fashion industry include:

* colourist
* fashion illustrator
* fashion predictor
* fashion stylist
* pattern cutter/grader.

Fashion designers are increasingly becoming involved in homeware and gift design, which can open up new career paths.

There are also opportunities for self-employment or moving into related occupations, such as retail buying, photography, fashion styling or journalism.

Additional Information

Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends, and has varied over time and place. "A fashion designer creates clothing, including dresses, suits, pants, and skirts, and accessories like shoes and handbags, for consumers. He or she can specialize in clothing, accessory, or jewelry design, or may work in more than one of these areas."

Fashion designers

Fashion designers work in a variety of different ways when designing their pieces and accessories such as rings, bracelets, necklaces and earrings. Due to the time required to put a garment out in market, designers must anticipate changes to consumer desires. Fashion designers are responsible for creating looks for individual garments, involving shape, color, fabric, trimming, and more.

Fashion designers attempt to design clothes that are functional as well as aesthetically pleasing. They consider who is likely to wear a garment and the situations in which it will be worn, and they work within a wide range of materials, colors, patterns, and styles. Though most clothing worn for everyday wear falls within a narrow range of conventional styles, unusual garments are usually sought for special occasions such as evening wear or party dresses.

Some clothes are made specifically for an individual, as in the case of haute couture or bespoke tailoring. Today, most clothing is designed for the mass market, especially casual and everyday wear, which are commonly known as ready to wear or fast fashion.

Structure

There are different lines of work for designers in the fashion industry: some work full-time for a single fashion house, while others are freelance. Fashion designers who work full-time for one fashion house, as 'in-house designers', own the designs and may work alone or as part of a design team. Freelance designers work for themselves and sell their designs to fashion houses, directly to shops, or to clothing manufacturers. Quite a few fashion designers choose to set up their own labels, which gives them full control over their designs, while others are self-employed and design for individual clients. Other high-end fashion designers cater to specialty stores or high-end fashion department stores. These designers create original garments, as well as those that follow established fashion trends. Most fashion designers, however, work for apparel manufacturers, creating designs of men's, women's, and children's fashions for the mass market. Large designer brands which have a 'name' as their brand, such as Abercrombie & Fitch, Justice, or Juicy, are likely to be designed by a team of individual designers under the direction of a design director.

Designing a garment

Fashion designers work in various ways: some start with a vision in their head and later move to drawing it on paper or computer, while others go directly into draping fabric onto a dress form, also known as a mannequin. The design process is unique to each designer, and the steps that go into it vary. A designer may choose to work with apps that help connect all their ideas together and expand their thoughts to create a cohesive design. When a designer is completely satisfied with the fit of the toile (or muslin), they consult a professional pattern maker, who then makes the finished, working version of the pattern out of card or via a computer program. Finally, a sample garment is made up and tested on a model to make sure it is an operational outfit. Fashion design is expressive; the designers create art that may be functional or non-functional.

History

Modern Western fashion design is often considered to have started in the 19th century with Charles Frederick Worth who was the first designer to have his label sewn into the garments that he created. Before the former draper set up his maison couture (fashion house) in Paris, clothing design and creation of the garments were handled largely by anonymous seamstresses. At the time high fashion descended from what was popularly worn at royal courts. Worth's success was such that he was able to dictate to his customers what they should wear, instead of following their lead as earlier dressmakers had done. The term couturier was in fact first created in order to describe him. While all articles of clothing from any time period are studied by academics as costume design, only clothing created after 1858 is considered fashion design.

It was during this period that many design houses began to hire artists to sketch or paint designs for garments. Rather than going straight into manufacturing, the images were shown to clients to gain approval, which saved time and money for the designer. If a client liked a design, they commissioned the garment, and it was produced for them in the fashion house. This designer-patron arrangement established the practice of designers sketching their work rather than presenting completed designs on models.

Types of fashion

Garments produced by clothing manufacturers fall into three main categories, although these may be split up into additional, different types.

Haute couture

Until the 1950s, fashion clothing was predominately designed and manufactured on a made-to-measure or haute couture basis (French for high-sewing), with each garment being created for a specific client. A couture garment is made to order for an individual customer, and is usually made from high-quality, expensive fabric, sewn with extreme attention to detail and finish, often using time-consuming, hand-executed techniques. Look and fit take priority over the cost of materials and the time it takes to make. Due to the high cost of each garment, haute couture makes little direct profit for the fashion houses, but is important for prestige and publicity.

Ready-to-wear (prêt-à-porter)

Ready-to-wear, or prêt-à-porter, clothes are a cross between haute couture and mass market. They are not made for individual customers, but great care is taken in the choice and cut of the fabric. Clothes are made in small quantities to guarantee exclusivity, so they are rather expensive. Ready-to-wear collections are usually presented by fashion houses each season during a period known as Fashion Week. This takes place on a citywide basis and occurs twice a year. The main seasons of Fashion Week include spring/summer, fall/winter, resort, swim, and bridal.

Half-way garments are an alternative to ready-to-wear, "off-the-peg", or prêt-à-porter fashion. Half-way garments are intentionally unfinished pieces of clothing that encourage co-design between the "primary designer" of the garment and what would usually be considered the passive "consumer". This differs from ready-to-wear fashion, as the consumer is able to participate in the process of making and co-designing their clothing. During the Make{able} workshop, Hirscher and Niinimaki found that personal involvement in the garment-making process created a meaningful "narrative" for the user, which established a person-product attachment and increased the sentimental value of the final product.

Otto von Busch also explores half-way garments and fashion co-design in his thesis, "Fashion-able, Hacktivism and engaged Fashion Design".

Mass market

Currently, the fashion industry relies more on mass-market sales. The mass market caters for a wide range of customers, producing ready-to-wear garments using trends set by the famous names in fashion. They often wait around a season to make sure a style is going to catch on before producing their versions of the original look. To save money and time, they use cheaper fabrics and simpler production techniques which can easily be done by machines. The end product can, therefore, be sold much more cheaply.

There is a type of design called "kitsch", which originated from the German word kitschig, meaning "trashy" or "not aesthetically pleasing". Kitsch can also refer to "wearing or displaying something that is therefore no longer in fashion".

Income

The median annual wage for salaried fashion designers was $74,410 as of February 2023. The middle 50 percent earned an average of $76,700, the lowest 10 percent earned $32,320, and the highest 10 percent earned $130,900. Median annual earnings in May 2008 were $52,860 (£40,730.47) in apparel, piece goods, and notions - the industry employing the largest number of fashion designers. In 2016, 23,800 people were counted as fashion designers in the United States.

Fashion industry

Fashion today is a global industry, and most major countries have a fashion industry. Seven countries have established an international reputation in fashion: the United States, France, Italy, United Kingdom, Japan, Germany and Belgium. The "big four" fashion capitals of the fashion industry are New York City, Paris, Milan, and London.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1905 2023-09-19 00:13:34

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1909) Hydroelectric power

Gist

Hydropower, or hydroelectric power, is one of the oldest and largest sources of renewable energy, which uses the natural flow of moving water to generate electricity.

Summary

Hydroelectric power, also called hydropower, is electricity produced from generators driven by turbines that convert the potential energy of falling or fast-flowing water into mechanical energy. In the early 21st century, hydroelectric power was the most widely utilized form of renewable energy; in 2019 it accounted for more than 18 percent of the world’s total power generation capacity.

In the generation of hydroelectric power, water is collected or stored at a higher elevation and led downward through large pipes or tunnels (penstocks) to a lower elevation; the difference in these two elevations is known as the head. At the end of its passage down the pipes, the falling water causes turbines to rotate. The turbines in turn drive generators, which convert the turbines’ mechanical energy into electricity. Transformers are then used to convert the alternating voltage suitable for the generators to a higher voltage suitable for long-distance transmission. The structure that houses the turbines and generators, and into which the pipes or penstocks feed, is called the powerhouse.

Hydroelectric power plants are usually located in dams that impound rivers, thereby raising the level of the water behind the dam and creating as high a head as is feasible. The potential power that can be derived from a volume of water is directly proportional to the working head, so that a high-head installation requires a smaller volume of water than a low-head installation to produce an equal amount of power. In some dams, the powerhouse is constructed on one flank of the dam, part of the dam being used as a spillway over which excess water is discharged in times of flood. Where the river flows in a narrow steep gorge, the powerhouse may be located within the dam itself.
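
Since the potential power is proportional to both head and flow, the relationship P = ρ × g × Q × H × η can be sketched directly. A minimal sketch follows; the 90% efficiency default and the example figures are illustrative assumptions, not data for any particular plant.

    def hydro_power_mw(head_m, flow_m3_s, efficiency=0.9):
        """Approximate electrical output of a hydro plant in megawatts:
        P = rho * g * Q * H * eta, with water density rho in kg/m^3,
        gravity g in m/s^2, flow Q in m^3/s and working head H in m.
        The 90% default efficiency is an illustrative assumption."""
        rho, g = 1000.0, 9.81
        return rho * g * flow_m3_s * head_m * efficiency / 1e6

    # Example: a 100 m head at 50 m^3/s yields roughly 44 MW
    print(hydro_power_mw(100, 50))

The trade-off described above is visible in the formula: halving the head requires doubling the flow to deliver the same power.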

In most communities the demand for electric power varies considerably at different times of the day. To even the load on the generators, pumped-storage hydroelectric stations are occasionally built. During off-peak periods, some of the extra power available is supplied to the generator operating as a motor, driving the turbine to pump water into an elevated reservoir. Then, during periods of peak demand, the water is allowed to flow down again through the turbine to generate electrical energy. Pumped-storage systems are efficient and provide an economical way to meet peak loads.

In certain coastal areas, such as the Rance River estuary in Brittany, France, hydroelectric power plants have been constructed to take advantage of the rise and fall of tides. When the tide comes in, water is impounded in one or more reservoirs. At low tide, the water in these reservoirs is released to drive hydraulic turbines and their coupled electric generators.

Falling water is one of the three principal sources of energy used to generate electric power, the other two being fossil fuels and nuclear fuels. Hydroelectric power has certain advantages over these other sources. It is continually renewable owing to the recurring nature of the hydrologic cycle. It does not produce thermal pollution. (However, some dams can produce methane from the decomposition of vegetation under water.) Hydroelectric power is a preferred energy source in areas with heavy rainfall and with hilly or mountainous regions that are in reasonably close proximity to the main load centres. Some large hydro sites that are remote from load centres may be sufficiently attractive to justify the long high-voltage transmission lines. Small local hydro sites may also be economical, particularly if they combine storage of water during light loads with electricity production during peaks. Many of the negative environmental impacts of hydroelectric power come from the associated dams, which can interrupt the migrations of spawning fish, such as salmon, and permanently submerge or displace ecological and human communities as the reservoirs fill. In addition, hydroelectric dams are vulnerable to water scarcity. In August 2021 Oroville Dam, one of the largest hydroelectric power plants in California, was forced to shut down due to historic drought conditions in the region.

Details

Hydroelectricity, or hydroelectric power, is electricity generated from hydropower (water power). Hydropower supplies one sixth of the world's electricity, almost 4500 TWh in 2020, which is more than all other renewable sources combined and also more than nuclear power. Hydropower can provide large amounts of low-carbon electricity on demand, making it a key element for creating secure and clean electricity supply systems. A hydroelectric power station that has a dam and reservoir is a flexible source, since the amount of electricity produced can be increased or decreased in seconds or minutes in response to varying electricity demand. Once a hydroelectric complex is constructed, it produces no direct waste, and almost always emits considerably less greenhouse gas than fossil fuel-powered energy plants. However, when constructed in lowland rainforest areas, where part of the forest is inundated, substantial amounts of greenhouse gases may be emitted.

Construction of a hydroelectric complex can have significant environmental impact, principally in loss of arable land and population displacement. They also disrupt the natural ecology of the river involved, affecting habitats and ecosystems, and siltation and erosion patterns. While dams can ameliorate the risks of flooding, dam failure can be catastrophic.

In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies. Hydroelectricity plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions.

History

Hydropower has been used since ancient times to grind flour and perform other tasks. In the late 18th century hydraulic power provided the energy source needed for the start of the Industrial Revolution. In the mid-1700s, French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines, and in 1771 Richard Arkwright's combination of water power, the water frame, and continuous production played a significant part in the development of the factory system, with modern employment practices. In the 1840s the hydraulic power network was developed to generate and transmit hydro power to end users.

By the late 19th century, the electrical generator was developed and could now be coupled with hydraulics. The growing demand arising from the Industrial Revolution would drive development as well. In 1878, the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England, by William Armstrong. It was used to power a single arc lamp in his art gallery. The old Schoelkopf Power Station No. 1, US, near Niagara Falls, began to produce electricity in 1881. The first Edison hydroelectric power station, the Vulcan Street Plant, began operating September 30, 1882, in Appleton, Wisconsin, with an output of about 12.5 kilowatts. By 1886 there were 45 hydroelectric power stations in the United States and Canada; and by 1889 there were 200 in the United States alone.

At the beginning of the 20th century, many small hydroelectric power stations were being constructed by commercial companies in mountains near metropolitan areas. Grenoble, France, held the International Exhibition of Hydropower and Tourism in 1925, attracting over one million visitors. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. The Act created the Federal Power Commission to regulate hydroelectric power stations on federal land and water. As the power stations became larger, their associated dams developed additional purposes, including flood control, irrigation and navigation. Federal funding became necessary for large-scale development, and federally owned corporations, such as the Tennessee Valley Authority (1933) and the Bonneville Power Administration (1937), were created. Additionally, the Bureau of Reclamation, which had begun a series of western US irrigation projects in the early 20th century, was now constructing large hydroelectric projects such as the 1928 Hoover Dam. The United States Army Corps of Engineers was also involved in hydroelectric development, completing the Bonneville Dam in 1937 and being recognized by the Flood Control Act of 1936 as the premier federal flood control agency.

Hydroelectric power stations continued to become larger throughout the 20th century. Hydropower was referred to as "white coal". Hoover Dam's initial 1,345 MW power station was the world's largest hydroelectric power station in 1936; it was eclipsed by the 6,809 MW Grand Coulee Dam in 1942. The Itaipu Dam opened in 1984 in South America as the largest, producing 14 GW, but was surpassed in 2008 by the Three Gorges Dam in China at 22.5 GW. Hydroelectricity would eventually supply some countries, including Norway, Democratic Republic of the Congo, Paraguay and Brazil, with over 85% of their electricity.

Future potential

In 2021 the International Energy Agency (IEA) said that more efforts are needed to help limit climate change. Some countries have highly developed their hydropower potential and have very little room for growth: Switzerland produces 88% of its potential and Mexico 80%. In 2022, the IEA released a main-case forecast of 141 GW of hydropower capacity added over 2022-2027, slightly lower than the deployment achieved from 2017-2022. Because environmental permitting and construction times are long, the IEA estimates hydropower potential will remain limited, with only an additional 40 GW deemed possible in the accelerated case.

Modernization of existing infrastructure

In 2021 the IEA said that major modernisation refurbishments are required.

Additional Information

Hydropower, or hydroelectric power, is one of the oldest and largest sources of renewable energy, which uses the natural flow of moving water to generate electricity. Hydropower currently accounts for 28.7% of total U.S. renewable electricity generation and about 6.2% of total U.S. electricity generation.

While most people might associate the energy source with the Hoover Dam—a huge facility harnessing the power of an entire river behind its wall—hydropower facilities come in all sizes. Some may be very large, but they can be tiny, too, taking advantage of water flows in municipal water facilities or irrigation ditches. They can even be “damless,” with diversions or run-of-river facilities that channel part of a stream through a powerhouse before the water rejoins the main river. Whatever the method, hydropower is much easier to obtain and more widely used than most people realize. In fact, all but two states (Delaware and Mississippi) use hydropower for electricity, some more than others. For example, in 2020 about 66% of the state of Washington’s electricity came from hydropower.

In a study led by the National Renewable Energy Laboratory on hydropower flexibility, preliminary analysis found that the firm capacity associated with U.S. hydropower’s flexibility is estimated to be over 24 GW. To replace this capability with storage would require the buildout of 24 GW of 10-hour storage—more than all the existing storage in the United States today. Additionally, in terms of integrating wind and solar, the flexibility presented in existing U.S. hydropower facilities could help bring up to 137 gigawatts of new wind and solar online by 2035.

More Details

Hydroelectric energy, also called hydroelectric power or hydroelectricity, is a form of energy that harnesses the power of water in motion—such as water flowing over a waterfall—to generate electricity. People have used this force for millennia. Over 2,000 years ago, people in Greece used flowing water to turn the wheels of their mills and grind wheat into flour.

How Does Hydroelectric Energy Work?

Most hydroelectric power plants have a reservoir of water, a gate or valve to control how much water flows out of the reservoir, and an outlet or place where the water ends up after flowing downward. Water gains potential energy just before it spills over the top of a dam or flows down a hill. The potential energy is converted into kinetic energy as water flows downhill. The water can be used to turn the blades of a turbine to generate electricity, which is distributed to the power plant’s customers.

Types of Hydroelectric Energy Plants

There are three different types of hydroelectric energy plants, the most common being an impoundment facility. In an impoundment facility, a dam is used to control the flow of water stored in a pool or reservoir. When more energy is needed, water is released from the dam. Once water is released, gravity takes over and the water flows downward through a turbine. As the blades of the turbine spin, they power a generator.

Another type of hydroelectric energy plant is a diversion facility. This type of plant is unique because it does not use a dam. Instead, it uses a series of canals to channel flowing river water toward the generator-powering turbines.

The third type of plant is called a pumped-storage facility. This plant collects the energy produced from solar, wind, and nuclear power and stores it for future use. The plant stores energy by pumping water uphill from a pool at a lower elevation to a reservoir located at a higher elevation. When there is high demand for electricity, water located in the higher pool is released. As this water flows back down to the lower reservoir, it turns a turbine to generate more electricity.
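
As a back-of-the-envelope check on how much energy such a facility banks, the sketch below applies E = ρ × V × g × h with an assumed round-trip efficiency. The 75% figure and the example reservoir are illustrative assumptions, not data for any real plant.

    def pumped_storage_mwh(volume_m3, height_m, round_trip_eff=0.75):
        """Energy recoverable from a pumped-storage reservoir in MWh:
        E = rho * V * g * h * eta. The 75% round-trip efficiency is an
        illustrative assumption; real plants vary."""
        rho, g = 1000.0, 9.81
        joules = rho * volume_m3 * g * height_m * round_trip_eff
        return joules / 3.6e9  # 1 MWh = 3.6e9 J

    # Example: 2 million cubic metres lifted 300 m stores about 1,200 MWh
    print(pumped_storage_mwh(2e6, 300))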

How Widely Is Hydroelectric Energy Used Around the World?

Hydroelectric energy is the most commonly used renewable source of electricity. China is the largest producer of hydroelectricity. Other top producers of hydropower around the world include the United States, Brazil, Canada, India, and Russia. Approximately 71 percent of all of the renewable electricity generated on Earth is from hydropower.

What Is the Largest Hydroelectric Power Plant in the World?

The Three Gorges Dam in China, which holds back the Yangtze River, is the largest hydroelectric dam in the world, in terms of electricity production. The dam is 2,335 meters (7,660 feet) long and 185 meters (607 feet) tall, and has enough generators to produce 22,500 megawatts of power.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1906 2023-09-20 00:09:38

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1910) Thermal Power

Gist

Thermal power generation consists of using steam power created by burning oil, liquid natural gas (LNG), coal, and other substances to rotate generators and create electricity.

Summary

Thermal power refers to the energy that is generated by converting heat into electricity. It is the process of producing electricity from a primary source of heat by using a steam turbine, which drives an electrical generator.

The primary source of heat can be obtained from various sources, including burning fossil fuels such as coal, oil, and natural gas, or through nuclear fission.

The heat energy is used to produce steam, which is then directed towards the turbine.

The steam expands as it passes through the turbine blades, causing them to spin and generating electricity.

The electricity is then transmitted to the power grid for distribution to homes and businesses.

Thermal power is a widely used method of generating electricity due to the abundance and accessibility of fossil fuels.

However, it is also a significant contributor to greenhouse gas emissions and environmental pollution.

Efforts are being made to reduce the environmental impact of thermal power by developing more efficient and cleaner energy technologies such as solar, wind, and geothermal power.

What are the key components of a thermal power plant?

The key components of a thermal power plant include:

* Boiler: This is the part of the plant where fuel is burned to produce high-pressure steam.
* Turbine: The steam produced by the boiler is used to power a turbine. The turbine is a machine that converts the kinetic energy of steam into mechanical energy.
* Generator: The mechanical energy produced by the turbine is used to generate electricity. The generator is a machine that converts mechanical energy into electrical energy.
* Condenser: After the steam passes through the turbine, it is cooled and condensed back into water by passing it through a condenser. The condenser transfers the heat from the steam to a cooling medium, typically water or air.
* Cooling tower: The water used in the condenser is typically cooled in a cooling tower before being returned to the condenser.
* Fuel storage and handling system: This is the system that stores and transports the fuel to the boiler. The fuel can be coal, natural gas, or oil.
* Ash handling system: The ash produced during the burning of fuel in the boiler is collected and transported to an ash handling system.
* Control system: The control system monitors and controls the various processes in the power plant, such as the flow of fuel, steam, and water.
Overall, a thermal power plant is a complex system that requires a range of components and processes to work together in a coordinated manner to produce electricity efficiently and reliably.

Details

A thermal power station is a type of power station in which heat energy is converted to electrical energy. In a steam-generating cycle heat is used to boil water in a large pressure vessel to produce high-pressure steam, which drives a steam turbine connected to an electrical generator. The low-pressure exhaust from the turbine enters a steam condenser where it is cooled to produce hot condensate which is recycled to the heating process to generate more high pressure steam. This is known as a Rankine cycle.

The design of thermal power stations depends on the intended energy source: fossil fuel, nuclear and geothermal power, solar energy, biofuels, and waste incineration are all used. Certain thermal power stations are also designed to produce heat for industrial purposes; for district heating; or desalination of water, in addition to generating electrical power.

Fuels such as natural gas or oil can also be burnt directly in gas turbines (internal combustion). These plants can be of the open cycle or the more efficient combined cycle type.

Types of thermal energy

Almost all coal-fired power stations, petroleum, nuclear, geothermal, solar thermal electric, and waste incineration plants, as well as all natural gas power stations are thermal. Natural gas is frequently burned in gas turbines as well as boilers. The waste heat from a gas turbine, in the form of hot exhaust gas, can be used to raise steam by passing this gas through a heat recovery steam generator (HRSG). The steam is then used to drive a steam turbine in a combined cycle plant that improves overall efficiency. Power stations burning coal, fuel oil, or natural gas are often called fossil fuel power stations. Some biomass-fueled thermal power stations have appeared also. Non-nuclear thermal power stations, particularly fossil-fueled plants, which do not use cogeneration are sometimes referred to as conventional power stations.

Commercial electric utility power stations are usually constructed on a large scale and designed for continuous operation. Virtually all electric power stations use three-phase electrical generators to produce alternating current (AC) electric power at a frequency of 50 Hz or 60 Hz. Large companies or institutions may have their own power stations to supply heating or electricity to their facilities, especially if steam is created anyway for other purposes. Steam-driven power plants were used to propel most ships for most of the 20th century. Shipboard power stations usually directly couple the turbine to the ship's propellers through gearboxes. Power stations in such ships also provide steam to smaller turbines driving electric generators to supply electricity. Nuclear marine propulsion is, with few exceptions, used only in naval vessels. There have been many turbo-electric ships in which a steam-driven turbine drives an electric generator which powers an electric motor for propulsion.

Cogeneration plants, often called combined heat and power (CHP) facilities, produce both electric power and heat for process heat or space heating, such as steam and hot water.

History

The reciprocating steam engine has been used to produce mechanical power since the 18th century, with notable improvements being made by James Watt. When the first commercially developed central electrical power stations were established in 1882 at Pearl Street Station in New York and Holborn Viaduct power station in London, reciprocating steam engines were used. The development of the steam turbine in 1884 provided larger and more efficient machine designs for central generating stations. By 1892 the turbine was considered a better alternative to reciprocating engines; turbines offered higher speeds, more compact machinery, and stable speed regulation allowing for parallel synchronous operation of generators on a common bus. After about 1905, turbines entirely replaced reciprocating engines in almost all large central power stations.

The largest reciprocating engine-generator sets ever built were completed in 1901 for the Manhattan Elevated Railway. Each of seventeen units weighed about 500 tons and was rated 6000 kilowatts; a contemporary turbine set of similar rating would have weighed about 20% as much.

Thermal power generation efficiency

The energy efficiency of a conventional thermal power station is defined as saleable energy produced as a percentage of the heating value of the fuel consumed. A simple cycle gas turbine achieves energy conversion efficiencies of 20 to 35%. Typical coal-based power plants operating at steam pressures of 170 bar and 570 °C run at efficiencies of 35 to 38%, with state-of-the-art fossil fuel plants reaching 46% efficiency. Combined-cycle systems can reach higher values. As with all heat engines, their efficiency is limited and governed by the laws of thermodynamics.
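
To make the definition concrete, here is the arithmetic with hypothetical round numbers (ours, chosen only to illustrate the ratio):

# Efficiency = saleable electrical energy out / heating value of fuel in.
fuel_heat_in_mj = 1000.0     # heat released by the fuel (MJ) - assumed figure
electricity_out_mj = 380.0   # saleable electricity produced (MJ) - assumed figure
print(f"{electricity_out_mj / fuel_heat_in_mj:.0%}")  # 38%, typical of a coal plant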

The Carnot efficiency dictates that higher efficiencies can be attained by increasing the temperature of the steam. Sub-critical pressure fossil fuel power stations can achieve 36–40% efficiency. Supercritical designs have efficiencies in the low to mid 40% range, with new ultra-supercritical designs using pressures above 4400 psi (30.3 MPa) and multiple-stage reheat reaching 45–48% efficiency. Above the critical point for water of 705 °F (374 °C) and 3212 psi (22.06 MPa), there is no phase transition from water to steam, only a gradual decrease in density.
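
The Carnot limit itself is easy to compute: η = 1 − T_cold/T_hot, with temperatures in kelvin. A quick sketch, assuming 570 °C main steam and a 30 °C cooling-water sink (the sink temperature is our assumption):

# Carnot limit for a heat engine: eta = 1 - T_cold / T_hot (kelvin).
t_hot = 570 + 273.15    # main steam temperature (K)
t_cold = 30 + 273.15    # assumed cooling-water temperature (K)
print(f"{1 - t_cold / t_hot:.1%}")  # about 64%, an upper bound; real plants reach 35-48%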

Currently most nuclear power stations must operate below the temperatures and pressures that coal-fired plants do, in order to provide more conservative safety margins within the systems that remove heat from the nuclear fuel. This, in turn, limits their thermodynamic efficiency to 30–32%. Some advanced reactor designs being studied, such as the very-high-temperature reactor, Advanced Gas-cooled Reactor, and supercritical water reactor, would operate at temperatures and pressures similar to current coal plants, producing comparable thermodynamic efficiency.

The energy of a thermal power station not utilized in power production must leave the plant in the form of heat to the environment. This waste heat can go through a condenser and be disposed of with cooling water or in cooling towers. If the waste heat is instead used for district heating, it is called cogeneration. An important class of thermal power station is that associated with desalination facilities; these are typically found in desert countries with large supplies of natural gas, and in these plants freshwater production and electricity are equally important co-products.

Other types of power stations are subject to different efficiency limitations. Most hydropower stations in the United States are about 90 percent efficient in converting the energy of falling water into electricity, while the efficiency of a wind turbine is limited by Betz's law to about 59.3%; actual wind turbines show lower efficiency.
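
The Betz figure quoted above is simply the fraction 16/27, which momentum theory derives for an ideal rotor; a one-line check:

# Betz limit: at most 16/27 of the wind's kinetic energy can be captured.
print(f"{16 / 27:.1%}")  # 59.3%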



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1907 2023-09-21 00:19:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1911) Stamina

Gist

Stamina is the ability to do something that involves a lot of physical or mental effort for a long time.

It is the physical and/or mental strength to do something that might be difficult and will take a long time.

Summary

What is stamina?

Stamina is the strength and energy that allow you to sustain physical or mental effort for long periods of time. Increasing your stamina helps you endure discomfort or stress when you’re doing an activity. It also reduces fatigue and exhaustion. Having high stamina allows you to perform your daily activities at a higher level while using less energy.

5 ways to increase stamina

Try these tips to build stamina:

1. Exercise

Exercise may be the last thing on your mind when you’re feeling low on energy, but consistent exercise will help build your stamina.

Results of a 2017 study showed that participants who were experiencing work-related fatigue improved their energy levels after a six-week exercise intervention. They also improved their work ability, sleep quality, and cognitive functioning.

2. Yoga and meditation

Yoga and meditation can greatly increase your stamina and ability to handle stress.

As part of a study from 2016, 27 medical students attended yoga and meditation classes for six weeks. They saw significant improvements in stress levels and sense of well-being. They also reported more endurance and less fatigue.

3. Music

Listening to music can increase your cardiac efficiency. In one study, the 30 participants had a lower heart rate when exercising while listening to their chosen music. They were able to exert less effort when exercising with music than without it.

4. Caffeine

In a 2017 study, nine male swimmers took a 3-milligram-per-kilogram (mg/kg) dose of caffeine one hour before freestyle sprints. These swimmers improved their sprint times without increasing their heart rates. Caffeine may give you a boost on days you are feeling too tired to exercise.

Try not to rely on caffeine too much, since you can build up a tolerance. You should also stay away from caffeine sources that have a lot of sugar or artificial flavorings.

5. Ashwagandha

Ashwagandha is an herb that is used for overall health and vitality. It can also be used to boost cognitive function and to reduce stress. Ashwagandha has also been shown to boost energy levels. In a 2015 study, 50 athletic adults took 300 mg capsules of ashwagandha for 12 weeks. They increased their cardiorespiratory endurance and overall quality of life more than those in the placebo group.

Takeaway

As you focus on increasing your energy levels, bear in mind that it’s natural to experience energy ebbs and flows. Don’t expect to be operating at your maximum potential at all times. Remember to listen to your body and rest as needed. Avoid pushing yourself to the point of exhaustion.

If you feel that you’re making changes to increase your stamina without getting any results, you may wish to see a doctor. Your doctor can determine if you have any underlying health issues that are affecting your performance. Stay focused on your ideal plan for overall well-being.

Details

Stamina describes a person’s ability to sustain physical and mental activity. People with low mental stamina may find it difficult to focus on tasks for long periods and become distracted easily. People with low physical stamina may tire when walking up a flight of stairs, for example.

Having low stamina often causes a person to feel tired after little exertion, and they may experience an overall lack of energy or focus. By increasing their stamina, a person can feel more energetic and complete daily tasks more easily.

There are ways to increase stamina naturally, and the following are some of the best ways to do so over time.

Caffeine

Caffeine is a stimulant. This means that it can increase a person’s heart rate and give them temporary energy boosts. Caffeine is present in many coffees, teas, and soft drinks.

In a small study, a group of nine top male swimmers took 3 milligrams of caffeine per kilogram of body weight (mg/kg) 1 hour before performing freestyle sprints. They consistently made better times than when they had taken a placebo, and the researchers observed no differences in heart rate. The implication is that caffeine can give people a boost when they are feeling fatigued.

For maximum effect, a person should limit their caffeine consumption. The body can become tolerant of caffeine, requiring an increasing amount to achieve the same effect.

Also, it is better to avoid drinks with lots of added sugars or fats, such as soft drinks and premade coffee drinks.

Meditation or yoga

People often practice yoga or meditation to help them relax or refocus. These activities, when done consistently, can help reduce stress and improve overall stamina.

For example, results of a small study involving 27 medical students indicated that participating in some form of meditation or yoga could decrease stress levels and improve general well-being.

Anyone looking to increase their mental stamina may benefit from practicing yoga or meditation regularly.

Exercise

Exercise can help a person improve their physical and mental stamina. People who exercise often feel more energized during both mental and physical tasks.

One study showed that following a workout program led to lower levels of work-related fatigue. The results also indicated that the program helped decrease stress levels and improve the participants’ sense of well-being.

Anyone looking to reduce mental and physical fatigue should try to exercise regularly. This could include taking a walk or getting more intense exercise before or after work.

Ashwagandha

Ashwagandha is a natural herb available as a supplement. Taking ashwagandha may have the following effects:

* increasing overall energy
* boosting cognitive function
* reducing stress
* improving general health

In a small study, 25 athletes took 300 mg of ashwagandha twice a day for 12 weeks. They showed improved cardiovascular endurance, compared with an otherwise matched group who had taken a placebo.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1908 2023-09-22 00:09:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1912) Meditation

Gist

Meditation is a practice that involves focusing or clearing your mind using a combination of mental and physical techniques. Depending on the type of meditation you choose, you can meditate to relax, reduce anxiety and stress, and more.

Summary

Meditation is private devotion or mental exercise encompassing various techniques of concentration, contemplation, and abstraction, regarded as conducive to heightened self-awareness, spiritual enlightenment, and physical and mental health.

Meditation has been practiced throughout history by adherents of all the world’s religions. In Roman Catholicism, for example, meditation consists of active, voluntary, and systematic thinking about a biblical or theological topic. Mental images are cultivated and efforts are made to empathize with God or with figures from the Bible. Eastern religious practices that involve thinking in a controlled manner have been described as meditation in the West since the 19th century. The Hindu philosophical school of Yoga, for example, prescribes a highly elaborate process for the purification of body, mind, and soul. One aspect of Yoga practice, dhyana (Sanskrit: “concentrated meditation”), became the focus of the Buddhist school known as Chan in China and later as Zen in Japan. In the late 1960s the British rock group the Beatles sparked a vogue in the West for Hindu-oriented forms of meditation, and in the following decade Transcendental Meditation (TM) became the first of a variety of commercially successful South and East Asian meditative techniques imported by the West. Academic psychological studies of TM and other forms of meditation followed rapidly.

In numerous religions, spiritual purification may be sought through the verbal or mental repetition of a prescribed efficacious syllable, word, or text (e.g., the Hindu and Buddhist mantra, the Islamic dhikr, and the Eastern Christian Jesus Prayer). The focusing of attention upon a visual image (e.g., a flower or a distant mountain) is a common technique in informal contemplative practice and has been formalized in several traditions. Tibetan Buddhists, for example, regard the mandala (Sanskrit: “circle”) diagram as a collection point of universal forces, accessible to humans by meditation. Tactile and mechanical devices, such as the rosary and the prayer wheel, along with music, play a highly ritualized role in many contemplative traditions.

Most meditative practices concentrate attention in order to induce mystical experiences. Others are mindful of the mental character of all contents of consciousness and utilize this insight to detach the practitioner either from all thoughts or from a selected group of thoughts—e.g., the ego (Buddhism) or the attractiveness of sin (Christianity). Meditation may also serve as a special, potent preparation for a physically demanding or otherwise strenuous activity, as in the case of the warrior before battle or the musician before performance.

The doctrinal and experiential truths claimed by different practices of meditation are often inconsistent with each other. Hinduism, for example, asserts that the self is divine, while other traditions claim that God alone exists (Sufism), that God is immediately present to the soul (Christianity and Judaism), and that all things are empty (Mahayana Buddhism).

In the West, scientific research on meditation, beginning in the 1970s, has focused on the psychological and physical effects and alleged benefits of meditation, especially of TM. Meditative techniques used by skilled practitioners have proved to be effective in controlling pulse and respiratory rates and in alleviating symptoms of migraine headache, hypertension, and hemophilia, among other conditions.

Disenchantment with materialistic values led to an awakening of interest in Indian, Chinese, and Japanese philosophy and practice among primarily young people in many Western countries in the 1960s and ’70s. The teaching and practice of numerous techniques of meditation, most based on Asian religious traditions, became a widespread phenomenon. For example, the practice of “mindfulness meditation,” an adaptation of Buddhist techniques, was popularized in the United States beginning in the 1980s. Its medical use as an adjunct to psychotherapy was widely embraced in the late 1990s, leading to its adoption in many psychiatric facilities.

Details

Meditation is a practice in which an individual uses a technique – such as mindfulness, or focusing the mind on a particular object, thought, or activity – to train attention and awareness, and achieve a mentally clear and emotionally calm and stable state.

Meditation is practiced in numerous religious traditions. The earliest records of meditation (dhyana) are found in the Upanishads, and meditation plays a salient role in the contemplative repertoire of Hinduism, Jainism and Buddhism. Since the 19th century, Asian meditative techniques have spread to other cultures where they have also found application in non-spiritual contexts, such as business and health.

Meditation may significantly reduce stress, anxiety, depression, and pain, and enhance peace, perception, self-concept, and well-being. Research is ongoing to better understand the effects of meditation on health (psychological, neurological, and cardiovascular) and other areas.

Etymology

The English meditation is derived from Old French meditacioun, in turn from Latin meditatio, from the verb meditari, meaning "to think, contemplate, devise, ponder". In the Catholic tradition, the use of the term meditatio as part of a formal, stepwise process of meditation goes back to at least the 12th-century monk Guigo II, before which the Greek word theoria was used for the same purpose.

Apart from its historical usage, the term meditation was introduced as a translation for Eastern spiritual practices, referred to as dhyāna in Hinduism and Buddhism and which comes from the Sanskrit root dhyai, meaning to contemplate or meditate. The term "meditation" in English may also refer to practices from Islamic Sufism, or other traditions such as Jewish Kabbalah and Christian Hesychasm.

Definitions:

Difficulties in defining meditation:

No universally accepted definition

Meditation has proven difficult to define as it covers a wide range of dissimilar practices in different traditions. In popular usage, the word "meditation" and the phrase "meditative practice" are often used imprecisely to designate practices found across many cultures. These can include almost anything that is claimed to train the attention of mind or to teach calm or compassion. There remains no definition of necessary and sufficient criteria for meditation that has achieved universal or widespread acceptance within the modern scientific community. In 1971, Claudio Naranjo noted that "The word 'meditation' has been used to designate a variety of practices that differ enough from one another so that we may find trouble defining what meditation is."  A 2009 study noted a "persistent lack of consensus in the literature" and a "seeming intractability of defining meditation".

Separation of technique from tradition

Some of the difficulty in precisely defining meditation has been in recognizing the particularities of the many various traditions; and theories and practice can differ within a tradition. Taylor noted that even within a faith such as "Hindu" or "Buddhist", schools and individual teachers may teach distinct types of meditation.  Ornstein noted that "Most techniques of meditation do not exist as solitary practices but are only artificially separable from an entire system of practice and belief."  For instance, while monks meditate as part of their everyday lives, they also engage in the codified rules and live together in monasteries in specific cultural settings that go along with their meditative practices.

Dictionary definitions

Dictionaries give both the original Latin meaning of "think[ing] deeply about (something)"; as well as the popular usage of "focusing one's mind for a period of time", "the act of giving your attention to only one thing, either as a religious activity or as a way of becoming calm and relaxed", and "to engage in mental exercise (such as concentrating on one's breathing or repetition of a mantra) for the purpose of reaching a heightened level of spiritual awareness."

Scholarly definitions

In modern psychological research, meditation has been defined and characterized in various ways. Many of these emphasize the role of attention and characterize the practice of meditation as attempts to get beyond the reflexive, "discursive thinking" or "logic" mind to achieve a deeper, more devout, or more relaxed state.

Bond et al. (2009) identified criteria for defining a practice as meditation "for use in a comprehensive systematic review of the therapeutic use of meditation", using "a 5-round Delphi study with a panel of 7 experts in meditation research" who were also trained in diverse but empirically highly studied (Eastern-derived or clinical) forms of meditation:

* three main criteria ... as essential to any meditation practice: the use of a defined technique, logic relaxation, and a self-induced state/mode.

* Other criteria deemed important [but not essential] involve a state of psychophysical relaxation, the use of a self-focus skill or anchor, the presence of a state of suspension of logical thought processes, a religious/spiritual/philosophical context, or a state of mental silence.

* ... It is plausible that meditation is best thought of as a natural category of techniques best captured by 'family resemblances' ... or by the related 'prototype' model of concepts.

Several other definitions of meditation have been used by influential modern reviews of research on meditation across multiple traditions:

* Walsh & Shapiro (2006): "Meditation refers to a family of self-regulation practices that focus on training attention and awareness in order to bring mental processes under greater voluntary control and thereby foster general mental well-being and development and/or specific capacities such as calm, clarity, and concentration"
* Cahn & Polich (2006): "Meditation is used to describe practices that self-regulate the body and mind, thereby affecting mental events by engaging a specific attentional set.... regulation of attention is the central commonality across the many divergent methods"
* Jevning et al. (1992): "We define meditation... as a stylized mental technique... repetitively practiced for the purpose of attaining a subjective experience that is frequently described as very restful, silent, and of heightened alertness, often characterized as blissful"
* Goleman (1988): "the need for the meditator to retrain his attention, whether through concentration or mindfulness, is the single invariant ingredient in... every meditation system"

Classifications:

Focused and open methods

In the West, meditation techniques have often been classified in two broad categories, which in actual practice are often combined: focused (or concentrative) meditation and open monitoring (or mindfulness) meditation:

Direction of mental attention... A practitioner can focus intensively on one particular object (so-called concentrative meditation), on all mental events that enter the field of awareness (so-called mindfulness meditation), or both specific focal points and the field of awareness.

Focused methods include paying attention to the breath, to an idea or feeling (such as mettā – loving-kindness), to a kōan, or to a mantra (such as in transcendental meditation), and single point meditation. Open monitoring methods include mindfulness, shikantaza and other awareness states.

Other possible typologies

Another typology divides meditation approaches into concentrative, generative, receptive and reflective practices:

* concentrative: focused attention, including breath meditation, TM, and visualizations;
* generative: developing qualities like loving kindness and compassion;
* receptive: open monitoring;
* reflective: systematic investigation, contemplation.

The Buddhist tradition often divides meditative practice into samatha, or calm abiding, and vipassana, insight. Mindfulness of breathing, a form of focused attention, calms down the mind; this calmed mind can then investigate the nature of reality, by monitoring the fleeting and ever-changing constituents of experience, by reflective investigation, or by "turning back the radiance," focusing awareness on awareness itself and discerning the true nature of mind as awareness itself.

Matko and Sedlmeier (2019) "call into question the common division into “focused attention” and “open-monitoring” practices." They argue for "two orthogonal dimensions along which meditation techniques could be classified," namely "activation" and "amount of body orientation," proposing seven clusters of techniques: "mindful observation, body-centered meditation, visual concentration, contemplation, affect-centered meditation, mantra meditation, and meditation with movement."

Jonathan Shear argues that transcendental meditation is an "automatic self-transcending" technique, different from focused attention and open monitoring. In this kind of practice, "there is no attempt to sustain any particular condition at all. Practices of this kind, once started, are reported to automatically “transcend” their own activity and disappear, to be started up again later if appropriate." Yet, Shear also states that "automatic self-transcending" also applies to the way other techniques such as from Zen and Qigong are practiced by experienced meditators "once they had become effortless and automatic through years of practice."



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1909 2023-09-23 00:14:08

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1913) Comedian

Gist

A comedian is a person whose job is to entertain people and make them laugh, for example by telling jokes.

Summary

Comedy is a type of drama or other art form the chief object of which, according to modern notions, is to amuse. It is contrasted on the one hand with tragedy and on the other with farce, burlesque, and other forms of humorous amusement.

The classic conception of comedy, which began with Aristotle in ancient Greece of the 4th century BCE and persists through the present, holds that it is primarily concerned with humans as social beings, rather than as private persons, and that its function is frankly corrective. The comic artist’s purpose is to hold a mirror up to society to reflect its follies and vices, in the hope that they will, as a result, be mended. The 20th-century French philosopher Henri Bergson shared this view of the corrective purpose of laughter; specifically, he felt, laughter is intended to bring the comic character back into conformity with his society, whose logic and conventions he abandons when “he slackens in the attention that is due to life.”

Here comedy is considered primarily as a literary genre. The wellsprings of comedy are dealt with in the article humour. The comic impulse in the visual arts is discussed in the articles caricature and cartoon and comic strip.

Details

A comedian or comic is a person who seeks to entertain an audience by making them laugh. This might be through jokes or amusing situations, or acting foolish (as in slapstick), or employing prop comedy. A comedian who addresses an audience directly is called a stand-up comedian.

A popular saying often attributed to Ed Wynn attempts to differentiate the two terms:

"A comic says funny things; a comedian says things funny."

This draws a distinction between how much of the comedy can be attributed to verbal content and how much to acting and persona.

Since the 1980s, a new wave of comedy, called alternative comedy, has grown in popularity with its more offbeat and experimental style. This normally involves more experiential, or observational reporting (e.g., Alexei Sayle, Daniel Tosh, Malcolm Hardee). As far as content is concerned, comedians such as Tommy Tiernan, Des Bishop, Kevin Hart, and Dawn French draw on their background to poke fun at themselves, while others such as Jon Stewart, Ben Elton and Sarah Silverman have very strong political and cultural undertones.

Many comics achieve a cult following while touring famous comedy hubs such as the Just for Laughs festival in Montreal, the Edinburgh Fringe, and Melbourne Comedy Festival in Australia. Often a comic's career advances significantly when they win a notable comedy award, such as the Edinburgh Comedy Award (formerly the Perrier comedy award). Comics sometimes foray into other areas of entertainment, such as film and television, where they become more widely known (e.g., Eddie Izzard, Lee Evans). A comic's stand-up success does not always correlate to a film's critical or box-office success.

History:

Ancient Greeks

Comedians can be dated back to 425 BC, when Aristophanes, a comic author and playwright, wrote ancient comedic plays. He wrote 40 comedies, 11 of which survive and are still being performed. Aristophanes' style of comedy belonged to the Athenian tradition known as Old Comedy.

Shakespearean comedy

The English poet and playwright William Shakespeare wrote many comedies. A Shakespearean comedy is one that has a happy ending, usually involving marriages between the unmarried characters, and a tone and style that is more light-hearted than Shakespeare's other plays.

Modern era

American performance comedy has its roots in the three-act, variety-show format of 1840s minstrel shows (via blackface performances of the Jim Crow character); Frederick Douglass criticized these shows for profiting from and perpetuating racism. Minstrelsy monologists performed second-act stump-speech monologues from within minstrel shows until 1896. American stand-up also emerged in vaudeville theatre from the 1880s to the 1930s, with such comics as W. C. Fields, Buster Keaton and the Marx Brothers.

British performance comedy has its roots in the music hall theatres of the 1850s, where Charlie Chaplin, Stan Laurel, and Dan Leno first performed, mentored by comedian and theatre impresario Fred Karno, who developed a form of sketch comedy without dialogue in the 1890s and also pioneered slapstick comedy.

Media

In the modern era, as technology produced forms of mass communications media, these were adapted to entertainment and comedians adapted to the new media, sometimes switching to new forms as they were introduced.

Stand-up

Stand-up comedy is a comic monologue performed standing on a stage. Bob Hope became the most popular stand-up comedian of the 20th century in a nearly 80-year career that included numerous comedy film roles, five decades in radio and television, and entertaining armed-service troops through the USO. Other noted stand-up comedians include Lenny Bruce, Billy Connolly, George Carlin, Richard Pryor, Victoria Wood, Joan Rivers, Whoopi Goldberg and Jo Brand.

Audio recording

Some of the earliest commercial sound recordings were made by standup comedians such as Cal Stewart, who recorded collections of his humorous monologues on Edison Records as early as 1898, and other labels until his death in 1919.

Bandleader Spike Jones recorded 15 musical comedy albums satirizing popular and classical music from 1950 to his death in 1965. Tom Lehrer wrote and recorded five albums of songs satirizing political and social issues from 1953 to 1965. Musician Peter Schickele, inspired by Jones, parodied classical music with 17 albums of his music which he presented as written by "P.D.Q. Bach" (fictional son of Johann Sebastian Bach) from 1965 through 2007.

In 1968, radio surreal comedy group The Firesign Theatre revolutionized the concept of the spoken comedy album by writing and recording elaborate radio plays employing sound effects and multitrack recording, which comedian Robin Williams called "the audio equivalent of a Hieronymus Bosch painting." Comedy duo Cheech and Chong recorded comedy albums in a similar format from 1971 through 1985.

Film

Karno took Chaplin and Laurel on two trips to the United States to tour the vaudeville circuit. On the second one, they were recruited by the fledgling silent film industry. Chaplin became the most popular screen comedian of the first half of the 20th century. Chaplin and Stan Laurel were protégés of Fred Karno, the English theatre impresario of British music hall, and in his biography Laurel stated, "Fred Karno didn't teach Charlie [Chaplin] and me all we know about comedy. He just taught us most of it". Chaplin wrote films such as Modern Times and The Kid. His films still have a major impact on comedy in films today.

Laurel met Oliver Hardy in the US and teamed up as Laurel and Hardy. Keaton also started making silent comedies.

Fields appeared in Broadway musical comedies, three silent films in 1915 and 1925, and in sound films starting in 1926. The Marx brothers also made the transition to film in 1929, by way of two Broadway musicals.

Many other comedians made sound films, such as Bob Hope (both alone, and in a series of "Road to ..." comedies with partner Bing Crosby), ventriloquist Edgar Bergen, and Jerry Lewis (both with and without partner Dean Martin).

Some comedians who entered film expanded their acting skills to become dramatic actors, or started as actors specializing in comic roles, such as Dick Van Dyke, Paul Lynde, Michael Keaton, Bill Murray and Denis Leary.

Radio

Radio comedy began in the United States when Raymond Knight launched The Cuckoo Hour on NBC in 1930, along with the 1931 network debut of Stoopnagle and Budd on CBS. Most of the Hollywood comedians who did not become dramatic actors (e.g. Bergen, Fields, Groucho and Chico Marx, Red Skelton, Jack Benny, Fred Allen, Judy Canova, Hope, Martin and Lewis), transitioned to United States radio in the 1930s and 1940s.

Without a Hollywood supply of comedians to draw from, radio comedy did not begin in the United Kingdom until a generation later, with such popular 1950s shows as The Goon Show and Hancock's Half Hour. Later, radio became a proving-ground for many later United Kingdom comedians. Chris Morris began his career in 1986 at Radio Cambridgeshire, and Ricky Gervais began his comedy career in 1997 at London radio station XFM. The League of Gentlemen, Mitchell and Webb and The Mighty Boosh all transferred to television after broadcasting on BBC Radio 4.

Television

On television there are comedy talk shows where comedians make fun of current news or popular topics. Such comedians include Jay Leno, Conan O'Brien, Graham Norton, Jim Jefferies, James Corden, John Oliver, Jonathan Ross, David Letterman, and Chelsea Handler. There are sketch comedies, such as Mr. Show with Bob and David and Monty Python's Flying Circus (a BBC show that influenced Saturday Night Live), and sitcoms, such as Roseanne, Only Fools and Horses, and Not Going Out, as well as popular panel shows like The Big Fat Quiz of the Year, Have I Got News for You, and Celebrity Juice. Among the most acclaimed sitcoms are Seinfeld and The Big Bang Theory.

Internet

Comedy is increasingly enjoyed online. Comedians with popular long-running podcasts series include Kevin Smith and Joe Rogan. Comedians streaming videos of their stand-up include Bridget Christie, Louis C.K. and Daniel Kitson.

Jokes

There are many established formats for jokes. One example is the pun or double-entendre, where similar words are interchanged. The Two Ronnies often used puns and double-entendre. Stewart Francis and Tim Vine are examples of current comedians who deploy numerous puns. Jokes based on puns tend to be very quick and easy to digest, which sometimes leads to other joke forms being overlooked, for example in the Funniest Joke of the Fringe awards. Other jokes may rely on confounding an audience's expectations through a misleading setup (known as a 'pull back and reveal' in the UK and a 'leadaway' in the US). Ed Byrne is an example of a comedian who has used this technique. Some jokes are based on ad absurdum extrapolations, for example much of Richard Herring and Ross Noble's standup. In ironic humour there is an intentional mismatch between a message and the form in which it is conveyed (for example the work of Danielle Ward). Other joke forms include observation (Michael McIntyre), whimsy (David O'Doherty), self-deprecation (Robin Williams) and parody (Diane Morgan).

Personality traits

In a January 2014 study published in the British Journal of Psychiatry, scientists found that comedians tend to have high levels of psychotic personality traits. In the study, researchers analyzed 404 male and 119 female comedians from Australia, Britain, and the United States. The participants were asked to complete an online questionnaire designed to measure psychotic traits in healthy people. They found that comedians scored "significantly higher on four types of psychotic characteristics compared to a control group of people who had non-creative jobs." Gordon Claridge, a professor of experimental psychology at the University of Oxford and leader of the study, claimed, "the creative elements needed to produce humor are strikingly similar to those characterizing the cognitive style of people with psychosis—both schizophrenia and bipolar disorder." However, labeling comedians' personality traits as "psychotic" does not mean that the individual is a psychopath, since psychopathy is distinct from psychosis, and neither does it mean their behavior is necessarily pathological.

Highest-paid comedians

Forbes publishes an annual list of the most financially successful comedians in the world, similarly to their Celebrity 100 list. Their data sources include Nielsen Media Research, Pollstar, Box Office Mojo and IMDb. The list was topped by Jerry Seinfeld from 2006 until 2015, who lost the title to Kevin Hart in 2016. In that year, the eight highest paid comedians were from the United States, including Amy Schumer, who became the first woman to be listed in the top ten.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1910 2023-09-24 00:09:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1914) Cartoonist

Gist

A person who draws cartoons.

Summary

Cartoons are drawings that make a point, tell a joke, or tell a story. Cartoons can be about almost anything. Many cartoons are about the things that everyday people say and do. Others are about the news, government leaders, or historical events. Many cartoons try to make people laugh.

Types of Cartoons

Comic strips are a popular type of cartoon. A comic strip usually has four or more drawings in a row that tell a connected story. Comic strips feature a cast of characters, such as the children in the comic strip “Peanuts.”

Other types of cartoons include political cartoons, gag panels, and animated cartoons. Political cartoons show what is wrong with the government or make fun of it. They are usually single drawings, but there are some political comic strips. Gag panels are single drawings that make fun of everyday life. Animated cartoons are cartoons filmed as movies or television shows.

Cartoons may be found in newspapers, magazines, and books. Comic books and graphic novels are books filled with many comic strips or cartoons.

History

People have been using pictures to tell stories since prehistoric times. Prehistoric artists drew pictures of animals on the walls of caves. In ancient Egypt, Greece, and Rome, artists painted pictures on vases and walls. These pictures recorded historical events, the lives of important people, and legends.

From the 1500s to the 1700s people got the news through short printed works that had many pictures. Many of these pictures were early forms of political cartoons. Political cartoons became common throughout Europe and the United States during the 1800s.

During the 1900s funny gag panels and comic strips became more popular than political cartoons. Popular cartoons of the late 20th and early 21st centuries included “The Far Side,” “Calvin and Hobbes,” “Bloom County,” and “Get Fuzzy.”

Details

A cartoonist specializes in creating cartoons, which are visual representations or illustrations that often accompany written content to convey a message or tell a story. Cartoonists employ various artistic techniques, such as drawing, painting, and digital illustration, to bring their ideas to life. They use humor, satire, and exaggeration to capture the essence of their subject matter and engage viewers.

Cartoonists play a significant role in the world of entertainment, journalism, advertising, and communication. In entertainment, cartoonists create animated characters and storylines for cartoons, television shows, and movies, entertaining audiences of all ages. In journalism, editorial cartoonists use their artistic skills to provide social and political commentary through visual satire. They use symbolism, caricature, and clever wordplay to convey complex ideas in a concise and engaging manner. Cartoonists also contribute to advertising by creating memorable characters and illustrations that promote products and brands.

What does a cartoonist do?

Cartoonists are talented visual storytellers who utilize their creativity and artistic abilities to entertain, educate, and provoke thought in their audiences.

Duties and Responsibilities

The duties and responsibilities of a cartoonist can vary depending on their specific role and industry. However, here are some common duties and responsibilities associated with being a cartoonist:

* Idea Generation: Cartoonists are responsible for brainstorming and generating creative ideas for their cartoons. They need to develop concepts, characters, and storylines that effectively convey their intended message or story.
* Artistic Execution: Cartoonists bring their ideas to life through artistic execution. They use various techniques such as drawing, painting, digital illustration, or even animation to create visually appealing and engaging cartoons. They have a keen eye for detail, composition, and color to enhance the impact of their work.
* Storytelling: Cartoonists are skilled visual storytellers. They need to effectively communicate narratives, messages, or emotions through their cartoons. They often use humor, satire, and symbolism to engage viewers and convey complex ideas in a simplified and accessible manner.
* Research and Content Creation: Depending on the subject matter, cartoonists may need to conduct research to gather information and stay up to date with current events or trends. This helps them create relevant and timely cartoons that resonate with their target audience.
* Collaboration: Cartoonists often work collaboratively with writers, editors, and other artists to develop and refine their cartoons. They may receive feedback and incorporate changes to ensure the final product meets the desired objectives.
* Meeting Deadlines: Cartoonists work within deadlines, whether they are creating daily editorial cartoons, weekly comic strips, or contributing to animation projects. Meeting these deadlines requires excellent time management skills and the ability to work efficiently without compromising quality.
* Adaptability and Innovation: Cartoonists need to adapt to evolving technologies and trends in their field. They should stay open to experimenting with new styles, techniques, and tools to continually improve their work and stay relevant in the industry.

Types of Cartoonists

There are various types of cartoonists, each specializing in different areas and utilizing their skills in unique ways. Here are some common types of cartoonists:

* Editorial Cartoonist: Editorial cartoonists create cartoons that provide social and political commentary. They often work for newspapers, magazines, or online publications, and their cartoons are typically published alongside editorial articles. Editorial cartoonists use satire, caricature, and symbolism to express their opinions and provoke thought on current events, political figures, or social issues.
* Comic Strip Cartoonist: Comic strip cartoonists create serialized cartoons that appear in newspapers, magazines, or online platforms. They develop characters and storylines that unfold over a series of panels, entertaining readers with humorous or dramatic narratives. Famous examples include "Garfield," "Calvin and Hobbes," and "Peanuts."
* Animation Cartoonist: Animation cartoonists specialize in creating cartoons that come to life through movement. They work in the animation industry, whether it's for television, film, or online platforms. Animation cartoonists may be involved in various stages of the animation process, including character design, storyboarding, layout, and keyframe animation.
* Children's Book Illustrator: Some cartoonists focus on illustrating cartoons for children's books. They create colorful and engaging visuals that accompany the written text, helping to bring stories to life and capture the attention of young readers. Children's book illustrators often work closely with authors and publishers to create captivating illustrations that resonate with the target audience.
* Advertising Cartoonist: Advertising cartoonists create cartoons for advertising and marketing purposes. They develop characters, illustrations, and cartoons that promote products, services, or brands. Their work aims to capture attention, convey brand messages, and create a memorable impression on consumers.
* Gag Cartoonist: Gag cartoonists specialize in creating single-panel cartoons that deliver a punchline or humorous observation. These cartoons are often found in magazines, newspapers, greeting cards, or online platforms. Gag cartoonists rely on clever wordplay, visual puns, and situational humor to entertain viewers in a concise and witty manner.

What is the workplace of a cartoonist like?

The workplace of a cartoonist can vary depending on their specific role and work environment. For many cartoonists, their workplace is a studio or office space that they have set up to cater to their creative needs. This can be a dedicated room in their home or a separate workspace where they have all the necessary tools and materials at their disposal. Within this personal space, cartoonists can immerse themselves in their work, surrounded by drawing tablets, art supplies, reference materials, and computer software. They have the freedom to create and experiment with their artistic ideas, working at their own pace and without distractions.

In contrast, some cartoonists work in newsrooms or publication offices, especially those involved in editorial cartooning. These cartoonists collaborate closely with writers, editors, and journalists, contributing visual commentary on current events and social issues. In these bustling environments, they may have a designated workspace or a shared area where they engage in discussions, receive feedback, and brainstorm ideas with their colleagues. Being in a newsroom or publication office allows cartoonists to stay up to date with the latest news and have access to resources that aid in their research and creative process.

For cartoonists involved in animation, their workplace is often an animation studio. Here, they work alongside a team of animators, storyboard artists, and directors. The studio is equipped with advanced technology, animation software, and resources to facilitate the animation production process. Cartoonists in animation studios may have their own workstations, attend meetings and reviews, and collaborate closely with other team members to ensure the visual elements align with the overall vision of the project.

In recent years, remote work has become increasingly popular for cartoonists. With the availability of digital art tools and online collaboration platforms, many cartoonists can work from anywhere. Remote work offers flexibility and freedom, allowing cartoonists to create cartoons from the comfort of their own homes or any location of their choice. They can communicate with clients or collaborators through virtual meetings, share their work electronically, and maintain a flexible schedule.

Additional Information

A cartoonist is a visual artist who specializes in both drawing and writing cartoons (individual images) or comics (sequential images). Cartoonists differ from comics writers or comics illustrators/artists in that they produce both the literary and graphic components of the work as part of their practice.

Cartoonists may work in a variety of formats, including booklets, comic strips, comic books, editorial cartoons, graphic novels, manuals, gag cartoons, storyboards, posters, shirts, books, advertisements, greeting cards, magazines, newspapers, webcomics, and video game packaging.

Terminology

A cartoonist's discipline encompasses both authorial and drafting disciplines (see interdisciplinary arts). The terms "comics illustrator", "comics artist", or "comic book artist" refer to the picture-making portion of the discipline of cartooning.  While every "cartoonist" might be considered a "comics illustrator", "comics artist", or a "comic book artist", not every "comics illustrator", "comics artist", or a "comic book artist" is a "cartoonist".

Ambiguity might arise when illustrators and writers share each other's duties in authoring a work.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1911 2023-09-25 00:16:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1915) Physiotherapy

Gist

Physiotherapy helps to restore movement and function when someone is affected by injury, illness or disability. It can also help to reduce your risk of injury or illness in the future. It takes a holistic approach that involves the patient directly in their own care.

Summary

Physiotherapy helps to restore movement and function when someone is affected by injury, illness or disability. It can also help to reduce your risk of injury or illness in the future.

It takes a holistic approach that involves the patient directly in their own care.

When is physiotherapy used?

Physiotherapy can be helpful for people of all ages with a wide range of health conditions, including problems affecting the:

* bones, joints and soft tissue – such as back pain, neck pain, shoulder pain and sports injuries
* brain or nervous system – such as movement problems resulting from a stroke, multiple sclerosis (MS) or Parkinson's disease
* heart and circulation – such as rehabilitation after a heart attack
* lungs and breathing – such as chronic obstructive pulmonary disease (COPD) and cystic fibrosis

Physiotherapy can improve your physical activity while helping you to prevent further injuries.

Physiotherapists

Physiotherapy is provided by specially trained and regulated practitioners called physiotherapists.

Physiotherapists often work as part of a multidisciplinary team in various areas of medicine and settings, including:

* hospitals
* community health centres or clinics
* some GP surgeries
* some sports teams, clubs, charities and workplaces

Some physiotherapists can also offer home visits.

What physiotherapists do

Physiotherapists consider the body as a whole, rather than just focusing on the individual aspects of an injury or illness.

Some of the main approaches used by physiotherapists include:

* education and advice – physiotherapists can give general advice about things that affect daily life, such as posture and correct lifting or carrying techniques to help prevent injuries
* movement, tailored exercise and physical activity advice – exercises may be recommended to improve your general health and mobility, and to strengthen specific parts of your body
* manual therapy – where the physiotherapist uses their hands to help relieve pain and stiffness, and to encourage better movement of the body

There are other techniques that may sometimes be used, such as exercises carried out in water (hydrotherapy or aquatic therapy) or acupuncture.

Finding a physiotherapist

Physiotherapy is available through the NHS or privately.

You may need a referral from your GP to have physiotherapy on the NHS, although in some areas it's possible to refer yourself directly.

To find out whether self-referral is available in your area, ask the reception staff at your GP surgery or contact your local hospital trust.

Waiting lists for NHS treatment can be long and some people choose to pay for private treatment. Most private physiotherapists accept direct self-referrals.

Details

Physical therapy (PT), also known as physiotherapy, is one of the allied health professions. It is provided by physical therapists who promote, maintain, or restore health through physical examination, diagnosis, management, prognosis, patient education, physical intervention, rehabilitation, disease prevention, and health promotion. Physical therapists are known as physiotherapists in many countries.

The career has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. PTs practice in many settings, both public and private.

In addition to clinical practice, other aspects of physical therapist practice include research, education, consultation, and health administration. Physical therapy is provided as a primary care treatment or alongside, or in conjunction with, other medical services. In some jurisdictions, such as the United Kingdom, physical therapists have the authority to prescribe medication.

Overview

Physical therapy addresses the illnesses or injuries that limit a person's abilities to move and perform functional activities in their daily lives. PTs use an individual's history and physical examination to arrive at a diagnosis and establish a management plan and, when necessary, incorporate the results of laboratory and imaging studies like X-rays, CT-scan, or MRI findings. Electrodiagnostic testing (e.g., electromyograms and nerve conduction velocity testing) may also be used.

PT management commonly includes prescription of or assistance with specific exercises, manual therapy and manipulation, mechanical devices such as traction, education, electrophysical modalities (including heat, cold, electricity, sound waves, and radiation), assistive devices, prostheses, orthoses, and other interventions. In addition, PTs work with individuals to prevent the loss of mobility before it occurs by developing fitness and wellness-oriented programs for healthier and more active lifestyles, providing services to individuals and populations to develop, maintain and restore maximum movement and functional ability throughout the lifespan. This includes providing treatment in circumstances where movement and function are threatened by aging, injury, disease, or environmental factors. Functional movement is central to what it means to be healthy.

Physical therapy is a professional career which has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. Neurological rehabilitation is, in particular, a rapidly emerging field. PTs practice in many settings, such as privately owned physical therapy clinics, outpatient clinics or offices, health and wellness clinics, rehabilitation hospitals, skilled nursing facilities, extended care facilities, private homes, education and research centers, schools, hospices, industrial workplaces and other occupational environments, fitness centers, and sports training facilities.

Physical therapists also practice in the non-patient care roles such as health policy, health insurance, health care administration and as health care executives. Physical therapists are involved in the medical-legal field serving as experts, performing peer review and independent medical examinations.

Education varies greatly by country. The span of education ranges from some countries having little formal education to others having doctoral degrees and post-doctoral residencies and fellowships.

History

Physicians like Hippocrates and later Galen are believed to have been the first practitioners of physical therapy, advocating massage, manual therapy techniques, and hydrotherapy to treat people as early as 460 BC. After the development of orthopedics in the eighteenth century, machines like the Gymnasticon were developed to treat gout and similar diseases by systematic exercise of the joints, similar to later developments in physical therapy.

The earliest documented origins of actual physical therapy as a professional group date back to Per Henrik Ling, "Father of Swedish Gymnastics," who founded the Royal Central Institute of Gymnastics (RCIG) in 1813 for manipulation and exercise. Up until 2014, the Swedish word for a physical therapist was sjukgymnast (someone involved in gymnastics for those who are ill), but the title was then changed to fysioterapeut (physiotherapist), the word used in the other Scandinavian countries. In 1887, PTs were given official registration by Sweden's National Board of Health and Welfare. Other countries soon followed: in 1894, four nurses in Great Britain formed the Chartered Society of Physiotherapy; the School of Physiotherapy at the University of Otago in New Zealand was founded in 1913; and in 1914 Reed College in Portland, Oregon, began graduating "reconstruction aides." Since the profession's inception, spinal manipulative therapy has been a component of the physical therapist practice.

Modern physical therapy was established towards the end of the 19th century, driven by events with global impact that called for rapid advances in the field. Soon afterward, American orthopedic surgeons began treating children with disabilities and employing women trained in physical education and remedial exercise. These treatments were applied and promoted further during the polio outbreak of 1916. During the First World War, women were recruited to work with and restore physical function to injured soldiers, and the field of physical therapy was institutionalized. In 1918 the term "Reconstruction Aide" was used to refer to individuals practicing physical therapy. The first school of physical therapy was established at Walter Reed Army Hospital in Washington, D.C., following the outbreak of World War I.

Research catalyzed the physical therapy movement. The first physical therapy research was published in the United States in March 1921 in "The PT Review." In the same year, Mary McMillan organized the American Women's Physical Therapeutic Association (now called the American Physical Therapy Association (APTA)). In 1924, the Georgia Warm Springs Foundation promoted the field by touting physical therapy as a treatment for polio. Treatment through the 1940s primarily consisted of exercise, massage, and traction. Manipulative procedures to the spine and extremity joints began to be practiced, especially in the British Commonwealth countries, in the early 1950s.

Around the time that polio vaccines were developed, physical therapists became a normal occurrence in hospitals throughout North America and Europe. In the late 1950s, physical therapists started to move beyond hospital-based practice to outpatient orthopedic clinics, public schools, college and university health centres, geriatric settings (skilled nursing facilities), rehabilitation centers, and medical centers. Specialization in physical therapy in the U.S. occurred in 1974, with the Orthopaedic Section of the APTA being formed for those physical therapists specializing in orthopedics. In the same year, the International Federation of Orthopaedic Manipulative Physical Therapists was formed, which has ever since played an important role in advancing manual therapy worldwide.

Additional Information

Physical therapy, also called physiotherapy, is a health profession that aims to improve movement and mobility in persons with compromised physical functioning. Professionals in the field are known as physical therapists.

History of physical therapy

Although the use of exercise as part of a healthy lifestyle is ancient in its origins, modern physical therapy appears to have originated in the 19th century with the promotion of massage and manual muscle therapy in Europe. In the early 20th century, approaches in physical therapy were used in the United States to evaluate muscle function in those affected by polio. Physical therapists developed programs to strengthen muscles when possible and helped polio patients learn how to use their remaining musculature to accomplish functional mobility activities. About the same time, physical therapists in the United States were also trained to work with soldiers returning from World War I; these therapists were known as “reconstruction aides.” Some worked in hospitals close to the battlefields in France to begin early rehabilitation of wounded soldiers. Typical patients were those with amputated limbs, head injuries, and spinal cord injuries. Physical therapists later practiced in a wide variety of settings, including private practices, hospitals, rehabilitation centres, nursing homes, public schools, and home health agencies. In each of those settings, therapists work with other members of the health care team toward common goals for the patient.

Patients of physical therapy

Often, persons who undergo physical therapy have experienced a decrease in quality of life as a result of physical impairments or functional limitations caused by disease or injury. Individuals who often are in need of physical therapy include those with back pain, elderly persons with arthritis or balance problems, injured athletes, infants with developmental disabilities, and persons who have had severe burns, strokes, or spinal cord injuries. Persons whose endurance for movement is affected by heart or lung problems or other illnesses are also helped by exercise and education to build activity tolerance and improve muscle strength and efficiency of movement during functional activities. Individuals with limb deficiencies are taught to use prosthetic replacement devices.

Patient management

Physical therapists complete an examination of the individual and work with him or her to determine goals that can be achieved primarily through exercise prescription and functional training to improve movement. Education is a key component of patient management. Adults with impairments and functional limitations can be taught to recover or improve movements impaired by disease and injury and to prevent injury and disability caused by abnormal posture and movement. Infants born with developmental disabilities are helped to learn movements they have never done before, with an emphasis on functional mobility for satisfying participation in family and community activities. Some problems, such as pain, may be addressed with treatments, including mobilization of soft tissues and joints, electrotherapy, and other physical agents.

Progress in physical therapy

New areas of practice are continually developing in the field of physical therapy. The scope of practice of a growing specialty in women’s health, for example, includes concerns such as incontinence, pelvic/vaginal pain, prenatal and postpartum musculoskeletal pain, osteoporosis, rehabilitation following breast surgery, and lymphedema (accumulation of fluids in soft tissues). Women across the life span, from the young athlete to the childbearing, menopausal, or elderly woman, can benefit from physical therapy. Education for prevention, wellness, and exercise is another important area in addressing physical health for both men and women.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1912 2023-09-26 00:06:27

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1916) Transcendental Meditation

Summary

Transcendental meditation involves repeating a mantra silently for 15-20 minutes (or longer) in a quiet, dimly-lit room with no distractions or music. Before starting, make sure to turn off your phone or put it on silent, and take steps to make sure that you won’t be disturbed by family members or pets during your meditation session. You can also light candles or burn incense to make your meditation space more comfortable.

Then follow these steps:

* Sit comfortably in a chair or on the floor with your hands on your lap.
* Close your eyes for a few seconds to a minute, take a few deep breaths, and relax your body. Your eyes should remain closed during the 15- to 20-minute session.
* Silently repeat a mantra in your mind. This could be a Sanskrit sound you learned from a meditation teacher or a word or phrase of your choice.
* Focus on the mantra completely. If you feel yourself getting distracted, refocus your thoughts on the mantra.

After the session, open your eyes. Sit for a few more minutes until you feel ready to get up.

Many people recommend meditating at least once a day, although you may want to perform a session whenever you are feeling stressed throughout the day.

Details

Transcendental Meditation (TM) is a form of silent mantra meditation developed by Maharishi Mahesh Yogi, with roots in Hindu practice. The TM technique involves the silent repetition of a sound called a mantra and is practiced for 15–20 minutes twice per day. It is taught by certified teachers through a standard course of instruction, which costs a fee that varies by country. According to the Transcendental Meditation movement, it is a non-religious method that promotes relaxed awareness, stress relief, self-development, and higher states of consciousness. The technique has been variously described as both religious and non-religious.

Maharishi began teaching the technique in India in the mid-1950s. Building on the teachings of his master Brahmananda Saraswati (known honorifically as Guru Dev), the Maharishi taught thousands of people during a series of world tours from 1958 to 1965, expressing his teachings in spiritual and religious terms. TM became more popular in the 1960s and 1970s, as the Maharishi shifted to a more technical presentation, and his meditation technique was practiced by celebrities, most prominently members of the Beatles and the Beach Boys. At this time, he began training TM teachers. The worldwide TM organization had grown to include educational programs, health products, and related services. Following the Maharishi's death in 2008, leadership of the TM organization passed to neuroscientist Tony Nader.

Research on TM began in the 1970s. A 2017 overview of systematic reviews and meta-analyses indicates that TM practice may lower blood pressure, an effect comparable with that of other health interventions; because of a potential for bias and conflicting findings, more research is needed. A 2012 meta-analysis on the psychological impact of meditation found that Transcendental Meditation had a similar overall effectiveness to other meditation techniques in improving general psychological variables.

History

The Transcendental Meditation program and the Transcendental Meditation movement originated with their founder Maharishi Mahesh Yogi and continued beyond his death in 2008. In 1955, "the Maharishi began publicly teaching a traditional meditation technique" learned from his master Brahmananda Saraswati that he called Transcendental Deep Meditation and later renamed Transcendental Meditation. The Maharishi initiated thousands of people, then developed a TM teacher training program as a way to accelerate the rate of bringing the technique to more people. He also inaugurated a series of tours that started in India in 1955 and went international in 1958 which promoted Transcendental Meditation. These factors, coupled with endorsements by celebrities who practiced TM and claims that scientific research had validated the technique, helped to popularize TM in the 1960s and 1970s. By the late 2000s, TM had been taught to millions of individuals and the Maharishi was overseeing a large multinational movement. Despite organizational changes and the addition of advanced meditative techniques in the 1970s, the Transcendental Meditation technique has remained relatively unchanged.

Among the first organizations to promote TM were the Spiritual Regeneration Movement and the International Meditation Society. In modern times, the movement has grown to encompass schools and universities that teach the practice, and includes many associated programs based on the Maharishi's interpretation of the Vedic traditions. In the U.S., non-profit organizations included the Students International Meditation Society, AFSCI, World Plan Executive Council, Maharishi Vedic Education Development Corporation, Global Country of World Peace, Transcendental Meditation for Women, and Maharishi Foundation. The successor to Maharishi Mahesh Yogi, and leader of the Global Country of World Peace, is Tony Nader.

Technique

The meditation practice involves the use of a silently-used mantra for 15–20 minutes twice per day while sitting with the eyes closed. It is reported to be one of the most widely practiced, and among the most widely researched, meditation techniques, with hundreds of published research studies. The technique is made available worldwide by certified TM teachers in a seven-step course, and fees vary from country to country. Beginning in 1965, the Transcendental Meditation technique has been incorporated into selected schools, universities, corporations, and prison programs in the US, Latin America, Europe, and India. In 1977 a US district court ruled that a curriculum in TM and the Science of Creative Intelligence (SCI) being taught in some New Jersey schools was religious in nature and in violation of the First Amendment of the United States Constitution. The technique has since been included in a number of educational and social programs around the world.

The Transcendental Meditation technique has been described as both religious and non-religious, as an aspect of a new religious movement, as rooted in Hinduism, and as a non-religious practice for self-development. The public presentation of the TM technique over its 50-year history has been praised for its high visibility in the mass media and effective global propagation, and criticized for using celebrity and scientific endorsements as a marketing tool. Also, advanced courses supplement the TM technique and include an advanced meditation program called the TM-Sidhi program.

Movement

The Transcendental Meditation movement consists of the programs and organizations connected with the Transcendental Meditation technique and founded by Maharishi Mahesh Yogi. Transcendental Meditation was first taught in the 1950s in India and has continued since the Maharishi's death in 2008. The organization was estimated to have 900,000 participants worldwide in 1977, a million by the 1980s, and 5 million in more recent years.

Programs include the Transcendental Meditation technique, an advanced meditation practice called the TM-Sidhi program ("Yogic Flying"), an alternative health care program called Maharishi Ayurveda, and a system of building and architecture called Maharishi Sthapatya Ved. The TM movement's past and present media endeavors include a publishing company (MUM Press), a television station (KSCI), a radio station (KHOE), and a satellite television channel (Maharishi Channel). During its 50-year history, its products and services have been offered through a variety of organizations, which are primarily nonprofit and educational. These include the Spiritual Regeneration Movement, the International Meditation Society, World Plan Executive Council, Maharishi Vedic Education Development Corporation, Transcendental Meditation for Women, the Global Country of World Peace, and the David Lynch Foundation.

The TM movement also operates a worldwide network of Transcendental Meditation teaching centers, schools, universities, health centers, and herbal supplement, solar panel, and home financing companies, plus several TM-centered communities. The global organization is reported to have an estimated net worth of USD 3.5 billion. The TM movement has been characterized in a variety of ways: as a spiritual movement, a new religious movement, a millenarian movement, a world-affirming movement, a new social movement, a guru-centered movement, a personal growth movement, a religion, and a cult. Other sources contend that TM and its movement are not a cult. Participants in TM programs are not required to adopt a belief system; the technique is practiced by atheists, agnostics and people from a variety of religious affiliations. The organization has been the subject of controversy, including being labelled a cult by several parliamentary inquiries and anti-cult movements around the world.

Some notable figures in pop culture who practice TM include The Beatles, The Beach Boys, Kendall Jenner, Hugh Jackman, Tom Hanks, Jennifer Lopez, Mick Jagger, Eva Mendes, Moby, David Lynch, Jennifer Aniston, Nicole Kidman, Eric André, Jerry Seinfeld, Howard Stern, Julia Fox, Clint Eastwood, Martin Scorsese, Russell Brand, Nick Cave and Oprah Winfrey.

Additional Information

Transcendental Meditation, also called TM, is a technique of meditation in which practitioners mentally repeat a special Sanskrit word or phrase (mantra) with the aim of achieving a state of inner peacefulness and bodily calm. The technique was taught by the Hindu monk Swami Brahmananda Saraswati, also known as Guru Dev (died 1953), and was promoted internationally from the late 1950s by one of his disciples, the Maharishi Mahesh Yogi (1917?–2008), through the latter’s Spiritual Regeneration Movement. The Maharishi coined the term Transcendental Meditation to distinguish the technique from other meditative practices and to emphasize its independence from Hinduism (indeed from any religion). In the West, Transcendental Meditation eventually came to be taught and practiced as a secular path toward mental, emotional, and physical well-being. The popularity of Transcendental Meditation in the West increased significantly in the late 1960s, when the British rock group the Beatles and other celebrities joined the Maharishi’s following and began to meditate.

Through the repetition of a mantra, the practitioner of Transcendental Meditation aims to still the activity of thought and to experience a deep state of relaxation, which is said to lead to enhanced contentment, vitality, and creativity. The theoretical perspective behind Transcendental Meditation, called the Science of Creative Intelligence, is based on Vedanta philosophy, though practitioners do not need to subscribe to the philosophy in order to use the technique successfully.

To practice Transcendental Meditation, a person must first be initiated by a teacher. This involves sessions of formal instruction followed by a brief ceremony in which the person receives a mantra, which is selected by the teacher on the basis of the person’s temperament and occupation. There are three subsequent “checking” sessions, in which the person meditates under the teacher’s observation. The person then begins meditating independently twice a day for periods of 20 minutes each and continues to do so indefinitely. Further levels of training are available.

Many physiologists and psychologists have claimed, and many scientific studies have suggested, that Transcendental Meditation relaxes and vitalizes both the body and the mind, including by reducing stress and anxiety, lowering blood pressure (hypertension), enhancing creativity and other intellectual abilities, and relieving depression. However, other researchers have questioned the validity of such studies, asserting that they were poorly designed.

The early 1970s was a period of rapid growth in the popularity of Transcendental Meditation. The Maharishi founded a university in 1971. In 1975–76 a high-school course that incorporated the technique, “The Science of Creative Intelligence–Transcendental Meditation,” was introduced into five public schools in New Jersey. In 1977 a federal district court ruled that the course and its textbook were based on religious concepts, in violation of the establishment clause of the First Amendment, and consequently enjoined the teaching of the course. The decision was affirmed by a federal appeals court in 1979.

In 1972 the Maharishi announced his “world plan” for a new human future, which became the foundation for the World Plan Executive Council, the international organization that guided the spread of Transcendental Meditation worldwide. Each of the council’s divisions attempted to introduce meditation into a particular area of human life. In the mid-1970s the council introduced the siddha (“miraculous powers”) program, an advanced course that promised to teach students various supernormal abilities, especially levitation, a claim challenged by critics.

In 1987 a former instructor of Transcendental Meditation successfully sued the World Plan Executive Council–United States (since renamed Maharishi Foundation USA), a nonprofit organization that oversaw teaching of Transcendental Meditation in the country, alleging that the program had failed to deliver on its promises. However, the plaintiff’s claim of negligent infliction of psychological injury was dismissed on appeal, and his claims of physical injury and fraud were remanded for a new trial and eventually settled out of court.

During the 1990s the movement placed particular emphasis on disseminating Ayurveda, the traditional system of Indian medicine, in the West. By the early 21st century some six million people worldwide had taken classes in the meditation technique, but the number of formal members of Transcendental Meditation organizations and institutions, which continued to be led by the Maharishi until his death, was uncertain.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1913 2023-09-27 00:02:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1917) Dentures

Gist

Dentures are removable false teeth made of acrylic (plastic), nylon or metal. They fit snugly over the gums to replace missing teeth and eliminate potential problems caused by gaps.

Summary

Dentures (also known as false teeth) are prosthetic devices constructed to replace missing teeth, supported by the surrounding soft and hard tissues of the oral cavity. Conventional dentures are removable (removable partial denture or complete denture). However, there are many denture designs, some of which rely on bonding or clasping onto teeth or dental implants (fixed prosthodontics). There are two main categories of dentures, the distinction being whether they fit onto the mandibular arch or the maxillary arch.

Medical uses

Dentures can help people via:

* Mastication: chewing ability is improved by the replacement of edentulous (lacking teeth) areas with denture teeth.
* Aesthetics: the presence of teeth gives a natural appearance to the face, and wearing a denture to replace missing teeth provides support for the lips and cheeks and corrects the collapsed appearance that results from the loss of teeth.
* Pronunciation: replacing missing teeth, especially the anteriors, enables patients to speak better, enunciating sibilants and fricatives more easily.
* Self-esteem: improved looks and speech boost confidence in patients' ability to interact socially.

Complications:

Stomatitis

Denture stomatitis is an inflammatory condition of the mucosa under the dentures. It can affect both partial and complete denture wearers, and is most commonly seen on the palatal mucosa. Clinically, it appears as simple localized inflammation (Type I), generalized erythema covering the denture-bearing area (Type II) or inflammatory papillary hyperplasia (Type III). People with denture stomatitis are more likely to have angular cheilitis. Denture stomatitis is caused by a mixed infection of Candida albicans (90%) and a number of bacteria such as Staphylococcus, Streptococcus, Fusobacterium and Bacteroides species. Acrylic resin is more susceptible to fungal colonization, adherence and proliferation. Denture trauma, poor denture hygiene and nocturnal denture wear are local risk factors for denture stomatitis; systemic risk factors include nutritional deficiencies, immunosuppression, smoking, diabetes, use of steroid inhalers and xerostomia. The person should be investigated for any underlying systemic disease, and the fit of ill-fitting dentures should be improved to eliminate denture trauma. Stressing the importance of good denture hygiene (cleaning the denture, soaking it in a disinfectant solution and not wearing it during sleep at night) is the key to treating all types of denture stomatitis. Topical or systemic antifungal agents can be used to treat cases that fail to respond to these local conservative measures.

Ulceration

Mouth ulceration is the most common lesion in people with dentures. It can be caused by repetitive minor trauma, such as poorly fitting dentures or over-extension of a denture. Pressure-indicating paste can be used to check the fit of dentures: it allows areas of premature contact to be distinguished from areas of physiologic tissue contact, so that the offending area can be polished with an acrylic bur. Leaching of residual methyl methacrylate monomer from inadequately cured denture acrylic resin can also cause mucosal irritation and hence oral ulceration. Patients are advised to use warm salt water mouth rinses, and a betamethasone rinse can help the ulcer heal. Review of any oral ulceration persisting for more than 3 weeks is recommended.

Details

Dentures are removable false teeth made of acrylic (plastic), nylon or metal. They fit snugly over the gums to replace missing teeth and eliminate potential problems caused by gaps.

Gaps left by missing teeth can cause problems with eating and speech, and teeth either side of the gap may grow into the space at an angle.

Sometimes all the teeth need to be removed and replaced.

You may therefore need either:

* complete dentures (a full set) – which replace all your upper or lower teeth, or
* partial dentures – which replace just 1 tooth or a few missing teeth

Dentures may help prevent problems with eating and speech. If you need complete dentures, they may also improve the appearance of your smile and give you confidence.

It's also possible that dentures might not give you the result you hope for. Discuss plans openly with your dentist before you agree to go ahead.

How dentures are fitted:

Complete dentures

A full denture will be fitted if all your upper or lower teeth need to be removed or you're having an old complete denture replaced.

The denture will usually be fitted as soon as your teeth are removed, which means you won't be without teeth. The denture will fit snugly over your gums and jawbone.

But if you have dentures fitted immediately after the removal of several teeth, the gums and bone will alter in shape fairly quickly and the dentures will probably need relining or remaking after a few months.

Occasionally, your gums may need to be left to heal and alter in shape for several months before dentures can be fitted.

You can either see a dentist or a qualified clinical dental technician to have your dentures made and fitted.

The difference between them is that a:

* dentist will take measurements and impressions (moulds) of your mouth, and then order your full or partial dentures from a dental technician
* clinical dental technician will provide a full set of dentures directly without you having to see your dentist (although you should still have regular dental check-ups with your dentist)

A trial denture will be created from the impressions taken of your mouth.

The dentist or clinical dental technician will try this in your mouth to assess the fit and for you to assess the appearance.

The shape and colour may be adjusted before the final denture is produced.

Partial dentures

A partial denture is designed to fill in the gaps left by one or more missing teeth. It's a plastic, nylon or metal plate with a number of false teeth attached to it.

It usually clips onto some of your natural teeth via metal clasps, which hold it securely in place in your mouth. It can easily be unclipped and removed.

Occasionally, the clips can be made of a tooth- or gum-coloured material, although this type of clip isn't always suitable because it tends to be more brittle than metal.

Your dentist can measure your mouth and order a partial denture for you, or you can see a qualified clinical dental technician, who can provide a partial denture for you directly after you have first seen your dentist for a treatment plan and certificate of oral health.

The Oral Health Foundation website has more information and advice about bridges and partial dentures, including which type of denture (metal or plastic) is best for you.

A fixed bridge is an alternative to a partial denture and may be suitable for some people.

Crowns are put on the teeth either side of the gap and joined together by a false tooth that's put in the gap.

Looking after your dentures

Dentures may feel a bit strange to begin with, but you'll soon get used to wearing them.

At first, you may need to wear your dentures all the time, including while sleeping.

Your dentist or clinical dental technician will advise you on whether you should remove your dentures before you go to sleep.

It isn't always necessary to remove your dentures at night, but doing so can allow your gums to rest as you sleep.

If you remove your dentures, they should be kept moist – for example, in water or a polythene bag with some dampened cotton wool in it, or in a suitable overnight denture-cleaning solution.

This will stop the denture material drying out and changing shape.

Dental hygiene

Keeping your mouth clean is just as important when you wear dentures.

You should brush your remaining teeth, gums and tongue every morning and evening with fluoride toothpaste to prevent tooth decay, gum disease and other dental problems.

Cleaning dentures

It's important to regularly remove plaque and food deposits from your dentures.

This is because unclean dentures can also lead to problems, such as bad breath, gum disease, tooth decay and oral thrush.

Clean your dentures as often as you would normal teeth (at least twice a day: every morning and night).

You should:

* brush your dentures with toothpaste or soap and water before soaking them to remove food particles
* soak them in a fizzy solution of denture-cleaning tablets to remove stains and bacteria (follow the manufacturer's instructions)
* brush them again as you would your normal teeth (but don't scrub them too hard)

Dentures may break if you drop them, so you should clean them over a bowl or sink filled with water, or something soft like a folded towel.

Eating with dentures

When you first start wearing dentures, you should eat soft foods cut into small pieces and chew slowly, using both sides of your mouth.

Avoid chewing gum and any food that's sticky, hard or has sharp edges.

You can gradually start to eat other types of food until you're back to your old diet. Never use toothpicks.

Denture adhesive

If your dentures fit properly, you shouldn't necessarily need to use denture fixative (adhesive).

But if your jawbone has shrunk significantly, adhesive may be the only way to help retain your dentures.

Your dentist or clinical dental technician will advise you if this is the case.

At first, some people feel more confident with their dentures if they use adhesive. Follow the manufacturer's instructions and avoid using excessive amounts.

Adhesive can be removed from the denture by brushing with soap and water.

Remnants of adhesive left in the mouth may need to be removed with some damp kitchen roll or a clean damp flannel.

When to see your dentist

You should continue to see your dentist regularly if you have dentures (even if you have complete dentures) so they can check for any problems.

Your dentures should last several years if you take good care of them.

But your gums and jawbone will eventually shrink, which means the dentures may not fit as well as they used to and can become loose, or they may become worn.

See your dentist as soon as possible if:

* your dentures click when you're talking
* your dentures tend to slip, or you feel they no longer fit properly
* your dentures feel uncomfortable
* your dentures are visibly worn
* you have signs of gum disease or tooth decay, such as bleeding gums or bad breath

If poorly fitting or worn dentures aren't replaced, they can cause great discomfort and lead to mouth sores, infections or problems eating and speaking.

Additional Information

A denture is an artificial replacement for one or more missing teeth and adjacent gum tissues. A complete denture replaces all the teeth of the upper or lower jaw. Partial dentures are commonly used to replace a single tooth or two or more adjacent teeth. The partial appliance may be removable or fixed; it usually relies on remaining teeth for stability.

Improved stability is sometimes obtained with overdentures, appliances that use remaining teeth and roots for support. An added advantage of overdentures is that the remaining roots help preserve the alveolar bone—the part of the jawbone that holds the teeth—in turn preserving important bone, nerve, and tissue that tend to degenerate in the presence of complete, full-mouth dentures.

A two-step system involving the surgical implantation of titanium fixtures—titanium bonds to human bone—and the later attachment of replacement teeth has also been developed. This method was particularly successful with individuals unable to wear dentures because of resorption (shrinkage of the jawbone).



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1914 2023-09-28 00:14:35

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1918) Lock and Key

Summary

A lock is a mechanical or electronic fastening device that is released by a physical object (such as a key, keycard, fingerprint, RFID card, security token or coin), by supplying secret information (such as a number or letter permutation or password), by a combination thereof, or it may only be able to be opened from one side, such as a door chain.

A key is a device that is used to operate a lock (to lock or unlock it). A typical key is a small piece of metal consisting of two parts: the bit or blade, which slides into the keyway of the lock and distinguishes between different keys, and the bow, which is left protruding so that torque can be applied by the user. In its simplest implementation, a key operates one lock or set of locks that are keyed alike, a lock/key system where each similarly keyed lock requires the same, unique key.

The key serves as a security token for access to the locked area; locks are meant to allow only persons with the correct key to open them and gain access. In more complex mechanical lock/key systems, two different keys, one of which is known as the master key, serve to open the lock. Common key metals include brass, plated brass, nickel silver, and steel.

History:

Premodern history

(Image caption: medieval Gothic lock, 15th–16th centuries, iron, Metropolitan Museum of Art, New York City)

Locks have been in use for over 6000 years, with one early example discovered in the ruins of Nineveh, the capital of ancient Assyria. Locks such as this were developed into the Egyptian wooden pin lock, which consisted of a bolt, door fixture or attachment, and key. When the key was inserted, pins within the fixture were lifted out of drilled holes within the bolt, allowing it to move. When the key was removed, the pins fell part-way into the bolt, preventing movement.

The warded lock was also present from antiquity and remains the most recognizable lock and key design in the Western world. The first all-metal locks appeared between the years 870 and 900, and are attributed to English craftsmen. It is also said that the key was invented by Theodorus of Samos in the 6th century BC.

The Romans invented metal locks and keys and the system of security provided by wards.

Affluent Romans often kept their valuables in secure locked boxes within their households, and wore the keys as rings on their fingers. The practice had two benefits: It kept the key handy at all times, while signaling that the wearer was wealthy and important enough to have money and jewellery worth securing.

A special type of lock, dating to the 17th or 18th century (although potentially older, as similar locks date back to the 14th century), can be found in the Beguinage of the Belgian city of Lier. These are most likely Gothic locks, decorated with foliage, often in a V-shape surrounding the keyhole. They are often called drunk man's locks, although the reference to drunkenness may be erroneous: according to certain sources they were designed so that a person could still find the keyhole in the dark, though the ornaments might equally have been purely aesthetic. In more recent times similar locks have been designed.

Modern locks

With the onset of the Industrial Revolution in the late 18th century and the concomitant development of precision engineering and component standardization, locks and keys were manufactured with increasing complexity and sophistication.

The lever tumbler lock, which uses a set of levers to prevent the bolt from moving in the lock, was invented by Robert Barron in 1778. His double acting lever lock required the lever to be lifted to a certain height by having a slot cut in the lever, so lifting the lever too far was as bad as not lifting the lever far enough. This type of lock is still used today.

The lever tumbler lock was greatly improved by Jeremiah Chubb in 1818. A burglary in Portsmouth Dockyard prompted the British Government to announce a competition to produce a lock that could be opened only with its own key. Chubb developed the Chubb detector lock, which incorporated an integral security feature that could frustrate unauthorized access attempts and would indicate to the lock's owner if it had been interfered with. Chubb was awarded £100 after a trained lock-picker failed to break the lock after 3 months.

In 1820, Jeremiah joined his brother Charles in starting their own lock company, Chubb. Chubb made various improvements to his lock: his 1824 improved design did not require a special regulator key to reset the lock; by 1847 his keys used six levers rather than four; and he later introduced a disc that allowed the key to pass but narrowed the field of view, hiding the levers from anybody attempting to pick the lock. The Chubb brothers also received a patent for the first burglar-resisting safe and began production in 1835.

The designs of Barron and Chubb were based on the use of movable levers, but Joseph Bramah, a prolific inventor, developed an alternative method in 1784. His lock used a cylindrical key with precise notches along the surface; these moved the metal slides that impeded the turning of the bolt into an exact alignment, allowing the lock to open. The lock was at the limits of the precision manufacturing capabilities of the time and was said by its inventor to be unpickable. In the same year Bramah started the Bramah Locks company at 124 Piccadilly, and displayed the "Challenge Lock" in the window of his shop from 1790, challenging "...the artist who can make an instrument that will pick or open this lock" for the reward of £200. The challenge stood for over 67 years until, at the Great Exhibition of 1851, the American locksmith Alfred Charles Hobbs was able to open the lock and, following some argument about the circumstances under which he had opened it, was awarded the prize. Hobbs' attempt required some 51 hours, spread over 16 days.

The earliest patent for a double-acting pin tumbler lock was granted to American physician Abraham O. Stansbury in England in 1805, but the modern version, still in use today, was invented by American Linus Yale Sr. in 1848. This lock design used pins of varying lengths to prevent the lock from opening without the correct key. In 1861, Linus Yale Jr., inspired by the original 1840s pin-tumbler lock designed by his father, invented and patented a smaller flat key with serrated edges, together with pins of varying lengths within the lock itself; this is the same pin-tumbler design that remains in use today. The modern Yale lock is essentially a more developed version of the Egyptian lock.
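
The pin-tumbler principle, from the Egyptian falling-pin lock to the Yale cylinder, can be illustrated with a short Python sketch. This is only a toy model with invented numbers (the function name, pin heights and key cuts are all hypothetical): the plug turns only if every cut on the key lifts its pin stack exactly to the shear line.

    # Toy model of a pin-tumbler lock. Heights and cuts are invented
    # illustration values, not a real lock specification.
    def opens(pin_heights, key_cuts):
        # A key cut to the wrong blank will not even enter the keyway.
        if len(pin_heights) != len(key_cuts):
            return False
        # The plug rotates only when every stack sits exactly at the shear line.
        return all(pin == cut for pin, cut in zip(pin_heights, key_cuts))

    lock = [3, 1, 4, 1, 5]               # required lift for five pin stacks
    print(opens(lock, [3, 1, 4, 1, 5]))  # True: plug turns, bolt withdraws
    print(opens(lock, [3, 1, 4, 2, 5]))  # False: fourth pin blocks the shear line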

Despite some improvement in key design since, the majority of locks today are still variants of the designs invented by Bramah, Chubb and Yale.

Details

A lock is a mechanical device for securing a door or receptacle so that it cannot be opened except by a key or by a series of manipulations that can be carried out only by a person knowing the secret or code.

Early history.

The lock originated in the Near East; the oldest known example was found in the ruins of the palace of Khorsabad near Nineveh. Possibly 4,000 years old, it is of the type known as a pin tumbler or, from its widespread use in Egypt, an Egyptian lock. It consists of a large wooden bolt, which secures the door, through which is pierced a slot with several holes in its upper surface. An assembly attached to the door contains several wooden pins positioned to drop into these holes and grip the bolt. The key is a large wooden bar, something like a toothbrush in shape; instead of bristles it has upright pegs that match the holes and the pins. Inserted in the large keyhole below the vertical pins it is simply lifted, raising the pins clear and allowing the bolt, with the key in it, to be slid back (Figure 1). Locks of this type have been found in Japan, Norway, and the Faeroe Islands and are still in use in Egypt, India, and Zanzibar. An Old Testament reference, in Isaiah, “And I will place on his shoulder the key of the house of David,” shows how the keys were carried. The falling-pin principle, a basic feature of many locks, was developed to the full in the modern Yale lock (Figure 2).

In a much more primitive device used by the Greeks, the bolt was moved by a sickle-shaped key of iron, often with an elaborately carved wooden handle. The key was passed through a hole in the door and turned, the point of the sickle engaging the bolt and drawing it back. Such a device could give but little security. The Romans introduced metal for locks, usually iron for the lock itself and often bronze for the key (with the result that keys are found more often today than locks). The Romans invented wards—i.e., projections around the keyhole, inside the lock, which prevent the key from being rotated unless the flat face of the key (its bit) has slots cut in it in such a fashion that the projections pass through the slots. For centuries locks depended on the use of wards for security, and enormous ingenuity was employed in designing them and in cutting the keys so as to make the lock secure against any but the right key (Figure 3). Such warded locks have always been comparatively easy to pick, since instruments can be made that clear the projections, no matter how complex. The Romans were the first to make small keys for locks—some so small that they could be worn on the fingers as rings. They also invented the padlock, which is found throughout the Near and Far East, where it was probably independently invented by the Chinese.

In the Middle Ages, great skill and a high degree of workmanship were employed in making metal locks, especially by the German metalworkers of Nürnberg. The moving parts of the locks were closely fitted and finished, and the exteriors were lavishly decorated. Even the keys were often virtual works of art. The security, however, was solely dependent on elaborate warding, the mechanism of the lock being developed hardly at all. One refinement was to conceal the keyhole by secret shutters, another was to provide blind keyholes, which forced the lock picker to waste time and effort. The 18th-century French excelled in making beautiful and intricate locks.

Development of modern types.

The first serious attempt to improve the security of the lock was made in 1778 when Robert Barron, in England, patented a double-acting tumbler lock. A tumbler is a lever, or pawl, that falls into a slot in the bolt and prevents it being moved until it is raised by the key to exactly the right height out of the slot; the key then slides the bolt. The Barron lock (see Figure 4) had two tumblers and the key had to raise each tumbler by a different amount before the bolts could be shot. This enormous advance in lock design remains the basic principle of all lever locks.

But even the Barron lock offered little resistance to the determined lock picker, and in 1818 Jeremiah Chubb of Portsmouth, Eng., improved on the tumbler lock by incorporating a detector, a retaining spring that caught and held any tumbler which, in the course of picking, had been raised too high. This alone prevented the bolt from being withdrawn and also showed that the lock had been tampered with.
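
The same idea can be sketched in Python, with invented gate heights: each lever must be lifted exactly to its gate, and, as in Chubb's detector, overlifting any lever jams the lock until it is reset.

    # Toy model of a double-acting lever lock with a Chubb-style detector:
    # a lever lifted too high trips the detector and the lock stays jammed.
    # All heights are invented illustration values.
    class LeverLock:
        def __init__(self, gates):
            self.gates = gates    # exact lift each lever needs
            self.jammed = False   # detector state

        def try_key(self, lifts):
            if self.jammed:
                return False      # detector must first be reset
            for gate, lift in zip(self.gates, lifts):
                if lift > gate:   # raised too high: detector catches the lever
                    self.jammed = True
                    return False
                if lift < gate:   # raised too little: lever still blocks the bolt
                    return False
            return True           # every lever exactly at its gate: bolt slides

    lock = LeverLock([2, 5, 3, 4])
    print(lock.try_key([2, 5, 3, 4]))  # True: opens
    print(lock.try_key([2, 6, 3, 4]))  # False: detector trips
    print(lock.try_key([2, 5, 3, 4]))  # False: jammed, even for the right key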

In 1784 (between Barron’s lock and Chubb’s improvements on it) a remarkable lock was patented in England by Joseph Bramah. Working on an entirely different principle, it used a very small light key, yet gave an unprecedented amount of security. Bramah’s locks are very intricate (hence, expensive to make), and for their manufacture Bramah and his young assistant Henry Maudslay (later to become a famous engineer) constructed a series of machines to produce the parts mechanically. These were among the first machine tools designed for mass production. The Bramah key is a small metal tube that has narrow longitudinal slots cut in its end. When the key is pushed into the lock, it depresses a number of slides, each to the depth controlled by the slots. Only when all the slides are depressed to exactly the right distance can the key be turned and the bolt thrown (Figure 5). So confident was Bramah of the security of his lock that he exhibited one in his London shop and offered a reward of £200 to the first person who could open it. For more than 50 years it remained unpicked, until 1851 when a skilled American locksmith, A.C. Hobbs, succeeded and claimed the reward.

The lock industry was in its heyday in the mid-19th century. With the rapidly expanding economy that followed the Industrial Revolution, the demand for locks grew tremendously.

In this period lock patents came thick and fast. All incorporated ingenious variations on the lever or Bramah principles. The most interesting was Robert Newell’s Parautoptic lock, made by the firm of Day and Newell of New York City. Its special feature was that not only did it have two sets of lever tumblers, the first working on the second, but it also incorporated a plate that revolved with the key and prevented the inspection of the interior, an important step in thwarting the lock picker. It also had a key with interchangeable bits so that the key could be readily altered. Newell displayed an example in London in the Great Exhibition of 1851. Despite many attempts, there is no record that it has ever been picked.

In 1848 a far-reaching contribution was made by an American, Linus Yale, who patented a pin tumbler lock working on an adaptation of the ancient Egyptian principle. In the 1860s his son Linus Yale, Jr., evolved the Yale cylinder lock, with its small, flat key with serrated edge, now probably the most familiar lock and key in the world. Pins in the cylinder are raised to the proper heights by the serrations, making it possible to turn the cylinder. The number of combinations of heights of the pins (usually five), coupled with the warding effect of the crooked key and keyhole, give an almost unlimited number of variations (see Figure 2). It has come to be almost universally used for outside doors of buildings and automobile doors, although in the 1960s there was a trend toward supplementing it on house doors with the sturdy lever lock.
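
The "almost unlimited number of variations" can be put in rough numbers with a back-of-the-envelope Python calculation (ten usable cut depths per pin is an assumed figure for illustration; manufacturers differ, and real locks disallow some sequences):

    # Theoretical keyings for a pin-tumbler cylinder: with d usable cut
    # depths per pin and n pins, there are d**n combinations. Ten depths
    # is an assumption, not a manufacturer's specification.
    depths, pins = 10, 5
    print(depths ** pins)  # 100000 theoretical keyings for five pins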

In the 1870s a new criminal technique swept the United States: robbers seized bank cashiers and forced them to yield keys or combinations to safes and vaults. To combat this type of crime, James Sargent of Rochester, N.Y., in 1873 devised a lock based on a principle patented earlier in Scotland, incorporating a clock that permitted the safe to be opened only at a preset time.

The keyless combination lock derives from the “letter-lock,” in use in England at the beginning of the 17th century. In it a number of rings (inscribed with letters or numbers) are threaded on a spindle; when the rings are turned so that a particular word or number is formed, the spindle can be drawn out because slots inside the rings all fall in line. Originally, these letter locks were used only for padlocks and trick boxes. In the last half of the 19th century, as developed for safes and strong-room doors, they proved to be the most secure form of closure. The number of possible combinations of letters or numbers is almost infinite and they have no keyholes into which an explosive charge can be placed. Furthermore, they are easy to manufacture.

A simple combination lock with four rings (tumblers, in the U.S.) and 100 numbers on the dial (i.e., 100 positions for each ring) presents 100,000,000 possible combinations. Figure 6 shows how the single knob can set all the wheels; in this case the lock has three rings, or wheels, giving 1,000,000 possible combinations. If, for example, the combination is 48, 15, 90, the knob is turned counterclockwise until the 48 comes opposite the arrow for the fourth time, a process that ensures that there is no play between the other wheels. The slot on the first wheel (on the left in the diagram) is then in the correct position for opening and it will not move in subsequent operations. The knob is then turned clockwise until the 15 is opposite the arrow for the third time; this sets the slot of the middle wheel in line with the first. Finally, the knob is turned counterclockwise to bring the 90 for the second time to the arrow. All three slots are then in line and a handle can be turned to withdraw the bolts. The combination can easily be changed, for the serrations shown on each wheel enable the slot to be set to a different position relative to the stud for that wheel.
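
The dialing procedure above can be summarized programmatically. This minimal Python sketch (the function is hypothetical and ignores the mechanics of how each wheel is picked up) reproduces the turn counts for the 48, 15, 90 example and the count of possible combinations:

    # With k wheels, the first number must pass the arrow k+1 times so the
    # drive picks up every wheel; each later number needs one pass fewer,
    # so the wheels already set are not disturbed.
    def ordinal(n):
        return {1: "1st", 2: "2nd", 3: "3rd"}.get(n, f"{n}th")

    def dialing_instructions(combo, positions=100):
        k = len(combo)
        direction = "counterclockwise"
        for i, number in enumerate(combo):
            passes = k + 1 - i  # 4, 3, 2 for a three-wheel lock
            print(f"Turn {direction} until {number} is at the arrow "
                  f"for the {ordinal(passes)} time.")
            direction = ("clockwise" if direction == "counterclockwise"
                         else "counterclockwise")
        print(f"{positions ** k:,} possible combinations.")

    dialing_instructions([48, 15, 90])
    # Turn counterclockwise until 48 is at the arrow for the 4th time.
    # Turn clockwise until 15 is at the arrow for the 3rd time.
    # Turn counterclockwise until 90 is at the arrow for the 2nd time.
    # 1,000,000 possible combinations.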

It is frequently necessary, particularly in hotels and office buildings, for a manager or caretaker to have a master key that will open all the locks in the building. To design a set of single locks each of which can be opened by its own key, and also by the master key, requires a coordinated arrangement of the warding. The master key is so shaped as to avoid the wards of all the locks. Another method involves two keyholes, one for the normal key, the other for the master key, or two sets of tumblers or levers, or in the case of Yale locks, two concentric cylinders.
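
Besides wards, twin keyholes and concentric cylinders, a standard way of master keying a pin-tumbler system (not described above, so the details here are purely illustrative) is to split some pin stacks with an extra "master" wafer, giving each such stack two acceptable lift heights. A toy Python sketch with invented numbers:

    # Master-keying sketch: a master wafer gives a pin stack a second
    # acceptable lift height, so one master key can open many locks that
    # are otherwise keyed differently. All numbers are invented.
    def opens(stacks, key):
        # stacks: one set of accepted lift heights per pin stack
        return all(cut in accepted for accepted, cut in zip(stacks, key))

    room_101 = [{2, 3}, {1, 4}, {5}, {2, 6}]  # wafers add the second heights
    room_102 = [{2}, {4}, {1, 5}, {6}]

    key_101 = [3, 1, 5, 2]   # change key for room 101 only
    master  = [2, 4, 5, 6]   # lands on an accepted height in every stack

    print(opens(room_101, key_101), opens(room_102, key_101))  # True False
    print(opens(room_101, master), opens(room_102, master))    # True True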

Present status of locks and safes.

Over the years, locks have been constructed with many specialized functions. Some have been designed to resist being blown open, others to shoot or stab intruders or seize their hands. Locks have been made that can be opened or closed by different keys but can be unlocked only by the key that closed them. So-called unpickable locks are usually devised to prevent a thief from exploring the positions of the lock parts from the keyhole or from sensing with his picking tool slight changes of resistance when pressure is applied to the bolt. The basic types, however, remain the Bramah, lever, Yale, and combination locks, though innumerable variations have been made, sometimes combining features of each. The Swiss Kaba lock, for example, employs the Yale principle but its key, instead of having a serrated edge, has flat sides marked with deep depressions into which four complete sets of pin-tumblers are pressed. The Finnish Abloy lock is a compact combination lock, but the rings, instead of being turned separately by hand, are moved to the correct positions by a single turn of a small key.

Magnetic forces can be used in locks working on the Yale principle. The key has no serrations; instead, it contains a number of small magnets. When the key is inserted into the lock, these magnets repel magnetized spring-loaded pins, raising them in the same way that the serrations on a Yale-type key raise them mechanically. When these pins are raised the correct height, the cylinder of the lock is free to rotate in the barrel.

The importance of locks as a protection against professional thieves declined after World War II, during which the knowledge and use of explosives was widely disseminated. As most safe locks and strong-room locks became almost unpickable, criminals tended to ignore the locks and to use explosives to blow them off. An attempt to blow up the mechanism of a lock by detonating an explosive in the keyhole can be foiled by introducing a second series of bolts, not connected to the lock mechanism, but automatically inserted by springs when an explosion occurs; the safe then cannot be opened except by cutting through the armour.

Another method used by criminals is to burn away the plating or hinge of a safe by an electric arc or an oxyacetylene flame, an operation requiring many hours’ work. To resist this type of entry, safe makers produced even more resistant materials and new methods of construction to carry away the heat of the cutting flame.

A key, in locksmithing, is an instrument, usually of metal, by which the bolt of a lock (q.v.) is turned.

The Romans invented metal locks and keys and the system of security provided by wards. This system was, for hundreds of years, the only method of ensuring that only the right key would rotate in the keyhole. The wards are projections around the keyhole (inside the lock) that make it impossible for a plain key to be turned in it. If, however, the key has slots cut in it that correspond to the projections, the slots clear the projections, the key can be turned, and the bolt is thrown back. Throughout the centuries immense ingenuity was exercised by locksmiths in the design of the wards, and, consequently, some keys are very complicated. All the same, it was not difficult to make an instrument that could be turned in spite of the wards, to achieve what is known as “picking” a lock.

Little progress was made in the mechanism of the lock and key until the 18th century, when a series of improvements began that led, in the 1860s, to the development of the Yale cylinder lock, with its thin, convenient key capable of many thousands of variations. The key is made in a number of different cross sections so that only a particular variety of key will fit into a particular keyhole; this, in effect, is a form of ward. The serrations on the edge of the key raise pin tumblers to exactly the correct height, allowing the cylinder of the lock to revolve and withdraw the bolt. Although not impossible to pick, these locks are convenient and compact and offer a reasonable degree of security. In the late 20th century they were the most usual form of fastening for an outside door and were made by locksmiths in all parts of the world.

A special system is that of the master key. This system is used when a number of locks (such as those securing bedrooms in a hotel), each having a different key, must all be opened by a landlord or caretaker using a single key. Where the only security is by wards, a skeleton key that avoids the wards may be the type of master key chosen. In other cases, many methods are employed; for instance, there may be two keyholes (one for the servant key, the other for the master), or two sets of tumblers or levers, or two concentric cylinders in a Yale lock.

Additional Information

A lock and key refers to the combination that enables a door to be securely closed. The combination relies upon the individual fit of a protruding object (the key) and a receptor (the lock). In biology, an analogous scheme determines the specific reaction of an antigen with an antibody, and between a protein receptor and the target molecule.

The lock originated in the Near East, and the earliest known lock to be operated by a key was the Egyptian lock. Possibly 4,000 years old, this large wooden lock was found in the ruins of the palace of Khorsabad near Nineveh, the ancient capital of Assyria. The Egyptian lock is also known as the pin-tumbler type, and it evolved as a practical solution to the problem of how to open a barred door from the outside. The first and simplest locks were probably just a bar of wood or a bolt across a door. To open it from the outside, a hand-size opening was made in the door. This evolved into a much smaller hole into which a long wooden or metal prodder was inserted to lift up the bar or bolt. The Egyptians improved this device by putting wooden pegs in the lock that fell into holes in a bolt, which meant that the bolt could not be moved until the pegs were lifted out. This was done by giving the long wooden key some corresponding pins that lifted out the pegs from the holes in the bolt so it could be drawn back. These locks were up to 2 ft (61 cm) long and their keys were long, wooden bars resembling a toothbrush. It was this invention of tumblers—small, movable wooden pegs that fell by their own weight into the bolt—that would eventually form the basis of modern types of locks.

The ancient Romans built the first metal locks, and their iron locks and bronze keys are easily recognizable even today. They improved the Egyptian model by adding wards—projections or obstructions inside the lock—that the key must bypass in order to work. Besides these warded locks, the Romans also invented the portable padlock with a U-shaped bolt which is known to have been invented independently by the Chinese. Some Roman locks used springs to hold the tumblers in place, and the Romans made locks small enough that they could wear tiny keys on their fingers like rings. In medieval times, locks and keys changed little in design, with most of the effort directed at making them more elaborate and beautiful. It was during this time that lock making became a skilled trade, and although there were some design changes, like a pivoted tumbler and more complicated wards, medieval locks are characterized mainly by their high degree of lavish embellishment. Despite this high level of medieval craftsmanship, these medieval locks did not provide a great deal of security against the determined and skilled thief, and even with especially elaborate warding systems, they were still relatively easy to pick or open.

The modern age of the lock and key is usually said to have begun in 1778 in England when Robert Barron first patented his double-acting tumbler lock. Also called the multiple tumbler, this ingenious design was a major advance in lock security and established the principle of all lever locks. Barron’s new lock had two tumblers, which are really levers, that had to be raised to exactly the right height for the lock to open. Unless a properly notched key was used to raise each tumbler, the lock would not open. His lock could still be picked by a determined individual, however, and in 1818 Jeremiah Chubb was able to improve Barron’s lock by adding a “convict-defying detector.” This was a spring or a special lever that was activated if any tumbler was raised too high. The lock would then jam, both preventing the bolt from releasing and showing the owner that the lock had been tampered with. Real lock security was not achieved, however, until English engineer Joseph Bramah (1748-1814) first introduced his pick-proof lock in 1784, after Barron’s lock but before Chubb’s improvement. Bramah’s lock was exhibited in his shop window with a sign offering a substantial sum to anyone who could pick it. The offer outlived Bramah, whose lock remained unopened for over 50 years until a skilled American mechanic finally picked it open after 51 hours of effort.

Bramah's 4 in (10 cm), hand-made iron padlock proved impervious to picking because of its extreme complexity, and he soon found that he could not meet the growing demand using traditional production methods. His locks used a notched diaphragm plate and a number of spring-loaded radial slides that were pushed down by a notched key until they matched the notches on the diaphragm. Producing such precision instruments on a large scale required precision machine tools, and with the help of the English engineer Henry Maudslay (1771-1831), Bramah built a series of machines that were among the first machine tools designed for mass production. Thus the simple lock and key were at the forefront of a revolution in manufacturing, heralding the standardization and interchangeability of parts and the division of labor that would characterize modern methods of mass production.

By the mid-nineteenth century, the lock industry was in full force, attempting to meet the growing demands of an economy spurred by the Industrial Revolution. In 1861, the American inventor Linus Yale, Jr. (1821-1868) produced the Yale cylinder lock, based on the pin-tumbler mechanism of the ancient Egyptians. This type of lock is still the most common in use today: a small, flat key with serrated edges raises five pins in the cylinder to their proper heights, making it possible to turn the cylinder. Varying the lengths of the five pins, combined with other slight internal changes, allows for millions of possible combinations, so that practically no two notched keys are alike. In an odd twist on conventional wisdom, it could be said that Yale took advantage of mass production methods to manufacture non-identical articles, since he made each set of lock and key slightly different from the one before it. While not pick-proof, Yale cylinder locks are quite difficult to pick and offer reasonable security under ordinary circumstances. This style of lock and key is the most familiar and the most generally used to secure the outside doors of buildings and automobiles.
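The keyspace arithmetic behind that claim is straightforward. As a rough Python illustration (ten cut depths per pin is an assumption made for the sake of the arithmetic, not a manufacturer's figure):

# 5 pins, each of which might be cut to any of 10 depths:
depths_per_pin = 10
pins = 5
print(depths_per_pin ** pins)  # 100,000 distinct keys; more depths or the
                               # "other slight internal changes" mentioned
                               # above push this into the millions.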

Keyless combination locks have been known since the sixteenth century. They contain a series of rings or tumblers threaded on a spindle that must be turned from the outside in such a way that all the rings line up. These rings usually have numbers or letters on them, and if a lock has three rings with 100 numbers on each, there are 100 × 100 × 100 = 1,000,000 possible combinations, only one of which will open the lock. Combination locks have no keyholes in which to pry or insert explosives, and they became popular for safes and vaults. They are often used in conjunction with time-lock devices, which prevent a safe or door from being opened during certain hours even if the correct combination is used.

Altogether, today's mechanical locks are variations of the three basic types: the early Bramah lever, the Yale cylinder, and the combination lock. Sometimes a single lock combines features of each, such as a Finnish combination lock whose rings must be moved to the proper position by the turn of a key. In the United States in the 1970s, electronic locks that worked on the same principle as the touch-tone phone became popular. When the correct sequence of spring-loaded buttons was pushed, the door would open. This system used no keys, proved to be as tamper-proof as any traditional combination lock, and allowed the sequence to be changed at any time. Magnetism has also been used to operate a Yale-type lock. These locks had keys with no serrations; instead, the keys contained several small magnets. Inserting the key allowed its magnets to repel magnetized spring-loaded pins inside the lock, raising them to open it. The newest lock and key systems use nothing recognizable as a traditional lock or key. Increasingly, today's hotels are switching to plastic cards with magnetic strips. Like a key, the card is inserted, but only momentarily, into a slot usually just above the doorknob. Often a small green light flickers after withdrawal, and the door opens when the doorknob is turned. These cards open the door electronically.
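The push-button principle is also easy to sketch. This Python toy (the class name and behaviour are illustrative assumptions, not any real product's logic) opens when the most recent presses match the stored sequence, and lets the owner change that sequence at any time:

class ButtonLock:
    def __init__(self, code):
        self.code = list(code)
        self.entered = []

    def press(self, button):
        # Remember only the most recent presses, as many as the code is long.
        self.entered = (self.entered + [button])[-len(self.code):]
        return self.entered == self.code  # True unlocks the door

    def change_code(self, new_code):
        # The sequence can be changed at any time, the advantage noted above.
        self.code = list(new_code)
        self.entered = []

lock = ButtonLock([2, 4, 6, 8])
for b in [2, 4, 6, 8]:
    opened = lock.press(b)
print(opened)  # True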



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1915 2023-09-29 00:15:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1919) Textile Technology

Gist

Textile technology deals with the fabrication, manipulation, and assembly of fiber-shaped (i.e., line-shaped) materials. Textile techniques can be used not only to weave cloth from cotton fibers but also to hold wounded tissue together with surgical sutures.

Summary

Textile is any filament, fibre, or yarn that can be made into fabric or cloth, and the resulting material itself.

The term is derived from the Latin textilis, from the verb texere, meaning "to weave," and it originally referred only to woven fabrics. It has, however, come to include fabrics produced by other methods. Thus, threads, cords, ropes, braids, lace, embroidery, nets, and fabrics made by weaving, knitting, bonding, felting, or tufting are textiles. Some definitions of the term would also include products obtained by the papermaking principle that have many of the properties associated with conventional fabrics.

This article surveys the development of textiles and the history of the textile industry. It treats in some detail the processes involved in the conversion of fibres to yarn, fabric construction, finishing operations applied to textiles, uses of textile materials, and the relationship between the producer and the consumer. Information about specific natural and synthetic textile fibres such as wool, mohair, nylon, and polyester is treated in separate articles.

Details

Textile is an umbrella term that includes various fiber-based materials: fibers, yarns, filaments, threads, different fabric types, and so on. At first, the word "textiles" referred only to woven fabrics. However, weaving is not the only manufacturing method, and many other methods were later developed to form textile structures based on their intended use. Knitting and nonwoven processes are other popular methods of fabric manufacture. In the contemporary world, textiles satisfy material needs in versatile applications, from simple daily clothing to bulletproof jackets, spacesuits, and doctors' gowns.

Textiles are divided into two groups: consumer textiles for domestic purposes and technical textiles. In consumer textiles, aesthetics and comfort are the most important factors, while in technical textiles, functional properties are the priority.

Geotextiles, industrial textiles, medical textiles, and many other areas are examples of technical textiles, whereas clothing and furnishings are examples of consumer textiles. Each component of a textile product, including fiber, yarn, fabric, processing, and finishing, affects the final product. Components vary among textile products, as they are selected on the basis of fitness for purpose.

Fiber is the smallest component of a fabric; fibers are typically spun into yarn, and yarns are used to manufacture fabrics. A fiber has a hair-like appearance and a high length-to-width ratio. Fibers may come from natural sources, synthetic sources, or both. The techniques of felting and bonding transform fibers directly into fabric. In other cases, yarns are manipulated with different fabric manufacturing systems to produce various fabric constructions: the fibers are twisted or laid out to make a long, continuous strand of yarn, and yarns are then used to make different kinds of fabric by weaving, knitting, crocheting, knotting, tatting, or braiding. After manufacture, textile materials are processed and finished to add value in aesthetics, physical characteristics, and usefulness. The manufacturing of textiles is the oldest industrial art; dyeing, printing, and embroidery are decorative arts applied to textile materials.

Etymology:

Textile

The word 'textile' comes from the Latin adjective textilis, meaning 'woven', which itself stems from textus, the past participle of the verb texere, 'to weave'. Originally applied to woven fabrics, the term "textiles" is now used to encompass a diverse range of materials, including fibers, yarns, and fabrics, as well as other related items.

Fabric

A "fabric" is defined as any thin, flexible material made from yarn, directly from fibers, polymeric film, foam, or any combination of these techniques. Fabric has a broader application than cloth. Fabric is synonymous with cloth, material, goods, or piece goods. The word 'fabric' also derives from Latin, with roots in the Proto-Indo-European language. Stemming most recently from the Middle French fabrique, or "building," and earlier from the Latin fabrica ('workshop; an art, trade; a skillful production, structure, fabric'), the noun fabrica stems from the Latin faber" artisan who works in hard materials', which itself is derived from the Proto-Indo-European dhabh-, meaning 'to fit together'.

Cloth

Cloth is a flexible substance typically created through weaving, felting, or knitting natural or synthetic materials. The word 'cloth' derives from the Old English clað, 'a cloth, woven or felted material to wrap around one's body', from the Proto-Germanic kalithaz; compare the Old Frisian klath, the Middle Dutch cleet, the Middle High German kleit, and the German Kleid, all meaning 'garment'.

Although cloth is a type of fabric, not all fabrics can be classified as cloth due to differences in their manufacturing processes, physical properties, and intended uses. Materials that are woven, knitted, tufted, or knotted from yarns are referred to as cloth, while wallpaper, plastic upholstery products, carpets, and nonwoven materials are examples of fabrics.

History

Textiles themselves are too fragile to survive across millennia; the tools used for spinning and weaving make up most of the prehistoric evidence for textile work. The earliest tool for spinning was the spindle, to which a whorl was eventually added. The weight of the whorl improved the thickness and twist of the spun thread. Later, the spinning wheel was invented. Historians are unsure where; some say China, others India.

The precursors of today's textiles include leaves, barks, fur pelts, and felted cloths.

The first clothes, worn at least 70,000 years ago and perhaps much earlier, were probably made of animal skins and helped protect early humans from the elements. At some point, people learned to weave plant fibers into textiles. The discovery of dyed flax fibers in a cave in the Republic of Georgia, some 34,000 years old, suggests that textile-like materials were made as early as the Paleolithic era. The Banton Burial Cloth, the oldest existing example of warp ikat in Southeast Asia, is displayed at the National Museum of the Philippines; it was most likely made by the native Asian people of northwest Romblon.

The speed and scale of textile production have been altered almost beyond recognition by industrialization and the introduction of modern manufacturing techniques.

Textile industry

The textile industry grew out of art and craft and was sustained by guilds. In the 18th and 19th centuries, during the Industrial Revolution, it became increasingly mechanized; in 1765, when the spinning jenny, a machine for spinning wool or cotton, was invented in the United Kingdom, textile production became the first economic activity to be industrialised. In the 20th century, science and technology were the driving forces. The industry remains highly dynamic: textile operations are affected by shifts in international trade policies, evolving fashion trends and customer preferences, variations in production costs and methods, safety and environmental regulations, and advances in research and development.

The textile and garment industries have a significant impact on the economies of many textile-producing countries.

Naming

Most textiles are named after their base fibre's generic name or their place of origin, or are grouped loosely by manufacturing technique, characteristics, and design. Nylon, olefin, and acrylic are generic names for some of the more commonly used synthetic fibres.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1916 2023-09-30 00:04:41

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1920) Cable car

Gist

A cable car is a vehicle for taking people up mountains or steep hills. It is pulled by a moving cable.

Summary

A cable car or cable railway is a mass transit system using rail cars that are propelled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable.

Operation

The cable is itself powered by a stationary motor or engine situated in a cable house or power house. The speed at which it moves is relatively constant, although affected by the current load.

The cable car begins moving when a clamping device, called a grip, is connected to the moving cable. Conversely, the car is stopped by detaching it from the cable and applying the brakes. This gripping and ungripping action may be manual, as was the case in all early cable car systems, or automatic, as is the case in some recent cable-operated people mover systems. Gripping must be an even and gradual process in order to avoid bringing the car to cable speed too quickly and unacceptably jarring the passengers.

In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable, or even worse, become entangled in the cable. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes what is going on and halts the cable.

One claimed advantage of the cable car is its relative energy efficiency, because of the economy of centrally located power stations and the ability of cars going downhill to transfer energy to cars going up. However, this advantage is not unique to cable cars: electric cars fitted with regenerative braking offer the same benefit, and in any case it must be offset against the cost of moving the cable.

Because of their constant and relatively low speed, cable cars can be underestimated in an accident. Even with a cable car traveling at only 9 miles per hour, the mass of the car and the combined strength of the cables can do a great deal of harm to pedestrians in a collision.

Details

Cable car (railway)

A cable car (usually known as a cable tram outside North America) is a type of cable railway used for mass transit in which rail cars are hauled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable.

History

Cable Driving Plant, Designed and Constructed by Poole & Hunt, Baltimore, MD. Drawing by P.F. Goist, circa 1882. The powerhouse has two horizontal single-cylinder engines. The lithograph shows a hypothetical prototype of a cable powerhouse, rather than any actual built structure. Poole & Hunt, machinists and engineers, was a major cable industry designer and contractor and manufacturer of gearing, sheaves, shafting and wire rope drums. They did work for cable railways in Baltimore, Chicago, Hoboken, Kansas City, New York, and Philadelphia.

The first cable-operated railway, employing a moving rope that could be picked up or released by a grip on the cars, was the Fawdon Wagonway, a colliery railway line, in 1826. The London and Blackwall Railway, which opened for passengers in east London, England, in 1840, used such a system. The rope available at the time proved too susceptible to wear, and the system was abandoned in favour of steam locomotives after eight years. In America, the first cable car installation in operation was probably the West Side and Yonkers Patent Railway in New York City, the city's first elevated railway, which ran from 1 July 1868 to 1870. The cable technology used in this elevated railway, involving collar-equipped cables and claw-equipped cars, proved cumbersome; the line was closed and rebuilt, reopening with steam locomotives.

In 1869 P. G. T. Beauregard demonstrated a cable car at New Orleans and was issued U.S. Patent 97,343.

Other cable cars to use grips were those of the Clay Street Hill Railroad, which later became part of the San Francisco cable car system. The building of this line was promoted by Andrew Smith Hallidie with design work by William Eppelsheimer, and it was first tested in 1873. The success of these grips ensured that this line became the model for other cable car transit systems, and this model is often known as the Hallidie Cable Car.

In 1881 the Dunedin cable tramway system opened in Dunedin, New Zealand and became the first such system outside San Francisco. For Dunedin, George Smith Duncan further developed the Hallidie model, introducing the pull curve and the slot brake; the former was a way to pull cars through a curve, since Dunedin's curves were too sharp to allow coasting, while the latter forced a wedge down into the cable slot to stop the car. Both of these innovations were generally adopted by other cities, including San Francisco.

In Australia, the Melbourne cable tramway system operated from 1885 to 1940. It was one of the most extensive in the world with 1200 trams and trailers operating over 15 routes with 103 km (64 miles) of track. Sydney also had a couple of cable tram routes.

Cable cars rapidly spread to other cities, although the major attraction for most was the ability to displace horsecar (or mule-drawn) systems rather than the ability to climb hills. Many people at the time viewed horse-drawn transit as unnecessarily cruel, and the fact that a typical horse could work only four or five hours per day necessitated the maintenance of large stables of draft animals that had to be fed, housed, groomed, medicated and rested. Thus, for a period, economics worked in favour of cable cars even in relatively flat cities.

For example, the Chicago City Railway, also designed by Eppelsheimer, opened in Chicago in 1882 and went on to become the largest and most profitable cable car system. As with many cities, the problem in flat Chicago was not one of incline, but of transportation capacity. This caused a different approach to the combination of grip car and trailer. Rather than using a grip car and single trailer, as many cities did, or combining the grip and trailer into a single car, like San Francisco's California Cars, Chicago used grip cars to pull trains of up to three trailers.

In 1883 the New York and Brooklyn Bridge Railway was opened, which had a most curious feature: though it was a cable car system, it used steam locomotives to get the cars into and out of the terminals. After 1896 the system was changed: a motor car was added to each train to maneuver at the terminals, while en route the trains were still propelled by the cable.

On 25 September 1883, a test of a cable car system was held by the Liverpool Tramways Company in Kirkdale, Liverpool. This would have been the first cable car system in Europe, but the company decided against implementing it. Instead, the distinction went to the 1884 Highgate Hill Cable Tramway, a route from Archway to Highgate, north London, which used a continuous cable and grip system on the 1 in 11 (9%) climb of Highgate Hill. The installation was not reliable and was replaced by electric traction in 1909. Other cable car systems were implemented in Europe, though; among them was the Glasgow District Subway, the first underground cable car system, in 1896. (London's first deep-level tube railway, the City & South London Railway, had also been built for cable haulage but was converted to electric traction before its 1890 opening.) A few more cable car systems were built in the United Kingdom, Portugal, and France. European cities, having many more curves in their streets, were ultimately less suitable for cable cars than American cities.

Though some new cable car systems were still being built, by 1890 the electrically powered trolley or tram, cheaper to construct and simpler to operate, started to become the norm and eventually began to replace existing cable car systems. For a while, hybrid cable/electric systems operated; in Chicago, for example, electric cars had to be pulled by grip cars through the Loop area because of the lack of trolley wires there. Eventually, San Francisco became the only street-running, manually operated system to survive. Dunedin, the second city to have such cars, was also the second-last city to operate them, closing down in 1957.

Recent revival

In the last decades of the 20th century, cable traction in general saw a limited revival in the form of automatic people movers, used in resort areas, airports (for example, Toronto Airport), large hospital centers, and some urban settings. While many of these systems involve cars permanently attached to the cable, the Minimetro system from Poma/Leitner Group and the Cable Liner system from DCC Doppelmayr Cable Car both have variants that allow the cars to be automatically decoupled from the cable under computer control, and these can thus be considered a modern interpretation of the cable car.

Operation

The cable is itself powered by a stationary engine or motor situated in a cable house or power house. The speed at which it moves is relatively constant, varying slightly with the number of units gripping the cable at any given time.

The cable car begins moving when a clamping device attached to the car, called a grip, applies pressure to ("grips") the moving cable. Conversely, the car is stopped by releasing pressure on the cable (with or without completely detaching from it) and applying the brakes. This gripping and releasing action may be manual, as was the case in all early cable car systems, or automatic, as is the case in some recent cable-operated people mover systems. Gripping must be applied evenly and gradually in order to avoid bringing the car to cable speed too quickly and unacceptably jarring the passengers.
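As a toy illustration of this grip-and-release cycle, consider the following Python sketch; the class and its five-step ramp are assumptions made for illustration only, not a model of any real grip mechanism:

class CableCar:
    CABLE_SPEED_KMH = 15.3  # the San Francisco cable speed quoted below

    def __init__(self):
        self.gripped = False
        self.speed_kmh = 0.0

    def grip(self, steps=5):
        # Tighten gradually so the car ramps up to cable speed
        # instead of jarring the passengers.
        for i in range(1, steps + 1):
            self.speed_kmh = self.CABLE_SPEED_KMH * i / steps
        self.gripped = True

    def release_and_brake(self):
        # Release the cable and brake to a stop.
        self.gripped = False
        self.speed_kmh = 0.0

car = CableCar()
car.grip()
print(car.speed_kmh)  # 15.3: the car now travels at cable speed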

In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable, or even worse, become entangled in the cable. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes the mishap and halts the cable.

One apparent advantage of the cable car is its relative energy efficiency, due to the economy of centrally located power stations and the ability of descending cars to transfer energy to ascending cars. However, this advantage is totally negated by the relatively large energy consumption required simply to move the cable over and under the numerous guide rollers and around the many sheaves: approximately 95% of the tractive effort in the San Francisco system is expended in simply moving the four cables at 15.3 km/h (9.5 mph). Electric cars with regenerative braking offer the same advantages without the problem of moving a cable. In the case of steep grades, however, cable traction has the major advantage of not depending on adhesion between wheels and rails. There is also the advantage that keeping the car gripped to the cable limits the downhill speed of the car to that of the cable.

Because of their constant and relatively low speed, a cable car's potential to cause harm in an accident is easily underestimated. Even with a cable car traveling at only 14 km/h (9 mph), the mass of the car and the combined strength and speed of the cable can cause extensive damage in a collision.
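For a rough sense of scale (the mass figure here is an assumption for illustration, not a quoted specification): a car of about 7,000 kg at 14 km/h ≈ 3.9 m/s carries a kinetic energy of roughly ½ × 7000 × 3.9² ≈ 53,000 J, and unlike a coasting vehicle, the moving cable keeps feeding energy to a gripped car until the grip is released or the cable itself is stopped.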

Relation to funiculars

A cable car is superficially similar to a funicular, but differs from such a system in that its cars are not permanently attached to the cable and can stop independently, whereas a funicular has cars that are permanently attached to the propulsion cable, which is itself stopped and started. A cable car cannot climb as steep a grade as a funicular, but many more cars can be operated with a single cable, making it more flexible and allowing a higher capacity. During the rush hour on San Francisco's Market Street Railway in 1883, a car would leave the terminal every 15 seconds (240 cars an hour).

A few funicular railways operate in street traffic, and because of this operation are often incorrectly described as cable cars. Examples of such operation, and the consequent confusion, are:

* The Great Orme Tramway in Llandudno, Wales.
* Several street funiculars in Lisbon, Portugal.

Even more confusingly, a hybrid cable car/funicular line once existed in the form of the original Wellington Cable Car, in the New Zealand city of Wellington. This line had both a continuous loop haulage cable that the cars gripped using a cable car gripper, and a balance cable permanently attached to both cars over an undriven pulley at the top of the line. The descending car gripped the haulage cable and was pulled downhill, in turn pulling the ascending car (which remained ungripped) uphill by the balance cable. This line was rebuilt in 1979 and is now a standard funicular, although it retains its old cable car name.

List of cable car systems:

Cities currently operating cable cars

Traditional cable car systems

The best-known existing cable car system is the San Francisco cable car system in the city of San Francisco, California. San Francisco's cable cars constitute the oldest and largest such system in permanent operation, and it is one of the few still functioning in the traditional manner, with manually operated cars running in street traffic. Other examples of cable-powered systems can be found on the Great Orme in North Wales and in Lisbon, Portugal; all of these, however, differ slightly from San Francisco in that the cars are permanently attached to the cable.

Modern cable car systems

Several cities operate a modern version of the cable car system. These systems are fully automated and run on their own reserved right of way. They are commonly referred to as people movers, although that term is also applied to systems with other forms of propulsion, including funicular style cable propulsion.

These cities include:

* Oakland, California, United States – The Oakland Airport Connector system between the BART rapid transit system and Oakland International Airport, based on Doppelmayr Cable Car's Cable Liner Pinched Loop
* Perugia, Italy – The Perugia People Mover, based on Leitner's MiniMetro
* Shanghai, China – The Bund Sightseeing Tunnel, based on Soulé's SK
* Caracas, Venezuela – The Cabletren Bolivariano, based on Doppelmayr Cable Car's Cable Liner Pinched Loop
* Zürich, Switzerland – The Skymetro, which connects Zurich Airport's main Airside Center (Gates A, B and C) with its mid-field Gates E, based on Otis's Hovair system



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1917 2023-10-01 00:07:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1921) Flight attendant

Gist

A person whose job is to serve and take care of passengers on an aircraft.

Summary

Overview

The role of a flight attendant is to "provide routine services and respond to emergencies to ensure the safety and comfort of airline passengers".

Flight attendants are typically required to hold a high school diploma or equivalent. In the United States, the median annual wage for flight attendants was $50,500 in May 2017, higher than the $37,690 median for all workers.

The number of flight attendants required on a flight is mandated by each country's regulations. In the US, no flight attendant is needed on planes with 19 or fewer seats (or, if the aircraft weighs more than 7,500 pounds, 9 or fewer seats); on larger aircraft, one flight attendant per 50 passenger seats is required.
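As arithmetic, the rule above is simple to apply. Here is a minimal Python sketch (the function name is ours, and the logic follows the simplified summary above rather than the actual regulation text):

import math

def min_flight_attendants(seats: int, weight_lb: float) -> int:
    # Simplified US rule as summarized above: small aircraft are exempt;
    # larger ones need one attendant per 50 passenger seats, rounded up.
    exempt_limit = 9 if weight_lb > 7500 else 19
    if seats <= exempt_limit:
        return 0
    return math.ceil(seats / 50)

print(min_flight_attendants(180, 170000))  # 4 attendants for a typical narrow-body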

The majority of flight attendants for most airlines are female, though a substantial number of males have entered the industry since 1980.

Responsibilities

Prior to each flight, flight attendants and pilots go over safety and emergency checklists, the locations of emergency equipment, and other features specific to that aircraft type. Boarding particulars are verified, such as special-needs passengers, small children travelling alone, or VIPs. Weather conditions are discussed, including anticipated turbulence. A safety check is conducted to ensure that equipment such as life vests, torches (flashlights), and firefighting gear is on board and in proper condition. They monitor the cabin for any unusual smells or situations. They assist with the loading of carry-on baggage, checking for weight, size, and dangerous goods. They make sure those sitting in emergency exit rows are willing and able to assist in an evacuation. They then give a safety demonstration or monitor passengers as they watch a safety video. Finally, they must "secure the cabin", ensuring that tray tables are stowed, seats are in their upright positions, armrests are down, carry-ons are stowed correctly, and seat belts are fastened prior to take-off.

Once in the air, flight attendants usually serve drinks and/or food to passengers using an airline service trolley. When not performing customer service duties, flight attendants must periodically conduct cabin checks and listen for any unusual noises or situations. Checks must also be done on the lavatory, to ensure the smoke detector has not been disabled or destroyed and to restock supplies as needed. Regular checks must be done on the health and safety of the pilot(s). Flight attendants must also respond to call lights dealing with special requests. During turbulence, they must ensure the cabin is secure. Prior to landing, all loose items, trays, and rubbish must be collected and secured along with service and galley equipment, and all hot liquids must be disposed of; a final cabin check must then be completed. It is vital that flight attendants remain alert, as the majority of emergencies occur during take-off and landing. Upon landing, flight attendants must remain stationed at exits and monitor the airplane and cabin as passengers disembark. They also assist any special-needs passengers and small children off the airplane, escorting children to the designated person picking them up while following the proper paperwork and ID process.

Flight attendants are trained to deal with a wide variety of emergencies and are trained in first aid. More frequent situations include nosebleeds, illness, minor injuries, and intoxicated, aggressive, or anxiety-stricken passengers. Emergency training includes rejected take-offs, emergency landings, cardiac and other in-flight medical situations, smoke in the cabin, fires, depressurization, on-board births and deaths, dangerous goods and spills in the cabin, emergency evacuations, hijackings, and water landings.

Cabin chimes and overhead panel lights

On most commercial airliners, flight attendants receive notifications on board in the form of audible chimes and coloured lights above their stations. The colours and chimes are not universal and vary between airlines and aircraft types, but the following are among the most common (a compact summary is sketched after the list):

* Pink (Boeing) or Red (Airbus): an interphone call to a flight attendant, or between two flight attendants where a green light is not present or used for that purpose (steady, with a high-low chime); or an all-stations emergency call (flashing, with a repeated high-low chime). On some airlines' Airbus aircraft (such as Delta Air Lines), this light is accompanied by a high-medium-low chime to indicate a call to all flight attendant stations. The Boeing 787 uses a separate red light to indicate a sterile flight deck, while using pink for interphone calls.
* Blue: call from passenger in seat (steady with single high chime).
* Amber: call from passenger in lavatory (steady with single high chime), or lavatory smoke detector set off (flashing with repeated high chime).
* Green: on some aircraft (some airlines' Airbus aircraft, and the Boeing 787), this colour indicates interphone calls between two flight attendants, distinguishing them from the pink or red light used for calls from the flight deck to a flight attendant; it is likewise accompanied by a high-low chime. On the Boeing 787, a flashing green light with a repeated high-low chime indicates a call to all flight attendant stations.
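For reference, the common conventions above can be condensed into a small lookup table. This Python sketch is an illustrative summary of the list, not any airline's actual specification:

# (colour, light pattern, chime) -> meaning, per the list above
CABIN_SIGNALS = {
    ("pink/red", "steady",   "high-low"):          "interphone call to/between flight attendants",
    ("pink/red", "flashing", "repeated high-low"): "all-stations emergency call",
    ("blue",     "steady",   "single high"):       "passenger call from a seat",
    ("amber",    "steady",   "single high"):       "passenger call from a lavatory",
    ("amber",    "flashing", "repeated high"):     "lavatory smoke detector set off",
    ("green",    "steady",   "high-low"):          "attendant-to-attendant call (some aircraft)",
}

print(CABIN_SIGNALS[("blue", "steady", "single high")])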

Chief purser

The chief purser (CP), also titled in-flight service manager (ISM), flight service manager (FSM), customer service manager (CSM), or cabin service director (CSD), is the senior flight attendant in the chain of command of flight attendants. While not necessarily the most senior crew members on a flight in years of service to their carrier, chief pursers can have varying levels of "in-flight" or "on-board" bidding seniority or tenure relative to their flying partners. To reach this position, a crew member needs a minimum number of years of service as a flight attendant. Further training is mandatory, and chief pursers typically earn a higher salary than flight attendants because of the added responsibility and managerial role.

Purser

The purser is in charge of the cabin crew, either in a specific section of a larger aircraft or in the whole aircraft (if the purser is the highest-ranking crew member). On board a larger aircraft, pursers assist the chief purser in managing the cabin. Pursers are typically flight attendants (or hold a related job) with several years of airline service before applying for, and training to become, a purser, and they normally earn a higher salary than flight attendants because of the added responsibility and supervisory role.

Details:

What is a Flight Attendant?

A flight attendant is a member of an airline's cabin crew who is responsible for ensuring the safety and comfort of passengers during flights. Flight attendants perform a variety of tasks, including greeting passengers, demonstrating safety procedures, serving meals and drinks, and responding to passenger requests. They also play a critical role in emergency situations, providing first aid and directing passengers to safety.

Flight attendants are responsible for providing excellent customer service to passengers. They must have strong communication skills, a friendly demeanor, and the ability to remain calm under pressure. They work long and irregular hours, often spending several nights away from home. Despite the challenges, many people are attracted to the job due to the opportunity to travel, meet new people, and experience different cultures.

Flight attendants provide essential customer service, attending to passenger needs, ensuring comfort, and enhancing the overall flying experience. Their training in first aid, safety procedures, and security protocols equips them to handle diverse situations, making them indispensable in maintaining a safe and pleasant environment on aircraft.

Duties and Responsibilities

Flight attendants have a range of duties and responsibilities that they perform to ensure the safety, comfort, and well-being of passengers during flights. Here are some detailed duties and responsibilities of flight attendants:

* Pre-Flight Preparation: Before the flight, flight attendants participate in pre-flight briefings with the captain and other crew members. They review safety procedures, discuss the flight plan, and ensure they have the necessary supplies and equipment on board.
* Passenger Safety and Emergency Procedures: Flight attendants play a crucial role in ensuring passenger safety. They conduct pre-flight safety demonstrations, which include showing the proper use of seat belts, oxygen masks, and life jackets. They also demonstrate emergency procedures such as brace positions and evacuation protocols. During the flight, flight attendants remain vigilant, observing passengers and maintaining readiness to respond to any emergency situations that may arise.
* Cabin Preparation: Flight attendants ensure that the cabin is clean, well-stocked, and properly prepared for passengers. They check that emergency equipment, such as fire extinguishers and defibrillators, is in working order. They also ensure that the seating areas, overhead compartments, and lavatories are clean and functional.
* Passenger Service and Hospitality: Flight attendants provide personalized service to passengers, catering to their needs and ensuring their comfort throughout the flight. They assist passengers with finding their seats, storing their carry-on luggage, and settling in. They offer beverages, meals, and snacks, taking into account dietary restrictions and preferences. Flight attendants also provide information and answer questions about the flight, destination, and other travel-related inquiries.
* Conflict Resolution and Passenger Assistance: Flight attendants are skilled in conflict resolution and managing challenging situations. They handle passenger disputes or concerns with professionalism and diplomacy. They also provide assistance to passengers who may require special attention, such as unaccompanied minors, elderly passengers, or individuals with disabilities.
* In-Flight Security and Vigilance: Flight attendants are trained to be vigilant and observant during the flight, looking out for any potential security threats or suspicious behavior. They monitor passenger behavior and intervene if necessary to maintain a safe and secure environment on the aircraft.
* Communication and Coordination: Effective communication and coordination among the flight attendants and the flight deck crew are essential. Flight attendants relay important information, such as weather updates or turbulence alerts, to the captain and vice versa. They also work as a team, supporting each other and ensuring a smooth and efficient operation during the flight.
* Post-Flight Duties: After the flight, flight attendants conduct a post-flight inspection of the cabin, ensuring that all items are properly stowed and secured. They may complete necessary paperwork, such as incident reports or passenger feedback forms. They also assist with the disembarkation process, bidding farewell to passengers and ensuring a safe and orderly exit from the aircraft.

Types of Flight Attendants

There are different types of flight attendants who may have specialized roles and responsibilities based on their specific positions or the type of aircraft they work on. Here are some types of flight attendants and what they do:

* Inflight Service Manager/Purser: The inflight service manager, also known as the purser, is a senior flight attendant who leads the cabin crew and oversees the overall service delivery on the flight. They coordinate with the captain and other crew members, ensuring the smooth operation of the flight. They handle any issues that may arise during the flight and ensure that all passengers receive excellent service.
* Lead Flight Attendant: The lead flight attendant, also known as the lead cabin crew member or senior flight attendant, assists the inflight service manager and provides leadership to the cabin crew. They delegate tasks, coordinate cabin service, and assist with the management of any challenging situations that may arise.
* Cabin Crew Member: This is the general term for flight attendants who provide service on the aircraft. Cabin crew members handle various responsibilities such as greeting passengers, conducting safety demonstrations, serving meals and beverages, responding to passenger requests, and ensuring overall passenger comfort.
* First Class/Business Class Flight Attendant: Flight attendants assigned to first class or business class cabins provide specialized service to passengers in these premium sections. They focus on delivering a higher level of personalized service, attending to individual needs, and creating a luxurious and comfortable experience for passengers in these cabin classes.
* Economy Class Flight Attendant: Flight attendants assigned to the economy class cabin cater to the needs of passengers traveling in the main cabin. They ensure passenger comfort, provide meal and beverage service, assist with seating arrangements, and respond to passenger inquiries or requests.
* Flight Attendant Instructors/Trainers: Some experienced flight attendants take on instructional roles, training and mentoring new or aspiring flight attendants. They teach safety procedures, service standards, emergency protocols, and customer service skills. These instructors play a vital role in shaping the next generation of flight attendants.
* Specialized Flight Attendants: Certain airlines or flights may require specialized flight attendants based on specific requirements. For example, long-haul flights may have dedicated rest area attendants who manage crew rest areas, ensuring that crew members get adequate rest during the flight. Some airlines also have language-specific flight attendants who are fluent in multiple languages to cater to diverse passenger needs.

Are you suited to be a flight attendant?

Flight attendants have distinct personalities. They tend to be enterprising individuals, which means they’re adventurous, ambitious, assertive, extroverted, energetic, enthusiastic, confident, and optimistic. They are dominant, persuasive, and motivational. Some of them are also social, meaning they’re kind, generous, cooperative, patient, caring, helpful, empathetic, tactful, and friendly.

What is the workplace of a Flight Attendant like?

The workplace of a flight attendant is primarily the aircraft cabin, where they perform their duties during flights. The cabin serves as their main workspace, and flight attendants spend a significant amount of time moving throughout its various sections. The size and layout of the cabin can vary depending on the type of aircraft and the airline's configuration. It typically consists of narrow aisles, seating arrangements, galley areas, and lavatories. Flight attendants are constantly on the move, ensuring the safety and comfort of passengers by providing assistance, serving meals and beverages, and conducting routine checks.

The cabin environment is a dynamic and fast-paced one. Flight attendants must be adaptable and able to handle different situations as they arise, including turbulence, medical emergencies, or passenger-related issues. They work closely with their fellow crew members, including pilots and other flight attendants, to maintain effective communication and coordination throughout the flight. Additionally, flight attendants are responsible for managing and operating essential safety equipment on board, such as emergency exits, life rafts, and first aid kits.

Beyond the aircraft, flight attendants also experience layovers in various destinations. During layovers, they stay in hotels provided by the airline, where they have the opportunity to rest, recharge, and prepare for their next flight. These layovers can range from a few hours to several days, depending on the flight schedule, and allow flight attendants to explore different cities or simply relax before returning to work.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1918 2023-10-02 00:16:52

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1922) Radio Personality

Gist

A Radio Jockey is a person who hosts programs and talk shows on radio channels. He/She is the 'Voice' of the assigned program. A Radio Jockey or RJ entertains the listeners and callers with fascinating stories and ideas.

A Radio Jockey represents the whole radio station and gets popular by identifying with the local populace. Hence, an RJ should always be aware of the target audience, as his/her anchoring skills can make or break the show.

A radio personality (American English) or radio presenter (British English) is a person who has an on-air position in radio broadcasting. A radio personality who hosts a radio show is also known as a radio host, and in India and Pakistan as a radio jockey.

Details

A radio personality (American English) or radio presenter (British English) is a person who has an on-air position in radio broadcasting. A radio personality who hosts a radio show is also known as a radio host, and in India and Pakistan as a radio jockey. Radio personalities who introduce and play individual selections of recorded music are known as disc jockeys or "DJs" for short. Broadcast radio personalities may include talk radio hosts, AM/FM radio show hosts, and satellite radio program hosts.

Description

A radio personality can be someone who introduces and discusses genres of music; hosts a talk radio show that may take calls from listeners; interviews celebrities or guests; or gives news, weather, sports, or traffic information. The radio personality may broadcast live or use voice-tracking techniques. Increasingly in the 2010s, radio personalities have been expected to supplement their on-air work by posting information online, such as on a blog or another web forum, either to generate additional revenue or to connect with listeners. With the exception of small or rural radio stations, much of music radio broadcasting is done by broadcast automation: a computer-controlled playlist airing MP3 audio files which contain the entire program, consisting of music, commercials, and a radio announcer's pre-recorded comments.

History

In the past, the term "disc jockey" (or "DJ") was exclusively used to describe on-air radio personalities who played recorded music and hosted radio shows that featured popular music. Unlike the modern club DJ who uses beatmatching to mix transitions between songs to create continuous play, radio DJs played individual songs or music tracks while voicing announcements, introductions, comments, jokes, and commercials in between each song or short series of songs. During the 1950s, '60s and '70s, radio DJs exerted considerable influence on popular music, especially during the Top 40 radio era, because of their ability to introduce new music to the radio audience and promote or control which songs would be given airplay.

Although radio personalities who specialized in news or talk programs, such as Dorothy Kilgallen and Walter Winchell, have existed since the early days of radio, exclusive talk radio formats emerged and multiplied in the 1960s as telephone call-in shows, interviews, news, and public affairs became more popular. In New York, WINS (AM) switched to a talk format in 1965, and WCBS (AM) followed two years later. Early talk radio personalities included Bruce Williams and Sally Jessy Raphael. The growth of sports talk radio began in the 1960s and resulted in the first all-sports station in the US, WFAN (AM), which would go on to feature many sports radio personalities such as Marv Albert and Howie Rose.

Types of radio personalities

* FM/AM radio – AM/FM personalities play music, talk, or both. Some examples are Rick Dees, Elvis Duran, Big Boy, Kidd Kraddick, John Boy and Billy, The Bob and Tom Show, The Breakfast Club, and Rickey Smiley.
* Talk radio – Talk radio personalities often discuss social and political issues from a particular political point of view. Some examples are Rush Limbaugh, Art Bell, George Noory, Brian Lehrer, and Don Geronimo.
* Sports talk radio – Sports talk radio personalities are often former athletes, sports writers, or television anchors and discuss sports news. Some examples are Dan Patrick, Tony Kornheiser, Dan Sileo, Colin Cowherd, and Mike Francesa.
* Satellite radio – Satellite radio personalities are subject to fewer government broadcast regulations and may be allowed to play explicit music. Howard Stern, Opie and Anthony, Dr. Laura, and Chris "Mad Dog" Russo are some of the notable personalities who have successfully made the move from terrestrial radio to satellite radio.
* Internet radio – Internet radio personalities appear on internet radio stations that offer news, sports, talk, and various genres of music, carried by streaming media outlets such as AccuRadio, Pandora Radio, Slacker Radio, and Jango.

Notable radio personalities

Notable radio personalities include pop music radio hosts Wolfman Jack, Jim Pewter, Dick Clark, Casey Kasem, John Peel, Charlie Gillett, Walt Love, Alan Freed, The Real Son Steele, and Charlie Tuna; sports talk hosts such as Mike Francesa; and shock jocks and political talk hosts such as Don Imus, Howard Stern, and Rush Limbaugh.

Career:

Education

Many radio personalities have no post-high-school education, though some hold degrees in audio engineering. When a radio personality does have a degree, it is typically a bachelor's in radio-television-film, mass communications, journalism, or English.

Training

Universities offer classes in radio broadcasting and often have a college radio station, where students can obtain on-the-job training and course credit. Prospective radio personalities can also intern at radio stations for hands-on training from professionals. Training courses are also available online.

Requirements

A radio personality position generally has the following requirements:

* Good clear voice with excellent tone and modulation
* Great communication skills and creativity to interact with listeners
* Knowledgeable on current affairs, news issues and social trends
* Creative thinking, to be able to think of new ideas or topics for the show
* Able to improvise and think "on the spot"
* Ability to develop their own personal style
* A good sense of humor

Opportunities

Due to radio personalities' vocal training, opportunities to expand their careers often exist. Over time a radio personality could be paid to do voice-overs for commercials, television shows, and movies.

Salary in the US

Radio personality salaries are influenced by years of experience and education. In 2013, the median salary of a radio personality in the US was $28,400.

* 1–4 years: $15,200–39,400
* 5–9 years: $20,600–41,700
* 10–19 years: $23,200–51,200
* 20 or more years: $26,300–73,000

A radio personality with a bachelor's degree had a salary range of $19,600–60,400.

The salary of a local radio personality differs from that of a national radio personality. National personalities can be paid in the millions because of the larger audience size and corporate sponsorship. For example, Rush Limbaugh was reportedly paid $38 million annually as part of the eight-year, $400 million contract he signed with Clear Channel Communications.

Additional Information

All radio personalities have one thing in common: they provide commentary or make announcements on radio shows, generally as hosts, co-hosts, or members of announcing groups. Beyond that, however, their job duties can be as diverse as the stations on your radio dial.

The U.S. Bureau of Labor Statistics (BLS) classifies three main subfields of radio announcing: disc jockeys (DJs), show hosts, and public address (PA) announcers (www.bls.gov). Disc jockeys play and critique music, show hosts offer opinion and commentary on news, politics, sports, or similar areas, and PA announcers provide live accounts of sporting or other entertainment events.

What Does Being A Radio Personality Entail?

If you want to be a radio personality, you should be aware that your job duties won't be limited to on-air discussion of the topic in which you specialize. Radio shows require advance preparation. Duties in this area may include brainstorming and researching materials to discuss on the air, writing commentary in advance, or preparing interview questions to ask guests on your show.

In some cases, disc jockeys make lists of songs to play; however, the BLS reports that playlist content is increasingly dictated by station managers. At some stations, announcers are also responsible for tasks such as manning the control boards, fielding calls from listeners and emceeing or appearing at station-sponsored events.

How Can I Prepare To Work In This Industry?

In the broadcasting industry, experience and aptitude for on-air radio announcing are often just as important as education, if not more so. According to O*NET OnLine, a career resource website, radio announcers usually need previous radio announcing experience and some sort of vocational or college degree in broadcasting.

You can often obtain the industry experience you need by interning at a local radio station or, if you're enrolled in a broadcasting degree program, working at your campus radio station.

You can earn an associate's or bachelor's degree in radio broadcasting or broadcast journalism. Programs specifically in radio broadcasting tend to be more widely available through 2-year colleges; 4-year schools typically offer broadcast journalism degrees. These programs teach relevant radio broadcasting skills such as writing and research for broadcasts, radio marketing and promotions, sound production, basic announcing and sports broadcasting.

You may also advance your career by joining a professional association, such as the National Association of Broadcasters (NAB), which offers educational seminars and networking events to members (www.nab.org).

What Can I Expect On The Job Front?

Broadcasting positions are in high demand, and even candidates with experience may face stiff competition for entry-level positions as radio announcers, hosts, or DJs. In some cases, you may need to work as an assistant announcer or production technician before getting a full-time announcing position.

According to the BLS, most radio personalities begin their careers in small broadcast markets, and many eventually ascend to larger stations in big cities. Also, because most radio stations broadcast around the clock, working as a radio personality may require keeping strange hours. Early morning radio shows, for example, are common.

How Much Could I Earn?

The salary information website Payscale.com breaks down the median salaries for radio personalities by exact job descriptions. As of October 2016, Payscale.com reported a national median salary of $35,000 for radio announcers, $45,000 for show hosts and $38,000 for radio DJs.

The BLS reported that radio and television announcers as a whole made a median annual salary of $33,220 in May of 2018.

What Are Some Related Alternative Careers?

Reporters, correspondents, and broadcast news analysts; writers and authors; and producers and directors are some related careers that require a bachelor's degree. Reporters, correspondents, and broadcast news analysts may work for radio, television, or other media outlets and are responsible for reporting on events and stories from the local to the global level. Writers and authors may also work across a variety of media, producing written content for books, blogs, advertisements, and more. Producers and directors manage and oversee the details of performing arts productions such as movies, TV shows, and plays.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1919 2023-10-03 00:02:32

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1923) Video jockey

Gist

VJ stands for visual jockey or video jockey; both terms can mean either a creator of moving visual art or an introducer of videos. Most of the time, such visual art takes the form of video played on big screens or projected with beamers (projectors). An introducer of videos is someone who talks before, between, or after one or more videos in order to introduce them.

Summary

A video jockey (abbreviated VJ or sometimes veejay) is an announcer or host who introduces music videos and live performances on commercial music television channels such as MTV, VH1, MuchMusic and Channel V.

Origins

The term "video jockey" comes from the term "disc jockey", "DJ" ("deejay") as used in radio. Music Television (MTV) popularized the term in the 1980s (see List of MTV VJs). The MTV founders got their idea for their VJ host personalities from studying Merrill Aldighieri's club. Aldighieri worked in the New York City nightclub Hurrah, which was the first to make a video installation as a prominent featured component of the club's design with multiple monitors hanging over the bar and dance floor. When Hurrah invited Aldighieri to show her experimental film, she asked if she could develop a video to complement the DJ music so that then her film would become part of a club ambiance and not be seen as a break in the evening. The experiment led to a full-time job there.

Several months later the future MTV founders patronized the club, interviewed her, and took notes. She told them she was a VJ, a term she had invented with a staff member to put on her first pay slip. Her video jockey memoirs list the live music she documented during her VJ breaks. Her method of performing as a VJ consisted of improvising live clips using a video camera, projected film loops, and switching between two U-matic video decks. She solicited the public to collaborate, and the club showcased many video artists, who contributed raw and finished works. Her work also incorporated stock footage. Aldighieri next worked at Danceteria, which had a video lounge and a dance floor on separate levels.

Sound & Vision, however, credits the creation of the VJ to comedian and former DJ himself Rick Moranis, who would introduce music clips on television under his Gerry Todd persona on Second City Television. The sketches ran before MTV debuted in the United States. "There had been no such thing" up until that point, confirmed Moranis' SCTV castmate Martin Short, so "the joke was that there would be such a thing."

Details

VJing (pronounced: VEE-JAY-ing) is a broad designation for realtime visual performance. Characteristics of VJing are the creation or manipulation of imagery in realtime through technological mediation and for an audience, in synchronization to music. VJing often takes place at events such as concerts, nightclubs, music festivals and sometimes in combination with other performative arts. This results in a live multimedia performance that can include music, actors and dancers. The term VJing became popular in its association with MTV's Video Jockey but its origins date back to the New York club scene of the 1970s. In both situations VJing is the manipulation or selection of visuals, the same way DJing is a selection and manipulation of audio.

One of the key elements in the practice of VJing is the realtime mix of content from a "library of media", on storage media such as VHS tapes or DVDs, video and still image files on computer hard drives, live camera input, or from computer generated visuals. In addition to the selection of media, VJing mostly implies realtime processing of the visual material. The term is also used to describe the performative use of generative software, although the word "becomes dubious ... since no video is being mixed".

Common technical setups

A significant aspect of VJing is the use of technology, whether the re-appropriation of existing technologies meant for other fields or the creation of new, purpose-built tools for live performance. The advent of video was a defining moment for the formation of the VJ (video jockey).

Often using a video mixer, VJs blend and superimpose various video sources into a live motion composition. In recent years, electronic musical instrument makers have begun to make specialty equipment for VJing.

VJing developed initially by performers using video hardware such as videocameras, video decks and monitors to transmit improvised performances with live input from cameras and even broadcast TV mixed with pre-recorded elements. This tradition lives on with many VJs using a wide range of hardware and software available commercially or custom made for and by the VJs.

VJ hardware can be split into categories:

* Source hardware generates a video picture which can be manipulated by the VJ, e.g. video cameras and video synthesizers.
* Playback hardware plays back an existing video stream from disk- or tape-based storage media, e.g. VHS tape players and DVD players.
* Mixing hardware allows the combining of multiple streams of video, e.g. a video mixer or a computer running VJ software.
* Effects hardware adds special effects to the video stream, e.g. colour correction units.
* Output hardware displays the video signal, e.g. a video projector, LED display, or plasma screen.

A VJ may use many types of software. Traditional non-linear editing (NLE) production tools such as Adobe Premiere, After Effects, and Apple's Final Cut Pro are used to create content for VJ shows, while specialist performance software is used to play back and manipulate video in real time.

VJ performance software is highly diverse and includes software that allows a computer to replace the role of an analog video mixer and output video across extended canvases composed of multiple screens or projectors. Small companies producing dedicated VJ software, such as Modul8 and Magic, give VJs a sophisticated interface for real-time processing of multiple layers of video clips combined with live camera inputs: a complete off-the-shelf solution, so they can simply load in content and perform. Popular titles which emerged during the 2000s include Resolume and NuVJ.

Some VJs prefer to develop software themselves, specifically to suit their own performance style. Graphical programming environments such as Max/MSP/Jitter, Isadora, and Pure Data have been developed to facilitate rapid development of such custom software without years of coding experience.
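As a concrete illustration of what such performance software does at its core, here is a minimal Python sketch (using the OpenCV library) of a two-layer mixer: two looping clips blended by a crossfader in real time. The file names, key bindings and resolution are hypothetical assumptions, and this is only a sketch of the basic technique, not any particular VJ product:

# A minimal sketch of the core of a VJ mixing tool: two video "layers"
# blended by a crossfader in real time. Illustrative only; file names and
# key bindings are hypothetical, and real VJ software adds effects,
# MIDI control, beat sync, and multi-output support.
import cv2

SIZE = (640, 360)  # output resolution (assumption)

def next_frame(cap):
    """Read a frame, looping the clip when it ends."""
    ok, frame = cap.read()
    if not ok:  # end of clip: rewind and read again
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        ok, frame = cap.read()
    return cv2.resize(frame, SIZE)

a = cv2.VideoCapture("clip_a.mp4")  # hypothetical clips
b = cv2.VideoCapture("clip_b.mp4")
fader = 0.5  # 0.0 = all layer A, 1.0 = all layer B

while True:
    # Weighted blend of the two layers, like an analog crossfader.
    mix = cv2.addWeighted(next_frame(a), 1.0 - fader,
                          next_frame(b), fader, 0.0)
    cv2.imshow("output", mix)
    key = cv2.waitKey(30) & 0xFF
    if key == ord("q"):
        break
    elif key == ord("z"):  # nudge crossfader toward layer A
        fader = max(0.0, fader - 0.05)
    elif key == ord("x"):  # nudge crossfader toward layer B
        fader = min(1.0, fader + 0.05)

a.release()
b.release()
cv2.destroyAllWindows()

Dedicated tools add effects chains, beat synchronization and controller mappings on top of this basic blend loop, but the crossfaded mix of layered sources is the common foundation.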

Additional Information

Overview

Video jockeys are curators and creators who specialize in supporting live events and performances with visual media experiences.

What Does a Professional Video Jockey (VJ) Do?

In the early days of music television channels like MTV and VH1, the term video jockey—adapted from disc jockey—referred to the medium's newly minted stars who hosted shows, introduced videos, and curated music video playlists. Today, video jockeys (or VJs) are working visual artists who create, curate, and improvise videos for live events and performances—particularly those that involve music in some way. There are many different ways to approach this hybrid art form; some VJs incorporate other visual arts in their work, like collage, animation, 3D digital design, and lighting design.

Video jockeys have become integral to the clubbing and festival scene, where they might create videos that enhance the DJ's playlist or shoot and manipulate video live as a form of performance. VJs might also create video experiences to support an artist's concert tour, a play or piece of performance art, a contemporary staging of a ballet or opera, a fashion show, or other events. In addition, they might display work in a gallery or other fine art setting.

Work Life

Video jockeys lead varying lifestyles based on the types of work they pursue. Those who work mostly in the nightlife scene work, unsurprisingly, in the late afternoon and evening—although they're likely to spend part of the day searching for existing footage, shooting original video, and preparing a plan for the show, if not creating a video beforehand. On the other hand, those who primarily create (and not perform) videos for concerts, tours, and other performances can work wherever and whenever they choose, so long as they complete the project by the deadline.

Community

Video jockeys are cutting-edge visual artists who know a little bit about everything, from film editing and graphic design to music, fine art, and pop culture. They're usually interested in archival footage, nightclub culture, and remixing as an art form. Possessing an original visual style or approach is essential for finding work, as are strong networking and self-promotion skills.

Career Path

Video jockeys come from a number of educational backgrounds, including film, music, graphic design, and visual art. Most start out working nightclubs and bars, but may book gigs at festivals or even tour alongside a musical artist as they become more established. They could also start taking on contract work for high-profile recording artists going on tour. Additionally, VJs are well positioned to work in the film industry as video editors or music video directors, in the theater industry as lighting designers, or in a diverse range of industries as interactive media specialists.

Finding Work

Video jockeys are freelancers who may be contracted or employed regularly by nightclubs, music venues, recording artists, tour and festival production companies, event management companies, lighting designers, theatrical directors, and more.

Aspiring VJs should aim to build exceptional videography and video editing skills, knowledge of a wide range of music, and a large collection of images, videos, and effects to remix live. A great reputation and large professional network are helpful for finding jobs, but in lieu of that, a sample video treatment of an album, concert setlist, or DJ playlist can serve well to demonstrate a VJ's skills to a potential employer.

Professional Skills

* Video editing software
* Analog video manipulation
* Broad musical knowledge
* Comfort with a wide range of video styles (ambient, dance, psychedelic, etc.)
* Lighting design
* Animation
* 3D digital design
* Networking
* Live performance

Industries

* Journalism.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1920 2023-10-04 00:06:14

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1924) Open Data Institute

Gist

The Open Data Institute (ODI) is a non-profit private company limited by guarantee, based in the United Kingdom. Founded by Sirs Tim Berners-Lee and Nigel Shadbolt in 2012, the ODI’s mission is to connect, equip and inspire people around the world to innovate with data.

Summary

The Open Data Institute (ODI) is a nonprofit based in London, United Kingdom, with a mission to work with companies and governments to build an open, trustworthy data ecosystem. The ODI works with a range of organizations, governments, public bodies, and civil society to create a world where data works for everyone.

The ODI was co-founded in 2012 by the inventor of the web Sir Tim Berners-Lee and artificial intelligence expert Sir Nigel Shadbolt to show the value of open data, and to advocate for the innovative use of open data to effect positive change across the globe.

Details

The Open Data Institute (ODI) is a non-profit private company limited by guarantee, based in the United Kingdom. Founded by Sir Tim Berners-Lee and Sir Nigel Shadbolt in 2012, the ODI's mission is to connect, equip and inspire people around the world to innovate with data.

The ODI's global network includes individuals, businesses, startups, franchises, collaborators and governments who help to achieve the mission.

Learning

The Open Data Institute provides in-house and online, free and paid-for training courses. ODI courses and learning materials cover theory and practice surrounding data publishing and use, from introductory overviews to courses for specific subject areas.

ODI 'Friday lunchtime lectures' cover a different theme each week surrounding the communication and application of data, and usually feature an external speaker.

ODI themes

In order to bring open data's benefits to specific areas of society and industry, the ODI focuses much of its research, publications and projects around specific themes and sectors.

Data infrastructure

Since its inception in 2012, the ODI has championed open data as a public good, stressing the need for effective governance models to protect it. In 2015, the ODI was instrumental in beginning a global discussion around the need to define and strengthen data infrastructure. In ‘Who owns our data infrastructure’, a discussion paper launched at the International Open Data Conference in Ottawa, the ODI explored what data ownership looked like and what we could expect from those who manage data that is fundamental to a functioning society.

The ODI is developing common definitions to describe how data is used, via a ‘Data Lexicon’, together with a ‘Data Spectrum’ visualisation that shows how the definitions fit together across the spectrum of closed, shared and open data. Definitions in the lexicon include:

* Closed data – accessible only by its subject, owner or holder.
* Shared data – available with named access (shared only with named people or organisations), group-based access (available to specific groups who meet certain criteria), or public access (available to anyone under terms and conditions that are not ‘open’).
* Open data – data that anyone can access, use and share.

According to the ODI, for data to be considered ‘open’, it must be accessible, which usually means published on the World Wide Web; be available in a machine-readable format and have a licence that permits anyone to access, use and share it – commercially and non-commercially.
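As an illustration of how these definitions fit together, the following Python sketch encodes the Data Spectrum as a small data structure and classifies a dataset as closed, shared or open using the criteria above. The class and field names are illustrative assumptions, not an ODI specification:

# A sketch of the Data Spectrum as a data structure, following the
# definitions above. Class and field names are illustrative, not an
# ODI specification.
from dataclasses import dataclass

@dataclass
class Dataset:
    accessible_on_web: bool      # published, e.g. on the World Wide Web
    machine_readable: bool       # available in a machine-readable format
    open_licence: bool           # licence lets anyone access, use, share
    named_access_only: bool = False   # shared: named people/organisations
    group_access_only: bool = False   # shared: groups meeting criteria

def spectrum_position(d: Dataset) -> str:
    """Classify a dataset as closed, shared, or open per the lexicon."""
    if d.accessible_on_web and d.machine_readable and d.open_licence:
        return "open"
    if d.named_access_only or d.group_access_only or d.accessible_on_web:
        return "shared"   # named, group-based, or public (non-open) access
    return "closed"       # only the subject, owner or holder can access it

print(spectrum_position(Dataset(True, True, True)))   # -> open
print(spectrum_position(Dataset(True, True, False)))  # -> shared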

Data as culture

The ODI's Data as Culture art programme engages artists to explore the use of data as an art material, to question its deep and wide implications on culture, and to challenge our understanding of what data is and its impact on people and society, our economy and businesses, and the environment.

ODI Associate Curator Hannah Redler selected ‘Data Anthropologies’ as Data as Culture's 2015–2016 theme, placing people at the centre of emerging data landscapes. For the theme, the ODI commissioned artists in residence Thomson & Craighead, Natasha Caruana and Alex McLean to exhibit work and create new data-driven pieces.

Global development

The ODI promotes data as a tool for global development, delivering support programmes in developing countries, conducting research, and helping to develop recommended practices and policies when applying open data to development challenges.

The ODI has supported open data leaders in governments around the world to boost economies, innovation, social impact and transparency using open data. As part of the Open Data for Development Network, funded by the International Development Research Centre, the ODI created the Open Data Leaders Network – a space for peer-learning.

In 2015, the ODI worked with the Burkina Faso Open Data Initiative, which used open data to ensure that citizens had access to real-time, open results data for the country's freest and fairest presidential elections in nearly three decades.

ODI sectors:

Finance

The ODI focuses on highlighting how data can enhance FinTech and banking and bring broad benefits to customers, regulators and industry. As part of a joint industry and government Open Banking Working Group, the institute created a framework for designing and implementing the Open Banking Standard. This highlights how banking customers can have more control over their data, and how to create an environment that maximises data reuse.

In ‘Data sharing and open data for banks’, a report for HM Treasury and Cabinet Office, the ODI explains why making data more accessible, and sharing transactional data via open APIs, could improve competition and consumer experience in UK banking. The paper focuses on key technologies and how they can support data sharing via APIs that preserve privacy.

The ODI's 2013 ‘Show me the money’ report focused on the UK peer-to-peer lending (P2P) market, revealing ‘lending by region’ using data from P2P platforms.

Agriculture and nutrition

Through research, open discussion and sector-focused events, the ODI is identifying challenges, solutions and global priorities in improving agriculture and nutrition with open data.

‘How can we improve agriculture, food and nutrition with open data?’, an ODI report written in partnership with the Global Open Data for Agriculture Initiative, presents 14 use cases showing open data use in agriculture, food production and consumption.

Open cities

The ODI runs an Open Data for Smart Cities training course, and works closely with relevant ODI Members to highlight opportunities for urban planners, entrepreneurs and city residents.

Global network:

Members

ODI Members are organisations and individuals, from large corporations to students, who explore, demonstrate and share the value of data.

The ODI grew its network of businesses, startups, academic establishments and individuals to over 1,300 in 2015, and launched student membership in line with its goal to help provide lifelong data expertise for young people around the world.

ODI Members (whether sponsors, partners or supporters) are all committed to unlocking the value of data, and are key to developing the ODI's professional network in the UK and internationally.

New member companies in 2015 included Deutsche Bank, Ocado Technology, SAP and The Bulmer Foundation.

Startups

Each year the ODI invites new applicants onto its ODI Startup programme in order to support them to develop a sustainable business, from idea to product to growth.

ODI Startups are provided with coaching and mentoring from external mentors, ad-hoc office space, discounted training courses, and access to other members of the ODI global network for networking and peer learning. The ODI assesses startups for the programme based on the strength of their idea and team, market opportunity and timing, potential scale, use of open data, and potential impact.

Thirty ODI Startups have joined the programme; between them, they employ 185 people and have secured over £10m in contracts and investments.

Nodes

ODI Nodes are franchises of the ODI.

Hosted by existing (for-profit or not-for-profit) organisations, ODI Nodes operate locally and are connected globally as part of the ODI Node network. Each node adopts the ODI Charter, an open codification of the guiding principles and rules under which the ODI operates. ODI HQ (based in London) charges ODI Nodes to be part of the network.

ODI Node types include pioneer nodes, learning nodes, community nodes and story nodes.

Pioneer nodes are ambassadors for the ODI's global network. They work collaboratively with HQ to help ensure the node network is sustainable, lead the delivery of quality services to market and develop initiatives that can scale across the network.

Learning nodes establish local training via ODI Registered Trainers, and focus on growing their reach by tailoring ODI Learning to local demand.

Community nodes convene local individuals and organisations interested in open innovation, delivering local events and workshops. They raise awareness of data's economic, social and environmental benefits, and encourage local collaboration.

Story nodes raise awareness, share challenges and promote best practice in harnessing data's economic, social and environmental benefits via blogs from their perspectives within their local contexts, across sectors and themes.

Advisory

The ODI provides consultancy, training, and research and development advisory services to help governments, organisations and businesses use open data to create economic, environmental and social value. It helps organisations assess how open data can affect them, implement open data strategies, and innovate with open data to solve problems and create new opportunities.

Software

The ODI Labs team creates tools, techniques and standards for open data publishing. Flagship ODI Labs products include Open Data Certificates, which show that data has been published in a sustainable and reusable way, and an Open Data Maturity Model with an associated Open Data Pathway tool for organisations to assess their data practices (developed in collaboration with the Department for Environment, Food and Rural Affairs (Defra)).

ODI Labs also focuses on implementing the World Wide Web Consortium's CSV on the Web recommendations via CSVlint, its validator for CSV files.
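For a sense of what a CSV validator checks, here is a minimal Python sketch of two basic structural checks: non-blank header names and consistent column counts per row. This is not CSVlint itself, whose checks follow the W3C CSV on the Web recommendations and go well beyond this sketch; the file name is a hypothetical placeholder:

# Not CSVlint: a minimal illustration of the kind of structural checks
# a CSV validator performs (non-blank headers, consistent column counts).
import csv

def basic_csv_check(path):
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader, None)
        if not header:
            return ["file is empty"]
        if any(not col.strip() for col in header):
            problems.append("blank column name in header")
        for lineno, row in enumerate(reader, start=2):
            if len(row) != len(header):
                problems.append(
                    f"row {lineno}: {len(row)} fields, expected {len(header)}")
    return problems

for issue in basic_csv_check("data.csv"):  # hypothetical file
    print(issue)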

Evidence and research:

ODI Stories

The ODI is committed to demonstrating evidence for open data's social, economic and environmental benefits with open data stories and long-form publications. These are generated from ODI research, the work of the ODI's global network of startups, members and nodes, and the ODI Showcase programme, which supports projects to achieve open data impact.

ODI Research

The ODI undertakes research on a broad range of areas related to open data. This includes exploring the evidence for the impact of open data; research and development of tools and standards to assist producers, publishers and users of open data; examining the implications, challenges and opportunities of deploying open data at web scale; and applications of open data to address or illuminate real-world problems.

Ongoing projects include: mapping and understanding the scale of open data's potential value in business, with reports to date analysing open data companies that create products and services, and how three big businesses – Thomson Reuters, Arup Group and Syngenta – create value with open innovation.

Data-and-Platform-as-a-Service (DaPaaS), which simplifies the consumption of open (and linked) data, by delivering a platform for publishing, consuming and reusing open data, as well as deploying open data applications.

OpenDataMonitor, which provides users with an online monitoring and analytics platform for open data in Europe. It will provide insights into open data availability and publishing platforms by developing and delivering an analysis and visualisation platform that harvests and analyses multilingual metadata from local, regional and national data catalogues.

Share-PSI is the European network for the exchange of experience and ideas around implementing open data policies in the public sector. It brings together 45 partners covering 26 countries with representatives from government departments, standards bodies, academic institutions, commercial organisations, trade associations and interest groups.

DaPaaS and OpenDataMonitor are co-funded by the Seventh Framework Programme for research and technological development (FP7). Share PSI is co-funded by the European Commission under the ICT Policy Support Programme (ICT PSP) as part of the Competitiveness and Innovation Framework Programme.

The UK Parliament's Public Accounts Committee noted in 2012 that the ODI would have a role in assessing what economic and public services benefits could be secured through making data freely available.

Board and senior leadership

The institute is led by:

* Jeni Tennison, chief executive officer
* Louise Burke, Finance and Compliance director
* Tim Berners-Lee, President and co-founder
* Nigel Shadbolt, Chairman and co-founder
* Roger Hampson, ODI Board member
* Martha Lane-Fox, ODI Board member
* Martin Tisné, ODI Board member
* Neelie Kroes, ODI Board member
* Richard Marsh, ODI Board member

Funding

The ODI is part core-grant and part income-backed. £10m of public funds were pledged to the ODI by the UK Technology Strategy Board in 2012 (£2m/year over five years). A further $4,850,000 of funding has been secured via Omidyar Network. The ODI derives its income from training, membership, research and development, services and events. In 2015, the balance between core grant and income was approximately 50:50.

More detail can be found on the ODI's public dashboards.

Additional Information

The Open Data Institute (ODI) works with companies and governments to build an open, trustworthy data ecosystem, where people can make better decisions using data and manage any harmful impacts.

ODI works with companies and governments to build an open, trustworthy data ecosystem through three key activities:

* Sector programs – coordinating organizations to tackle a social or economic problem with data and an open approach.
* Practical advocacy – working as a critical friend with businesses and government, and creating products they can use to support change.
* Peer networks – bringing together peers in similar situations to learn together.

The ODI was co-founded in 2012 by the inventor of the web Sir Tim Berners-Lee and AI expert Sir Nigel Shadbolt to address today’s global challenges using the web of data.

The ODI is an independent, non-profit, non-partisan company headquartered in London, with an international reach, hundreds of members, thousands of people trained, dozens of startups incubated, and a convening space based in the heart of London’s thriving Shoreditch area. The ODI invites everyone interested in developing with data – whether on an individual, organizational or global level – to get in touch.

Priorities as a partner of the Global Partnership for Sustainable Development Data

The ODI commits to bringing open data’s benefits to specific areas of society and industry. We are currently focusing our attention on six main sector themes: agriculture and nutrition, finance, global development, open cities (or smart cities), Data as Culture, and data infrastructure.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1921 2023-10-05 00:31:00

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1925) Blood bank

Gist

A blood bank is a place where blood is collected and stored before it is used for transfusions. Blood banking takes place in the lab. This is to make sure that donated blood and blood products are safe before they are used. Blood banking also determines the blood type.

Summary

A blood bank is an organization that collects, stores, processes, and transfuses blood. During World War I it was demonstrated that stored blood could safely be used, allowing for the development of the first blood bank in 1932. Before the first blood banks came into operation, a physician determined the blood types of the patient’s relatives and friends until the proper type was found, performed the crossmatch, bled the donor, and gave the transfusion to the patient. In the 1940s the discovery of many blood types and of several crossmatching techniques led to the rapid development of blood banking as a specialized field and to a gradual shift of responsibility for the technical aspects of transfusion from practicing physicians to technicians and clinical pathologists. The practicality of storing fresh blood and blood components for future needs made possible such innovations as artificial kidneys, heart-lung pumps for open-heart surgery, and exchange transfusions for infants with erythroblastosis fetalis.

Whole blood is donated and stored in units of about 450 ml (slightly less than one pint). Whole blood can be stored only for a limited time, but various components (e.g., red blood cells and plasma) can be frozen and stored for a year or longer. Therefore, most blood donations are separated and stored as components by the blood bank. These components include platelets to control bleeding; concentrated red blood cells to correct anemia; and plasma fractions, such as fibrinogen to aid clotting, immune globulins to prevent and treat a number of infectious diseases, and serum albumin to augment the blood volume in cases of shock. Thus, it is possible to serve the varying needs of five or more patients with a single blood donation.

Despite such replacement programs, many blood banks face continual problems in obtaining sufficient donations. The chronic shortage of donors has been alleviated somewhat by the development of apheresis, a technique by which only a desired blood component is taken from the donor’s blood, with the remaining fluid and blood cells immediately transfused back into the donor. This technique allows the collection of large amounts of a particular component, such as plasma or platelets, from a single donor.

Details

A blood bank is a center where blood gathered as a result of blood donation is stored and preserved for later use in blood transfusion. The term "blood bank" typically refers to a department of a hospital, usually within a clinical pathology laboratory, where blood products are stored and where pre-transfusion and blood compatibility testing is performed. However, it sometimes refers to a collection center, and some hospitals also perform collection. Blood banking includes tasks related to blood collection, processing, testing, separation, and storage.

For blood donation agencies in various countries, see list of blood donation agencies and list of blood donation agencies in the United States.

Types of blood transfused

Several types of blood transfusion exist:

* Whole blood, which is blood transfused without separation.

* Red blood cells (packed cells) are transfused to patients with anemia or iron deficiency; transfusion also helps improve the oxygen saturation of the blood. Red cells can be stored at 2.0–6.0 °C for 35–45 days.

* Platelet transfusion is given to patients with a low platelet count. Platelets can be stored at room temperature for up to 5–7 days. Single-donor platelets have a higher platelet count but are more expensive than regular pooled platelets.

* Plasma transfusion is indicated for patients with liver failure, severe infections or serious burns. Fresh frozen plasma can be stored at a very low temperature of −30 °C for up to 12 months. The separation of plasma from a donor's blood is called plasmapheresis.

History

While the first blood transfusions were made directly from donor to receiver before coagulation, it was discovered that by adding anticoagulant and refrigerating the blood it was possible to store it for some days, thus opening the way for the development of blood banks. John Braxton Hicks was the first to experiment with chemical methods to prevent the coagulation of blood at St Mary's Hospital, London, in the late 19th century. His attempts, using phosphate of soda, however, were unsuccessful.

The first non-direct transfusion was performed on March 27, 1914, by the Belgian doctor Albert Hustin, though this was a diluted solution of blood. The Argentine doctor Luis Agote used a much less diluted solution in November of the same year. Both used sodium citrate as an anticoagulant.

First World War

The First World War acted as a catalyst for the rapid development of blood banks and transfusion techniques. Inspired by the need to give blood to wounded soldiers in the absence of a donor, Francis Peyton Rous at the Rockefeller University (then The Rockefeller Institute for Medical Research) set out to solve the problems of blood transfusion. With a colleague, Joseph R. Turner, he made two critical discoveries: blood typing was necessary to avoid blood clumping (agglutination), and blood samples could be preserved using chemical treatment. Their March 1915 report on identifying a possible blood preservative described a failure: experiments with gelatine, agar, blood serum extracts, starch and beef albumin proved useless.

In June 1915, they made the first important report in the Journal of the American Medical Association: agglutination could be avoided if the blood samples of the donor and recipient were tested against each other beforehand. They developed a rapid and simple method for testing blood compatibility, in which clumping, and hence the suitability of the blood for transfusion, could be easily determined. They used sodium citrate to dilute the blood samples; after mixing the recipient's and donor's blood in 9:1 and 1:1 proportions, the blood would either clump or remain watery after 15 minutes. Their conclusion, and the accompanying medical advice, was clear:

[If] clumping is present in the 9:1 mixture and to a less degree or not at all in the 1:1 mixture, it is certain that the blood of the patient agglutinates that of the donor and may perhaps hemolyze it. Transfusion in such cases is dangerous. Clumping in the 1:1 mixture with little or none in the 9:1 indicates that the plasma of the prospective donor agglutinates the cells of the prospective recipient. The risk from transfusing is much less under such circumstances, but it may be doubted whether the blood is as useful as one which does not and is not agglutinated. A blood of the latter kind should always be chosen if possible.
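The quoted interpretation rules can be restated as a simple decision procedure. The Python sketch below treats clumping in each mixture as a yes/no observation (the original advice also weighed the degree of clumping), and the handling of clumping in both mixtures is a conservative assumption not spelled out in the quote:

# The 1915 interpretation rules quoted above, restated as code for
# clarity. A paraphrase of a historical method, not modern
# crossmatching practice; degrees of clumping are simplified to booleans.
def rous_turner_advice(clump_9to1: bool, clump_1to1: bool) -> str:
    if clump_9to1 and not clump_1to1:
        # Patient's blood agglutinates the donor's and may hemolyze it.
        return "dangerous: do not transfuse"
    if clump_1to1 and not clump_9to1:
        # Donor's plasma agglutinates the recipient's cells.
        return "lower risk, but prefer a non-agglutinating donor"
    if not clump_9to1 and not clump_1to1:
        return "preferred donor: no agglutination"
    # Both mixtures clump: not covered by the quoted advice;
    # treated conservatively here (assumption).
    return "agglutination in both mixtures: unsuitable donor"

print(rous_turner_advice(clump_9to1=True, clump_1to1=False))
# -> dangerous: do not transfuse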

Rous was well aware that the Austrian physician Karl Landsteiner had discovered blood types a decade earlier, but the practical usage was not yet developed, as he described: "The fate of Landsteiner's effort to call attention to the practical bearing of the group differences in human bloods provides an exquisite instance of knowledge marking time on technique." Transfusion was still not done because, until at least 1915, the risk of clotting was too great. In February 1916, they reported in the Journal of Experimental Medicine the key method for blood preservation. They replaced the earlier additive, gelatine, with a mixture of sodium citrate and glucose (dextrose) solution and found that "in a mixture of 3 parts of human blood, 2 parts of isotonic citrate solution (3.8 per cent sodium citrate in water), and 5 parts of isotonic dextrose solution (5.4 per cent dextrose in water), the cells remain intact for about 4 weeks." A separate report indicated that the use of citrate-saccharose (sucrose) could maintain blood cells for two weeks. They noticed that preserved blood behaved just like fresh blood and would "function excellently when reintroduced into the body." The use of sodium citrate with sugar, sometimes known as the Rous-Turner solution, was the main discovery that paved the way for the development of various blood preservation methods and of the blood bank.

Canadian Lieutenant Lawrence Bruce Robertson was instrumental in persuading the Royal Army Medical Corps (RAMC) to adopt the use of blood transfusion at the Casualty Clearing Stations for the wounded. In October 1915, Robertson performed his first wartime transfusion with a syringe to a patient who had multiple shrapnel wounds. He followed this up with four subsequent transfusions in the following months, and his success was reported to Sir Walter Morley Fletcher, director of the Medical Research Committee.

Robertson published his findings in the British Medical Journal in 1916 and, with the help of a few like-minded individuals (including the eminent physician Edward William Archibald), was able to persuade the British authorities of the merits of blood transfusion. Robertson went on to establish the first blood transfusion apparatus at a Casualty Clearing Station on the Western Front in the spring of 1917.

Oswald Hope Robertson, a medical researcher and U.S. Army officer, worked with Rous at the Rockefeller between 1915 and 1917, and learned the blood matching and preservation methods. He was attached to the RAMC in 1917, where he was instrumental in establishing the first blood banks, with soldiers as donors, in preparation for the anticipated Third Battle of Ypres. He used sodium citrate as the anticoagulant, and the blood was extracted from punctures in the vein, and was stored in bottles at British and American Casualty Clearing Stations along the Front. He also experimented with preserving separated red blood cells in iced bottles. Geoffrey Keynes, a British surgeon, developed a portable machine that could store blood to enable transfusions to be carried out more easily.

Expansion

The world's first blood donor service was established in 1921 by the secretary of the British Red Cross, Percy Lane Oliver. Volunteers were subjected to a series of physical tests to establish their blood group. The London Blood Transfusion Service was free of charge and expanded rapidly. By 1925, it was providing services for almost 500 patients and it was incorporated into the structure of the British Red Cross in 1926. Similar systems were established in other cities including Sheffield, Manchester and Norwich, and the service's work began to attract international attention. Similar services were established in France, Germany, Austria, Belgium, Australia and Japan.

Vladimir Shamov and Sergei Yudin in the Soviet Union pioneered the transfusion of cadaveric blood from recently deceased donors. Yudin performed such a transfusion successfully for the first time on March 23, 1930, and reported his first seven clinical transfusions with cadaveric blood at the Fourth Congress of Ukrainian Surgeons at Kharkiv in September. Also in 1930, Yudin organized the world's first blood bank at the Nikolay Sklifosovsky Institute, which set an example for the establishment of further blood banks in different regions of the Soviet Union and in other countries. By the mid-1930s the Soviet Union had set up a system of at least 65 large blood centers and more than 500 subsidiary ones, all storing "canned" blood and shipping it to all corners of the country.

One of the earliest blood banks was established by Frederic Durán-Jordà during the Spanish Civil War in 1936. Duran joined the Transfusion Service at the Barcelona Hospital at the start of the conflict, but the hospital was soon overwhelmed by the demand for blood and the paucity of available donors. With support from the Department of Health of the Spanish Republican Army, Duran established a blood bank for the use of wounded soldiers and civilians. The 300–400 ml of extracted blood was mixed with 10% citrate solution in a modified Duran Erlenmeyer flask. The blood was stored in a sterile glass enclosed under pressure at 2 °C. During 30 months of work, the Transfusion Service of Barcelona registered almost 30,000 donors, and processed 9,000 liters of blood.

In 1937 Bernard Fantus, director of therapeutics at the Cook County Hospital in Chicago, established one of the first hospital blood banks in the United States. In creating a hospital laboratory that preserved, refrigerated and stored donor blood, Fantus originated the term "blood bank". Within a few years, hospital and community blood banks were established across the United States.

Frederic Durán-Jordà fled to Britain in 1938, and worked with Janet Vaughan at the Royal Postgraduate Medical School at Hammersmith Hospital to create a system of national blood banks in London. With the outbreak of war looking imminent in 1938, the War Office created the Army Blood Supply Depot (ABSD) in Bristol headed by Lionel Whitby and in control of four large blood depots around the country. British policy through the war was to supply military personnel with blood from centralized depots, in contrast to the approach taken by the Americans and Germans where troops at the front were bled to provide required blood. The British method proved to be more successful at adequately meeting all requirements and over 700,000 donors were bled over the course of the war. This system evolved into the National Blood Transfusion Service established in 1946, the first national service to be implemented.

Medical advances

A blood collection program was initiated in the US in 1940 and Edwin Cohn pioneered the process of blood fractionation. He worked out the techniques for isolating the serum albumin fraction of blood plasma, which is essential for maintaining the osmotic pressure in the blood vessels, preventing their collapse.

The use of blood plasma as a substitute for whole blood and for transfusion purposes was proposed as early as 1918, in the correspondence columns of the British Medical Journal, by Gordon R. Ward. At the onset of World War II, liquid plasma was used in Britain. A large project, known as 'Blood for Britain', began in August 1940 to collect blood in New York City hospitals for the export of plasma to Britain. A dried plasma package was developed, which reduced breakage and made transportation, packaging, and storage much simpler.

The resulting dried plasma package came in two tin cans containing 400 cc bottles. One bottle contained enough distilled water to reconstitute the dried plasma contained within the other bottle. In about three minutes, the plasma would be ready to use and could stay fresh for around four hours. Charles R. Drew was appointed medical supervisor, and he was able to transform the test tube methods into the first successful mass production technique.

Another important breakthrough came in 1939–40 when Karl Landsteiner, Alex Wiener, Philip Levine, and R.E. Stetson discovered the Rh blood group system, which was found to be the cause of the majority of transfusion reactions up to that time. Three years later, the introduction by J.F. Loutit and Patrick L. Mollison of acid-citrate-dextrose (ACD) solution, which reduced the volume of anticoagulant, permitted transfusions of greater volumes of blood and allowed longer-term storage.

Carl Walter and W.P. Murphy Jr. introduced the plastic bag for blood collection in 1950. Replacing breakable glass bottles with durable plastic bags allowed for the evolution of a collection system capable of safe and easy preparation of multiple blood components from a single unit of whole blood.

Further extending the shelf life of stored blood up to 42 days was an anticoagulant preservative, CPDA-1, introduced in 1979, which increased the blood supply and facilitated resource-sharing among blood banks.

Collection and processing

In the U.S., certain standards are set for the collection and processing of each blood product. "Whole blood" (WB) is the proper name for one defined product, specifically unseparated venous blood with an approved preservative added. Most blood for transfusion is collected as whole blood. Autologous donations are sometimes transfused without further modification; however, whole blood is typically separated (via centrifugation) into its components, with red blood cells (RBC) in solution being the most commonly used product. Units of WB and RBC are both kept refrigerated at 33.8 to 42.8 °F (1.0 to 6.0 °C), with maximum permitted storage periods (shelf lives) of 35 and 42 days respectively. RBC units can also be frozen when buffered with glycerol, but this is an expensive and time-consuming process, and is rarely done. Frozen red cells are given an expiration date of up to ten years and are stored at −85 °F (−65 °C).

The less-dense blood plasma is made into a variety of frozen components, and is labeled differently based on when it was frozen and what the intended use of the product is. If the plasma is frozen promptly and is intended for transfusion, it is typically labeled as fresh frozen plasma. If it is intended to be made into other products, it is typically labeled as recovered plasma or plasma for fractionation. Cryoprecipitate can be made from other plasma components. These components must be stored at 0 °F (−18 °C) or colder, but are typically stored at −22 °F (−30 °C). The layer between the red cells and the plasma is referred to as the buffy coat and is sometimes removed to make platelets for transfusion. Platelets are typically pooled before transfusion and have a shelf life of 5 to 7 days, or 3 days once the facility that collected them has completed its tests. Platelets are stored at room temperature (72 °F or 22 °C) and must be rocked/agitated. Since they are stored at room temperature in nutritive solutions, they are at relatively high risk for growing bacteria.
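For reference, the storage figures from the two paragraphs above can be gathered into a single lookup table. The Python sketch below simply restates those numbers; the dictionary layout is illustrative, and the fresh frozen plasma shelf life is taken from the summary earlier in this post:

# The storage figures stated above, gathered into one lookup table.
# Values restate the text (U.S. standards); the layout is illustrative.
STORAGE = {
    # product: (storage temperature, maximum shelf life)
    "whole blood":          ("1.0-6.0 C",          "35 days"),
    "red blood cells":      ("1.0-6.0 C",          "42 days"),
    "frozen red cells":     ("-65 C (glycerol)",   "10 years"),
    "fresh frozen plasma":  ("-18 C or colder",    "12 months"),
    "platelets":            ("22 C, agitated",     "5-7 days"),
}

for product, (temp, life) in STORAGE.items():
    print(f"{product:20} {temp:20} {life}")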

Some blood banks also collect products by apheresis. The most common component collected is plasma via plasmapheresis, but red blood cells and platelets can be collected by similar methods. These products generally have the same shelf life and storage conditions as their conventionally-produced counterparts.

Donors are sometimes paid; in the U.S. and Europe, most blood for transfusion is collected from volunteers while plasma for other purposes may be from paid donors.

Most collection facilities as well as hospital blood banks also perform testing to determine the blood type of patients and to identify compatible blood products, along with a battery of tests (e.g. disease) and treatments (e.g. leukocyte filtration) to ensure or enhance quality. The increasingly recognized problem of inadequate efficacy of transfusion is also raising the profile of RBC viability and quality. Notably, U.S. hospitals spend more on dealing with the consequences of transfusion-related complications than on the combined costs of buying, testing/treating, and transfusing their blood.

Storage and management

Routine blood storage is 42 days or 6 weeks for stored packed red blood cells (also called "StRBC" or "pRBC"), by far the most commonly transfused blood product, and involves refrigeration but usually not freezing. There has been increasing controversy about whether a given product unit's age is a factor in transfusion efficacy, specifically on whether "older" blood directly or indirectly increases risks of complications. Studies have not been consistent on answering this question, with some showing that older blood is indeed less effective but with others showing no such difference; nevertheless, as storage time remains the only available way to estimate quality status or loss, a first-in-first-out inventory management approach is currently standard. It is also important to consider that there is large variability in storage results for different donors, which, combined with limited available quality testing, poses challenges to clinicians and regulators seeking reliable indicators of quality for blood products and storage systems.

Transfusions of platelets are comparatively far less numerous, but they present unique storage/management issues. Platelets may only be stored for 7 days, due largely to their greater potential for contamination, which is in turn due largely to a higher storage temperature.

RBC storage lesion

Insufficient transfusion efficacy can result from red blood cell (RBC) blood product units damaged by so-called storage lesion—a set of biochemical and biomechanical changes which occur during storage. With red cells, this can decrease viability and ability for tissue oxygenation. Although some of the biochemical changes are reversible after the blood is transfused, the biomechanical changes are less so, and rejuvenation products are not yet able to adequately reverse this phenomenon.

Current regulatory measures are in place to minimize RBC storage lesion—including a maximum shelf life (currently 42 days), a maximum auto-hemolysis threshold (currently 1% in the US), and a minimum level of post-transfusion RBC survival in vivo (currently 75% after 24 hours). However, all of these criteria are applied in a universal manner that does not account for differences among units of product; for example, testing for the post-transfusion RBC survival in vivo is done on a sample of healthy volunteers, and then compliance is presumed for all RBC units based on universal (GMP) processing standards. RBC survival does not guarantee efficacy, but it is a necessary prerequisite for cell function, and hence serves as a regulatory proxy. Opinions vary as to the best way to determine transfusion efficacy in a patient in vivo. In general, there are not yet any in vitro tests to assess quality deterioration or preservation for specific units of RBC blood product prior to their transfusion, though there is exploration of potentially relevant tests based on RBC membrane properties such as erythrocyte deformability and erythrocyte fragility (mechanical).

Many physicians have adopted a so-called "restrictive protocol"—whereby transfusion is held to a minimum—due in part to the noted uncertainties surrounding storage lesion, in addition to the very high direct and indirect costs of transfusions, along with the increasing view that many transfusions are inappropriate or use too many RBC units.

Platelet storage lesion

Platelet storage lesion is a very different phenomenon from RBC storage lesion, due largely to the different functions of the products and purposes of the respective transfusions, along with different processing issues and inventory management considerations.

Alternative inventory and release practices

Although, as noted, the primary inventory-management approach is first in, first out (FIFO) to minimize product expiration, there are some deviations from this policy, both in current practice and under research. For example, exchange transfusion of RBC in neonates calls for use of blood product that is five days old or less, to "ensure" optimal cell function. Also, some hospital blood banks will attempt to accommodate physicians' requests to provide low-aged RBC product for certain kinds of patients (e.g. cardiac surgery).

More recently, novel approaches are being explored to complement or replace FIFO. One is to balance the desire to reduce average product age (at transfusion) with the need to maintain sufficient availability of non-outdated product, leading to a strategic blend of FIFO with last in, first out (LIFO).
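A small Python sketch of the inventory logic described above: FIFO issue by default, with a freshness override of the kind used for neonatal exchange transfusion (units five days old or less). The ages, the 42-day expiry cutoff (from the storage discussion above) and the function interface are illustrative assumptions, not a clinical system:

# A sketch of the inventory logic described above: FIFO by default, with
# a young-unit override for neonatal exchange transfusion. Illustrative
# only; real blood-bank systems track far more than unit age.
def pick_unit(unit_ages_days, neonatal=False):
    """Return the index of the unit to issue, or None if none qualifies."""
    if neonatal:
        # Use the freshest qualifying unit, five days old or less.
        young = [(age, i) for i, age in enumerate(unit_ages_days) if age <= 5]
        return min(young)[1] if young else None
    # Default FIFO: issue the oldest non-expired unit first.
    usable = [(age, i) for i, age in enumerate(unit_ages_days) if age <= 42]
    return max(usable)[1] if usable else None

inventory = [3, 17, 40, 44, 1]  # unit ages in days (hypothetical)
print(pick_unit(inventory))                 # -> 2 (the 40-day unit)
print(pick_unit(inventory, neonatal=True))  # -> 4 (the 1-day unit)

A FIFO/LIFO blend of the kind under research would sit between these two branches, issuing fresher units some fraction of the time while keeping enough turnover to limit expiry.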

Long-term storage

"Long-term" storage for all blood products is relatively uncommon, compared to routine/short-term storage. Cryopreservation of red blood cells is done to store rare units for up to ten years. The cells are incubated in a glycerol solution which acts as a cryoprotectant ("antifreeze") within the cells. The units are then placed in special sterile containers in a freezer at very low temperatures. The exact temperature depends on the glycerol concentration.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1922 2023-10-06 00:21:48

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1926) School

Gist

A school is an educational institution designed to provide learning spaces and learning environments for the teaching of students under the direction of teachers. Most countries have systems of formal education, which is sometimes compulsory. In these systems, students progress through a series of schools.

Summary

A school is a division of the school system consisting of students comprising one or more grade groups or other identifiable groups, organized as one unit with one or more teachers to give instruction of a defined type and housed in one or more buildings. More than one school may be housed in one building or compound, as is the case when elementary and secondary schools are housed in the same building or compound.

Details

A school is an educational institution designed to provide learning spaces and learning environments for the teaching of students under the direction of teachers. Most countries have systems of formal education, which is sometimes compulsory. In these systems, students progress through a series of schools. The names for these schools vary by country but generally include primary school for young children and secondary school for teenagers who have completed primary education. An institution where higher education is taught is commonly called a university college or university.

In addition to these core schools, students in a given country may also attend schools before and after primary (elementary in the U.S.) and secondary (middle school in the U.S.) education. Kindergarten or preschool provide some schooling to very young children (typically ages 3–5). University, vocational school, college or seminary may be available after secondary school. A school may be dedicated to one particular field, such as a school of economics or dance. Alternative schools may provide nontraditional curriculum and methods.

Non-government schools, also known as private schools, may be required when the government does not meet adequate or specific educational needs. Private schools may be religious, such as Christian schools, gurukula (Hindu schools), madrasa (Arabic schools), hawzas (Shi'i Muslim schools), and yeshivas (Jewish schools); others have a higher standard of education or seek to foster particular personal achievements. Schools for adults include institutions of corporate training, military education and training, and business schools.

Critics of school often accuse the school system of failing to adequately prepare students for their future lives, of encouraging certain temperaments while inhibiting others, of prescribing students exactly what to do, how, when, where and with whom, which would suppress creativity, and of using extrinsic measures such as grades and homework, which would inhibit children's natural curiosity and desire to learn.

In homeschooling and distance education, teaching and learning take place independent from the institution of school or in a virtual school outside a traditional school building, respectively. Schools are organized in several different organizational models, including departmental, small learning communities, academies, integrated, and schools-within-a-school.

Etymology

The word school derives from Greek, originally meaning "leisure" and also "that in which leisure is employed", but later "a group to whom lectures were given, school".

History and development

The concept of grouping students together in a centralized location for learning has existed since Classical antiquity. Formal schools have existed at least since ancient Greece, ancient Rome, ancient India (Gurukul), and ancient China. The Byzantine Empire had an established schooling system beginning at the primary level. According to Traditions and Encounters, the founding of the primary education system began in 425 AD and "... military personnel usually had at least a primary education ...". The sometimes efficient and often large government of the Empire meant that educated citizens were a must. Although Byzantium lost much of the grandeur of Roman culture and extravagance in the process of surviving, the Empire emphasized efficiency in its war manuals. The Byzantine education system continued until the empire's collapse in 1453 AD.

In Western Europe, a considerable number of cathedral schools were founded during the Early Middle Ages in order to teach future clergy and administrators, with the oldest still existing, and continuously operated, cathedral schools being The King's School, Canterbury (established 597 CE), King's School, Rochester (established 604 CE), St Peter's School, York (established 627 CE) and Thetford Grammar School (established 631 CE). Beginning in the 5th century CE, monastic schools were also established throughout Western Europe, teaching religious and secular subjects.

In Europe, universities emerged during the 12th century; here, scholasticism was an important tool, and the academicians were called schoolmen. During the Middle Ages and much of the Early Modern period, the main purpose of schools (as opposed to universities) was to teach the Latin language. This led to the term grammar school, which in the United States informally refers to a primary school, but in the United Kingdom means a school that selects entrants based on ability or aptitude. The school curriculum has gradually broadened to include literacy in the vernacular language and technical, artistic, scientific, and practical subjects.

Obligatory school attendance became common in parts of Europe during the 18th century. In Denmark-Norway, this was introduced as early as in 1739–1741, the primary end being to increase the literacy of the almue, i.e., the "regular people". Many of the earlier public schools in the United States and elsewhere were one-room schools where a single teacher taught seven grades of boys and girls in the same classroom. Beginning in the 1920s, one-room schools were consolidated into multiple classroom facilities with transportation increasingly provided by kid hacks and school buses.

The Islamic world also developed a school system in the modern sense of the word. Emphasis was placed on knowledge, which required a systematic way of teaching and spreading it, as well as purpose-built structures. At first, mosques combined religious observance with learning activities, but by the 9th century the madrassa was introduced: a school built independently of the mosque, such as al-Qarawiyyin, founded in 859 CE. Muslims were also the first to make the madrassa system a public domain under the caliph's control.

Under the Ottomans, the towns of Bursa and Edirne became the main centers of learning. The Ottoman system of Külliye, a building complex containing a mosque, a hospital, madrassa, and public kitchen and dining areas, revolutionized the education system, making learning accessible to a broader public through its free meals, health care, and sometimes free accommodation.

Regional terms

The term school varies by country, as do the names of the various levels of education within the country.

United Kingdom and Commonwealth of Nations

In the United Kingdom, the term school refers primarily to pre-university institutions, and these can, for the most part, be divided into pre-schools or nursery schools, primary schools (sometimes further divided into infant school and junior school), and secondary schools. Various types of secondary schools in England and Wales include grammar schools, comprehensives, secondary moderns, and city academies. In Scotland, where schools may go by different names, there is only one type of secondary school, though schools may be funded either by the state or independently. Scotland's school performance is monitored by Her Majesty's Inspectorate of Education; Ofsted reports on performance in England, and Estyn reports on performance in Wales.

In the United Kingdom, most schools are publicly funded and known as state schools or maintained schools, in which tuition is provided free. There are also privately funded schools, known as private or independent schools, which charge fees. Some of the most selective and expensive private schools are known as public schools, a usage that can be confusing to speakers of North American English, since in North American usage a public school is one that is publicly funded or run.

In much of the Commonwealth of Nations, including Australia, New Zealand, India, Pakistan, Bangladesh, Sri Lanka, South Africa, Kenya, and Tanzania, the term school refers primarily to pre-university institutions.

India

In ancient India, schools took the form of Gurukuls: traditional Hindu residential schools of learning, typically at the teacher's house or a monastery. In India today, schools are commonly known by the Sanskrit terms Vidyashram, Vidyalayam, Vidya Mandir, and Vidya Bhavan; in southern Indian languages, a school is known as a Pallikoodam or PaadaSaalai. During Mughal rule, Madrasahs were introduced in India to educate the children of Muslim parents. British records show that indigenous education was widespread in the 18th century, with a school for every temple, mosque, or village in most regions. The subjects taught included reading, writing, arithmetic, theology, law, astronomy, metaphysics, ethics, medical science, and religion.

Under British rule, Christian missionaries from England, the United States, and other countries established missionary and boarding schools in India. Later, as these schools gained popularity, more were started, and some gained prestige. These schools marked the beginning of modern schooling in India, and the syllabus and calendar they followed became the benchmark for schools in modern India. Today most schools follow the missionary school model for tutoring, subject/syllabus, and governance, with minor changes.

Schools in India range from large campuses with thousands of students and hefty fees to schools with little or no campus where children are taught under a tree, free of cost. There are various boards of schools in India, namely the Central Board for Secondary Education (CBSE), the Council for the Indian School Certificate Examinations (CISCE), the Madrasa Boards of various states, the Matriculation Boards of various states, the State Boards of various states, and the Anglo Indian Board, among others. Today's typical syllabus includes language(s), mathematics, science (physics, chemistry, biology), geography, history, general knowledge, and information technology/computer science. Extracurricular activities include physical education/sports and cultural activities such as music, choreography, painting, and theatre/drama.

Europe

In much of continental Europe, the term school usually applies to primary education, with primary schools lasting between four and nine years, depending on the country. It also applies to secondary education, with secondary schools often divided between Gymnasiums and vocational schools, which, again depending on the country and type of school, educate students for between three and six years. In Germany, students graduating from Grundschule are not allowed to progress directly into a vocational school; instead, they are supposed to proceed to one of Germany's general-education schools, such as the Gesamtschule, Hauptschule, Realschule, or Gymnasium. When they leave that school, usually at age 15–19, they may proceed to a vocational school. The term school is rarely used for tertiary education, except for some upper or high schools (German: Hochschule), a term that covers colleges and universities.

In Eastern Europe, modern (post-World War II) schools often combine primary and secondary education, while secondary education may be divided into complete and incomplete stages. Such schools are classified as middle schools of general education and, for technical purposes, provide up to three "degrees" of education: the first, primary; the second, incomplete secondary; and the third, complete secondary. The first two degrees (eight years in total) are almost always included, while the last (two years) permits students to pursue vocational or specialized education.

North America and the United States

In North America, the term school can refer to any educational institution at any level and covers all of the following: preschool (for toddlers), kindergarten, elementary school, middle school (also called intermediate school or junior high school, depending on specific age groups and geographic region), high school (or in some cases senior high school), college, university, and graduate school.

In the United States, school performance through high school is monitored by each state's department of education. Charter schools are publicly funded elementary or secondary schools that have been freed from some of the rules, regulations, and statutes that apply to other public schools. The terms grammar school and grade school are sometimes used to refer to a primary school due to British colonial legacies. In addition, there are tax-funded magnet schools which offer different programs and instruction not available in traditional schools.

Africa

In West Africa, "school" can also refer to "bush" schools, Quranic schools, or apprenticeships. These schools include formal and informal learning.

Bush schools are training camps that pass down cultural skills, traditions, and knowledge to their students. They are partly similar to traditional Western schools in that they are separated from the larger community. These schools are located in forests outside the towns and villages, and the space used is reserved solely for them. Once the students have arrived in the forest, they cannot leave until their training is complete; visitors are prohibited from these areas.

Instead of being separated by age, Bush schools are separated by gender. Women and girls cannot enter the boys' bush school territory and vice versa. Boys receive training in cultural crafts, fighting, hunting, and community laws among other subjects. Girls are trained in their own version of the boys' bush school. They practice domestic affairs such as cooking, childcare, and being a good wife. Their training is focused on how to be a proper woman by societal standards.

Qur'anic schools are the principal way of teaching the Quran and knowledge of the Islamic faith. These schools also fostered literacy and writing during the time of colonization. Today, the emphasis is on the different levels of reading, memorizing, and reciting the Quran. Attending a Qur'anic school is how children become recognized members of the Islamic faith; children often attend both a state school and a Qur'anic school.

In Mozambique specifically, there are two kinds of Qur'anic schools: the tariqa-based and the Wahhabi-based. What distinguishes them is who controls them: tariqa schools are controlled at the local level, while Wahhabi schools are controlled by the Islamic Council. Within the Qur'anic school system, there are levels of education, ranging from a basic level of understanding, called chuo and kioni in local languages, to the most advanced, called ilimu.

In Nigeria, the term school broadly covers daycares, nursery schools, primary schools, secondary schools, and tertiary institutions. Primary and secondary schools are either privately funded by religious institutions and corporate organisations or government-funded. Government-funded schools are commonly referred to as public schools. Students spend six years in primary school, three years in junior secondary school, and three years in senior secondary school. The first nine years of formal schooling are compulsory under the Universal Basic Education Commission (UBEC) program. Tertiary institutions include public and private universities, polytechnics, and colleges of education. Universities can be funded by the federal government, state governments, religious institutions, or individuals and organisations.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1923 2023-10-07 00:02:58

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1927) Hairline Fracture

Gist

A fracture that appears as a narrow crack along the surface of a bone.

Summary

A stress fracture is a fatigue-induced bone fracture caused by repeated stress over time. Instead of resulting from a single severe impact, stress fractures are the result of accumulated injury from repeated submaximal loading, such as running or jumping. Because of this mechanism, stress fractures are common overuse injuries in athletes.

Stress fractures can be described as small cracks in the bone, or hairline fractures. Stress fractures of the foot are sometimes called "march fractures" because of the injury's prevalence among heavily marching soldiers. Stress fractures most frequently occur in weight-bearing bones of the lower extremities, such as the tibia and fibula (bones of the lower leg), metatarsal and navicular bones (bones of the foot). Less common are stress fractures to the femur, pelvis, and sacrum. Treatment usually consists of rest followed by a gradual return to exercise over a period of months.

Signs and symptoms

Stress fractures are typically discovered after a rapid increase in exercise. Symptoms usually have a gradual onset, with complaints that include isolated pain along the shaft of the bone, especially during activity, decreased muscular strength, and cramping. In cases of fibular stress fractures, pain occurs proximal to the lateral malleolus and increases with activity, subsiding with rest. If pain is constantly present, it may indicate a more serious bone injury. There is usually an area of localized tenderness on or near the bone and generalized swelling in the area. Pressure applied to the bone may reproduce symptoms and reveal crepitus in well-developed stress fractures. Anterior tibial stress fractures elicit focal tenderness on the anterior tibial crest, while posterior medial stress fractures are tender at the posterior tibial border.

Details

A hairline fracture typically results from injury and can cause swelling and tenderness. Treatment may involve applying ice to the affected area.

A hairline fracture, also known as a stress fracture, is a small crack or severe bruise within a bone. This injury is most common in athletes, especially athletes of sports that involve running and jumping. People with osteoporosis can also develop hairline fractures.

Hairline fractures are often caused by overuse or repetitive actions, in which microscopic damage is done to the bone over time. Not allowing yourself enough time to heal between activities is often a factor in the likelihood of getting this injury.

The bones of the foot and leg are especially prone to hairline fractures, because they absorb a lot of stress during running and jumping. Within the foot, the second and third metatarsals are most commonly affected: they are thin bones, and they take the point of impact when you push off your foot to run or jump. It’s also common to experience a hairline fracture in your:

* heel
* ankle bones
* navicular, a bone on the top of the midfoot

What are the symptoms of a hairline fracture?

The most common symptom of a hairline fracture is pain. This pain can gradually get worse over time, especially if you don’t stop weight-bearing activity. Pain is usually worse during activity and lessens during rest. Other symptoms include:

* swelling
* tenderness
* bruising

What causes a hairline fracture?

Most hairline fractures are caused by either overuse or repetitive activity. An increase in either the duration or the frequency of activity can result in a hairline fracture. This means that even if you are used to running, suddenly increasing either your distance or the number of times per week you run can cause this injury.

Another similar cause of a hairline fracture is changing the type of exercise you do. For example, even if you’re an excellent swimmer, it’s still possible to sustain an injury from suddenly engaging in another intense activity like running, no matter how fit you may be.

Bones adapt to increased forces put on them through various activities: new bone forms to replace old bone, a process called remodeling. When breakdown happens more rapidly than new bone can form, the likelihood of a hairline fracture increases.

Who’s most at risk for developing a hairline fracture?

There are also a number of risk factors that increase your chances of getting a hairline fracture:

* Certain sports: Participants in high-impact sports, such as track and field, basketball, tennis, dance, ballet, long-distance running, and gymnastics, increase their chances of getting a hairline fracture.
* Gender: Women, especially women with absent menstrual periods, are at increased risk of hairline fractures. In fact, female athletes may be at a greater risk because of a condition called the “female athlete triad,” in which extreme dieting and exercise may result in eating disorders, menstrual dysfunction, and premature osteoporosis. As this develops, so does a female athlete’s chance of injury.
* Foot problems: Problematic footwear can cause injuries. So can high arches, rigid arches, or flat feet.
* Weakened bones: Conditions such as osteoporosis, or medications that affect bone density and strength, can cause hairline fractures even when performing normal, daily activities.
* Previous hairline fractures: Having one hairline fracture increases your chances of having another.
* Lack of nutrients: Lack of vitamin D or calcium can make your bones more susceptible to fracture. People with eating disorders are also at risk for this reason. Additionally, there can be a greater risk of this injury in the winter months when you may not be getting enough vitamin D.
* Improper technique: Blisters, bunions, and tendonitis can affect how you run, altering which bones are impacted by certain activities.
* Change in surface: Changes in playing surfaces can cause undue stress to the bones of the feet and legs. For example, a tennis player moving from a grass court to a hard court may develop injuries.
* Improper equipment: Poor running shoes can contribute to your likelihood of getting a hairline fracture.

How’s a hairline fracture diagnosed?

If you believe you have a hairline fracture, it’s important to seek treatment from your doctor as soon as possible.

Your doctor will ask about your medical history and general health. They’ll also ask questions about your diet, medications, and other risk factors. Then, they may perform several exams, including:

* Physical examination: Your doctor will inspect the painful area. They’ll probably apply gentle pressure to see if it causes pain. Pain in response to pressure is often the key for your doctor to diagnose a hairline fracture.
* MRI: The best imaging test for detecting hairline fractures is an MRI. This test uses magnets and radio waves to provide images of your bones. An MRI can detect a fracture before an X-ray can, and it does a better job of determining the type of fracture as well.
* X-ray: Hairline fractures often aren’t visible on X-rays immediately after the injury. The fracture may become visible a few weeks after the injury takes place, when a callus has formed around the healing area.
* Bone scan: A bone scan involves receiving a small dose of radioactive material through a vein. This substance accumulates in areas where bones are repairing. But because this test will indicate an increased blood supply to a particular area, it won’t specifically prove there’s a hairline fracture. It’s suggestive but not diagnostic of a hairline fracture, as other conditions can cause an abnormal bone scan.

Can other conditions develop if hairline fractures aren’t treated?

Ignoring the pain caused by a hairline fracture can actually result in the bone breaking completely. Complete breaks will take longer to heal and involve more complicated treatments. It’s important to seek out help from your doctor and treat a hairline fracture as soon as possible.

How are hairline fractures treated?

If you suspect you have a hairline fracture, there are a number of first aid treatments you can perform before you go to the doctor.

Home treatments

Follow the RICE method:

* rest
* ice
* compression
* elevation

Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen (Advil, Motrin) and aspirin (Bayer) can help with pain and swelling.

It’s important to seek further treatment from your doctor if the pain becomes severe or doesn’t get better with rest. How your doctor chooses to treat you will depend on both the severity and location of your injury.

Medical treatments

Your doctor may recommend that you use crutches to keep weight off an injured foot or leg. You can also wear protective footwear or a cast.

Because it usually takes six to eight weeks to heal completely from a hairline fracture, it’s important to modify your activities during that time. Cycling and swimming are great alternatives to more high-impact exercises.

Some hairline fractures will require surgery, in which the bone is supported with fasteners such as pins or screws that hold it together during the healing process.

What’s the outlook for someone with a hairline fracture?

It’s important to avoid high-impact activities during the healing process. Returning to high-impact activities, especially the one that caused the injury in the first place, will not only delay healing but also increase the risk of a complete fracture of the bone.

Your doctor may advise taking another X-ray to ensure healing before allowing you to return to your previous activities. Even after the hairline fracture is healed, it’s important to gradually return to exercise.

In rare instances, hairline fractures won’t heal properly. This results in chronic, long-term pain. It’s important to talk to your doctor to prevent pain and worsening injuries.

Additional Information

Stress fracture is any overuse injury that affects the integrity of bone. Stress fractures were once commonly described as march fractures, because they were reported most often in military recruits who had recently increased their level of impact activities. The injuries have since been found to be common in both competitive and recreational athletes who participate in repetitive activities, such as running, jumping, marching, and skating.

Stress fractures result from microdamage that accumulates during exercise, exceeding the body’s natural ability to repair the damage. Microdamage accumulation can cause pain, weaken the bone, and lead to a stress fracture. The vast majority of stress fractures occur in the lower extremities, most commonly involving the tibia or fibula of the lower leg or the metatarsal or navicular bones of the foot and ankle. Treatment of a stress fracture depends on both the site and the severity of the injury.

During activities that involve repeated weight-bearing or impact (repetitive loading cycles), bones are exposed to mechanical stresses that can lead to microdamage, primarily in the form of microscopic cracks. When given adequate time for recovery, the body can heal microdamage and further strengthen bones through remodeling and repair mechanisms. These healing mechanisms depend on many factors, including hormonal, nutritional, and genetic ones. Under certain conditions, however, such as starting a new training program or increasing the volume of a current one, the bone damage can be enough to overwhelm the body’s ability to repair it. In those circumstances, cracks and inflammation can accumulate, leaving the bones at risk of fatiguing and fracturing. The fatigue failure event results in a stress fracture. The severity of the injury is determined by the location of the stress fracture and the extent to which the fracture propagates across the involved bone.

Diagnosis

Physical examination of the patient and knowledge of patient history are fundamental for the practitioner in promptly diagnosing a stress fracture. Patients typically present with an insidious onset of localized pain at or around the site of the injury. Initially, the pain from a stress fracture is only experienced during strenuous activities, such as running and jumping. However, as the injury worsens, pain may be present during activities of daily living, such as walking or even sitting. Physical examination classically reveals a focal area of bony tenderness at the site of the fracture. Soreness in the surrounding joint and muscle is common, and, in severe cases, palpable changes to the bone at the site of injury may be present.

Multiple imaging techniques are routinely used in diagnosing stress fractures. Plain radiography (X-ray) is the most commonly used test to diagnose a stress fracture. However, within the first few weeks of injury, X-rays often will not reveal the presence of the fracture. More sensitive approaches for early diagnosis include bone scans and magnetic resonance imaging (MRI). MRI is useful particularly because it can show damage to both bone and nearby structures, such as muscles or ligaments.

Classification

Stress fractures can be classified as high- or low-risk injuries based on their location. This classification allows a practitioner to quickly implement treatment for each stress fracture. Low-risk sites include the medial tibias (inner sides of the shins), the femoral shafts (thighbones), the first four metatarsals of the foot, and the ribs. Those locations tend to heal well and have a relatively low likelihood of recurrence or completion (worsening). Conversely, high-risk stress fracture sites have a comparatively high complication rate and require prolonged recovery or surgery before the individual can resume repetitive physical activity. Common high-risk sites include the femoral neck (hip joint), anterior tibia (front of the shin), medial malleolus (inner side of the ankle), patella (kneecap), navicular bone (front of the lower ankle), sesamoid bones (ball of the foot), and proximal fifth metatarsal (outer side of the foot).

Treatment

The treatment of stress fractures varies with the location of the injury, the severity of the injury, and treatment goals. Low-risk stress fractures generally heal faster and have a lower incidence of poor outcomes than high-risk stress fractures. Depending on the particular injury, treatment may include discontinuation of the precipitating activity only, discontinuation of all training activities, or, for more serious injuries, crutches or surgery. For minor injuries, healing may take as little as three to six weeks, with avoidance of the precipitating activity and with continued cross-training followed by a gradual return to the pre-injury level of participation. More severe injuries require more aggressive treatment and often take two to three months to heal. While treating the stress fracture, it is important to evaluate and modify risk factors that may predispose the athlete to future injuries, including anatomic abnormalities, biomechanical forces, hormonal imbalances, and nutritional deficiencies. A return to play after a stress fracture usually is granted once the athlete is pain-free with activities, is non-tender to palpation, and, for high-risk sites, shows evidence of healing on imaging.

[Image: locations of a stress fracture]


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#1924 2023-10-08 00:18:39

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1928) Basket

Gist

While baskets are usually used for harvesting, storage and transport, specialized baskets are used as sieves for a variety of purposes, including cooking, processing seeds or grains, tossing gambling pieces, rattles, fans, fish traps, and laundry.

Summary

A basket is a container that is traditionally constructed from stiff fibers and can be made from a range of materials, including wood splints, runners, and cane. While most baskets are made from plant materials, other materials such as horsehair, baleen, or metal wire can be used. Baskets are generally woven by hand. Some baskets are fitted with a lid, while others are left open on top.

Uses

Baskets serve utilitarian as well as aesthetic purposes. Some baskets are ceremonial, that is religious, in nature. While baskets are usually used for harvesting, storage and transport, specialized baskets are used as sieves for a variety of purposes, including cooking, processing seeds or grains, tossing gambling pieces, rattles, fans, fish traps, and laundry.

History

Prior to the invention of woven baskets, people used tree bark to make simple containers. These containers could be used to transport gathered food and other items, but crumbled after only a few uses. Weaving strips of bark or other plant material to support the bark containers would be the next step, followed by entirely woven baskets. The last innovation appears to be baskets so tightly woven that they could hold water.

Depending on soil conditions, baskets may or may not be preserved in the archaeological record. Sites in the Middle East show that weaving techniques were used to make mats and possibly also baskets circa 8000 BCE. Twined baskets date back to 7000 BCE in Oasisamerica. Baskets made with interwoven techniques were common by 3000 BCE.

Baskets were originally designed as multi-purpose vessels to carry and store materials and to keep stray items about the home. The plant life available in a region affects the choice of material, which in turn influences the weaving technique. Rattan and other members of the Arecaceae or palm tree family, the thin grasses of temperate regions, and broad-leaved tropical bromeliads each require a different method of twisting and braiding to be made into a basket. The practice of basket making has evolved into an art. Artistic freedom allows basket makers a wide choice of colors, materials, sizes, patterns, and details.

The carrying of a basket on the head, particularly by rural women, has long been practiced. Representations of this in Ancient Greek art are called Canephorae.

Figurative and literary usage

The phrase "to hell in a handbasket" means to deteriorate rapidly. The origin of this use is unclear. "Basket" is sometimes used as an adjective for a person who is born out of wedlock. This occurs more commonly in British English. The word “basket” is frequently used in the colloquial “don’t put all your eggs in one basket.” In this sense, the basket is a metaphor for a chance at success.

Details

Basketry is an art and craft of making interwoven objects, usually containers, from flexible vegetable fibres, such as twigs, grasses, osiers, bamboo, and rushes, or from plastic or other synthetic materials. The containers made by this method are called baskets.

The Babylonian god Marduk “plaited a wicker hurdle on the surface of the waters. He created dust and spread it on the hurdle.” Thus ancient Mesopotamian myth describes the creation of the earth using a reed mat. Many other creation myths place basketry among the first of the arts given to humans. The Dogon of West Africa tell how their first ancestor received a square-bottomed basket with a round mouth like those still used there in the 20th century. This basket, upended, served him as a model on which to erect a world system with a circular base representing the sun and a square terrace representing the sky.

Like the decorative motifs of any other art form, the geometric, stylized shapes may represent natural or supernatural objects, such as the snakes and pigeon eyes of Borneo, and the kachina (deified ancestral spirit), clouds, and rainbows of the Hopi Indians of Arizona. The fact that these motifs are given a name, however, does not always mean that they have symbolic significance or express religious ideas.

Sometimes symbolism is associated with the basket itself. Among the Guayaki Indians of eastern Paraguay, for example, it is identified with the female. The men are hunters, the women are bearers as they wander through the forest; when a woman dies, her last burden basket is ritually burned and thus dies with her.

Though it would appear that basketry might best be defined as the art or craft of making baskets, the fact is that the name is one of those whose limits seem increasingly imprecise the more one tries to grasp them. The category basket may include receptacles made of interwoven, rather rigid material, but it may also include pliant sacks made of a mesh indistinguishable from netting—or garments or pieces of furniture made of the same materials and using the same processes as classical basketmaking. In fact, neither function nor appearance nor material nor mode of construction is of itself sufficient to delimit the field of what common sense nevertheless recognizes as basketry.

The consistency of the materials used distinguishes basketry, which is handmade, from weaving, in which the flexibility of the threads requires the use of an apparatus to put tension on the warp, the lengthwise threads. What basketry has in common with weaving is that both are means of assembling separate fibres by twisting them together in various ways.

Materials and techniques

There is no region in the world, except in the northernmost and southernmost parts, where people do not have at their disposal materials—such as twigs, roots, canes, and grasses—that lend themselves to the construction of baskets. The variety and quality of materials available in a particular region bears on the relative importance of basketry in a culture and on the types of basketry produced by the culture. Rainy, tropical zones, for example, have palms and large leaves that require plaiting techniques different from those required for the grass stalks that predominate in the dry, subtropical savanna regions or for the roots and stalks found in cold temperate zones. The interrelationship between materials and methods of construction might in part explain why the principal types of basketry are distributed in large areas that perhaps correspond to climatic zones as much as to cultural groups: the predominance of sewed coiling, for example, in the African savannas and in the arid zones of southern Eurasia and of North America; of spiral coiling and twining in temperate regions; and of various forms of plaiting in hot regions. There is also a connection between the materials used and the function of the basket, which determines whether rigid or soft materials—either as found in nature or specially prepared—are used. In East Asia, for example, twined basketry fashioned out of thin, narrow strips (called laths) of bamboo is effective for such objects as cages and fish traps that require solid partitions with openings at regular intervals. Soft and rigid fibres are often used together: the rigid fibres provide the shape of the object and soft ones act as a binder to hold the shape.

Finally, materials are chosen with a view toward achieving certain aesthetic goals; conversely, these aesthetic goals are limited by the materials available to the basket maker. The effects most commonly sought in a finished product are delicacy and regularity of the threads; a smooth, glossy surface or a dull, rough surface; and colour, whether natural or dyed. Striking effects can be achieved from the contrast between threads that are light and dark, broad and narrow, dull and shiny—contrasts that complement either the regularity or the decorative motifs obtained by the intricate work of plaiting.

Despite an appearance of almost infinite variety, the techniques of basketry can be grouped into several general types according to how the elements making up the foundation (the standards, which are analogous to the warp of cloth) are arranged and how the moving element (the thread) holds the standards by intertwining among them.

Coiled construction

The distinctive feature of this type of basketry is its foundation, which is made up of a single element, or standard, that is wound in a continuous spiral around itself. The coils are kept in place by the thread, the work being done stitch by stitch and coil by coil. Variations within this type are defined by the method of sewing, as well as by the nature of the coil, which largely determines the type of stitch.

Spiral coiling

The most common form is spiral coiling, in which the nature of the standard introduces two main subvariations: when it is solid, made up of a single whole stem, the thread must squeeze the two coils together binding each to the preceding one (giving a diagonal, or twilled, effect); with a double or triple standard the thread catches in each stitch one of the standards of the preceding coil. Many other variations of spiral coiling are possible. Distribution of this type of basketry construction extends in a band across northern Eurasia and into northwest North America; it is also found in the southern Pacific region (China and Melanesia) and, infrequently, in Africa (Rhodesia).

Sewed coiling

Sewed coiling has a foundation of multiple elements—a bundle of fine fibres. Sewing is done with a needle or an awl, which binds each coil to the preceding one by piercing it through with the thread. The appearance varies according to whether the thread conceals the foundation or not (bee-skep variety) or goes through the centre of the corresponding stitch on the preceding coil (split stitch, or furcate). This sewed type of coiled ware has a very wide distribution: it is almost the exclusive form in many regions of North and West Africa; it existed in ancient Egypt and occurs today in Arabia and throughout the Mediterranean basin as far as western Europe; it also occurs in North America, in India, and sporadically in the Asiatic Pacific. A variety of sewed coiling, made from a long braid sewed in a spiral, has been found throughout North Africa since ancient Egyptian times.

Half-hitch and knotted coiling

In half-hitch coiling, the thread forms half hitches (simple knots) holding the coils in place, the standard serving only as a support. There is a relationship between half-hitch coiling and the half-hitch net (without a foundation), the distribution of which is much more extensive. The half-hitch type of basketry appears to be limited to Australia, Tasmania, Tierra del Fuego in South America, and Pygmy territory in Africa. In knotted coiling, the thread forms knots around two successive rows of standards; many varieties can be noted in the Congo, in Indonesia, and among the Basket Makers, an ancient culture of the plateau area of southwestern United States, centred in parts of Arizona, New Mexico, Colorado, and Utah.

The half-hitch and knotted-coiling types of basketry each have a single element variety in which there is no foundation, the thread forming a spiral by itself analogous to the movement of the foundation in the usual type. An openwork variety of the single element half hitch (called cycloid coiling) comes from the Malay area; and knotted single-element basketry, from Tierra del Fuego and New Guinea.

Noncoiled construction

Compared to the coiled techniques, all other types of basketry have a certain unity of construction: the standards form a foundation that is set up when the work is begun and that predetermines the shape and dimensions of the finished article. Nevertheless, if one considers the part played by the standards and the threads, respectively, most noncoiled basketry can be divided into three main groups.

Wattle construction

A single layer of rigid, passive, parallel standards is held together by flexible threads in one of three ways, each representing a different subtype. (1) The bound, or wrapped, type, which is not very elaborate, has a widespread distribution, being used for burden baskets in the Andaman Islands in the Bay of Bengal, for poultry cages in different parts of Africa and the Near East, and for small crude baskets in Tierra del Fuego. (2) In the twined type, the threads are twisted in twos or threes, two or three strands twining around the standards and enclosing them. The twining may be close or openwork or may combine tight standards and spaced threads. Close twining mainly occurs in three zones: Central Africa, Australia, and western North America, where there are a number of variations such as twilled and braided twining and zigzag or honeycomb twining. The openwork subtype is found almost universally because it provides a perfect solution to the problem of maintaining rigid standards with even spacing for fish traps and hurdles (portable panels used for enclosing land or livestock). Using spaced threads, this subtype is also used for flexible basketry among the Ainu of northern Japan and the Kuril Islands and sporadically throughout the northern Pacific. (3) The woven type, sometimes termed wickerwork, is made of stiff standards interwoven with flexible threads. It is the type most commonly found in European and African basketry and is found sporadically in North and South America and in Near and Far Eastern Asia.

Lattice construction

In lattice construction a frame made of two or three layers of passive standards is bound together by wrapping the intersections with a thread. The ways of intertwining hardly vary at all and the commonest is also the simplest: the threads are wrapped in a spiral around two layers of standards. This method is widely used throughout the world in making strong, fairly rigid objects for daily use: partitions for dwellings, baskets to be carried on the back, cages, and fish traps (with a Mediterranean variety composed of three layers of standards and a knotted thread). The same method, moreover, can be adapted for decorative purposes, with threads—often of different colours—to form a variety of motifs similar to embroidery. This kind of lattice construction appears mainly among the Makah Indians of the U.S. Pacific Northwest and in Central and East Africa.

Matting or plaited construction

Standards and threads are indistinguishable in matting or plaited construction; they are either parallel and perpendicular to the edge (straight basketry) or oblique (diagonal basketry). Such basketry is closest to textile weaving. The materials used are almost always woven, using the whole gamut of weaving techniques (check, twill, satin, and innumerable decorative combinations). Depending on the material and on the technique used, this type of construction lends itself to a wide variety of forms, in particular to the finest tiny boxes and to the most artistic large plane surfaces. It is widely distributed but seems particularly well adapted to the natural resources and to the kind of life found in intertropical areas. The regions where it is most common are different from, and complementary with, those specializing in coiled and twined ware; that is, eastern and southeastern Asia (from Japan to Malaysia and Indonesia), tropical America, and the island of Madagascar off the east coast of Africa.

One variety of matting or plaited work consists of three or four layers of elements, which are in some cases completely woven and in others form an intermediate stage between woven and lattice basketry. The intermediate type (with two layered elements, one woven) is known as hexagonal openwork and is the technique most common in openwork basketry using flat elements. It has a very wide distribution: from Europe to Japan, southern Asia, Central Africa, and the tropical Americas. A closely woven fabric in three layers, forming a six-pointed star design, is found on a small scale in Indonesia and Malaysia.

Decorative devices

Clearly, a variety of decorative possibilities arise from the actual work of constructing basketry. These, combined with the possible contrasts of colour and texture, would seem to provide extensive decorative possibilities. Each particular type of basketry, however, imposes certain limitations, which may lead to convergent effects: hexagonal openwork, for example, forms the same pattern the world over, just as twilled weaving forms the same chevrons (vertical or horizontal). Each type, also, allows a certain range of freedom in the decoration within the basic restrictions imposed by the rigidity of the interlaced threads, which tends to impose geometric designs or at least to geometrize the motifs. In general, the two main types of basketry—plaited and coiled—lend themselves to two different kinds of decoration. Coiled basketry lends itself to radiating designs, generally star- or flower-shaped compositions or whirling designs sweeping from the centre to the outer edge. Plaited basketry, whether diagonal or straight, lends itself to over-all compositions of horizontal stripes and, in the detail, to intertwined shapes that result from the way two series of threads, usually in contrasting colours, appear alternately on the surface of the basket.

Other art forms have been influenced ornamentally by basketry’s plaited shapes and characteristic motifs. Because of their intrinsic decorative value—and not because the medium dictates it—these shapes and motifs have been reproduced in such materials as wood, metal, and clay. Some notable examples are the interlacing decorations carved on wood in the Central African Congo; basketry motifs engraved into metalwork and set off with inlaid silver by Frankish artisans in the Merovingian period (6th to 8th century); and osier patterns (molded basketwork designs) developed in 18th-century Europe to decorate porcelain.

Uses

Household basketry objects consist primarily of receptacles for preparing and serving food and vary widely in dimension, shape, and watertightness. Baskets are used the world over for serving dry food, such as fruit and bread, and they are also used as plates and bowls. Sometimes—if made waterproof by a special coating or by particularly close plaiting—they are used as containers for liquids. Such receptacles are found in various parts of Europe and Africa (Chad, Rwanda, Ethiopia) and among several groups of North American Indians. By dropping hot stones into the liquid, the Hupa Indians of northwestern California even boil water or food in baskets.

Openwork, which is permeable and can be made with mesh of various sizes, is used for such utensils as sieves, strainers, and filters. Such basketry objects are used in the most primitive cultures as well as in the most modern (the tea strainers used in Japan, for example). The flexibility of work done on the diagonal is put to particularly ingenious use by the Africans in beer making and, above all, by Amazonian Indians in extracting the toxic juices from manioc pulp (a long basketwork cylinder is pulled down at the bottom by ballasting and, as it gets longer, compresses the pulp with which it had previously been filled).

Finally, basketry plays an important part as storage containers. For personal possessions, there are baskets, boxes, and cases of all kinds—nested boxes from Madagascar, for example, which are made in a graduated series so that they fit snugly one within another, or caskets with multiple compartments from Indonesia. For provisions, there are baskets in various sizes that can be hung up out of the reach of predators, and there are baskets so large that they are used as granaries. In Sudan in Africa, as in southern Europe, these are usually raised off the ground on a platform and sheltered by a large roof or stored in the house, particularly in Mediterranean regions; for preserving cereals they are sometimes caulked with clay.

Some of these granaries are not far from being houses. Basketry used in house construction, however, usually consists of separately made elements that are later assembled; partitions of varying degrees of rigidity used as walls or to fence in an enclosure; roofs made of great basketry cones (in Chad, for example); and, above all, mats, which have numerous uses in the actual construction as well as in the equipping of a house. Probably the oldest evidence of basketry is the mud impressions of woven mats that covered the floors of houses in the Neolithic (c. 7000 BCE) village of Jarmo in northern Iraq. Mats were used in ancient Egypt to cover floors and walls and were also rolled up and unrolled in front of doorways, as is shown by stone replicas decorating the doorways of tombs dating from the Old Kingdom, c. 2686–2160 BCE. It is known from paintings that they were made of palm leaves and were decorated with polychrome (multicoloured) stripes, much like the mats found in Africa and the Near East.

Two notable examples of modern mats are the pliant ones, made of pandanus leaves, found in southern Asia and Oceania and the tatami, which provide the unit of measurement of the surface area of Japanese dwellings. Just as basketry has been used for making containers and mats, so from ancient times to modern it has been used for making such pieces of furniture as cradles, beds, tables, and various kinds of seats and cabinets.

In addition to the use of basketry for skirts and loincloths (particularly common in Oceania), supple diagonal plaiting has even been used to make dresses (Madagascar). Plaited raincoats exist throughout eastern Asia as well as in Portugal. Basketry is most frequently used for shoes (particularly sandals, some of which come close to covering the foot and are plaited in various materials) and, of course, for hats—the conical hat particularly common in eastern Asia, for example, and the skullcaps and brimmed hats found in Africa, the Americas, and much of Europe.

To protect head and body against weapons, thick, strong basketry has been used in the form of helmets (Africa, the Assam region in India, and Hawaii); armour (for example, the coconut-palm-fibre armour made by the Micronesian inhabitants of the Gilbert Islands for protection against weapons edged with sharks’ teeth); and shields, for which basketry is eminently suitable because of its lightness. In addition to clothes themselves, there are numerous basketry accessories: small purses, combs, headdresses, necklaces, bracelets, and anklets. In West Africa there are even chains made of fine links and pendants plaited in a beautiful, bright yellow straw in imitation of gold jewelry. Many objects are plaited purely for decoration or amusement, such as ornaments for Christmas trees or harvest festivals and scale models and little animal or human figurines that sometimes serve as children’s toys.

There is often no very clear distinction between accessories and ritual ornaments, as in the ephemeral headdresses made for initiation rites by the young Masa people in the Cameroon; dance accessories; ornaments for masks, such as the leaf masks that the Bobo of Upper Volta make with materials from the bush.

More clearly ritual in nature are the palms (woven into elaborate geometric shapes and liturgical symbols) carried in processions on Palm Sunday by Christians in various Mediterranean regions; some, like those from Elche in Spain, are over six feet (nearly two metres) high and take days to make. In Bali an infinite variety of plaiting techniques are involved in the preparation of ritual offerings, which is a permanent occupation for the women, a hundred of whom may work for a month or two preparing for certain great festivals.

Baskets are used throughout the world as snares and fish traps, which allow the catch to enter but not to leave. They are often used in conjunction with a corral (on land) or a weir (an enclosure set in the water), which are themselves made either of pliable nets or panels of basketry. In Africa as well as in eastern Asia a basketry object is used for fishing in shallow water; open at top and bottom, this object is deposited sharply on the bottom of shallow rivers or ponds, and, when a fish is trapped, it is retrieved by putting a hand in through the opening at the top.

Basketry is also used in harvesting foodstuffs; for example, in the form of winnowing trays (from whose French name, van, the French word for basketry, vannerie, is derived). One basket, found in the Sahel region south of the Sahara, is swung among wild grasses and in knocking against the stalks collects the grain.

Baskets are used as transport receptacles; they are made easier to carry by the addition of handles or straps depending on whether the basket is carried by hand, on a yoke, or on the back. The two-handled palm-leaf basket, common in North Africa and the Middle East, existed in ancient Mesopotamia; in Europe and eastern Asia, the one-handled basket, which comes in a variety of shapes, sizes, and types of plaiting, is common; in Africa, however, where burdens are generally carried on the head, there is no difference between baskets used for transporting goods and those used for storing.

Burden baskets are large, deep baskets in which heavy loads can be carried on the back; they are provided either with a headband that goes across the forehead (especially American Indian, southern Asia), or with two straps that go over the shoulders (especially in Southeast Asia and Indonesia). There are three fairly spectacular types of small basketry craft found in regions as far apart as Peru, Ireland, and Mesopotamia: the balsa (boats) of Lake Titicaca, made of reeds and sometimes fitted out with a sail also made of matting; the British coracle, the basketry framework of which is covered with a skin sewn onto the edge; and the gufa of the Tigris, which is round like the coracle and made of plaited reeds caulked with bitumen.

Origins and centres of development

Something about the prehistoric origins of basketry can be assumed from archaeological evidence. The evidence that does exist from Neolithic times onward has been preserved because of conditions of extreme dryness (Egypt, Peru, southern Spain) or extreme humidity (peat bogs in northern Europe, lake dwellings in Switzerland); because it had been buried in volcanic ash (Oregon); or because, like the mats at Jarmo, it left impressions in the mud or on a pottery base that had originally been molded onto a basketry foundation. More recently, when written and pictorial documentation is available, an activity as humble and banal as basketry is not systematically described but appears only by chance in narratives, inventories, or pictures in which basketry objects figure as accessories.

On the evidence available, researchers have concluded that the salient characteristics of basketry are the same today as they were before the 3rd millennium BCE. Then, as now, there was a wide variety of types (and a wide distribution of most types): coiled basketry either spiral or sewed, including furcate and sewed braid (mainly in Europe and the Near East as far as the Indus valley); wattlework with twined threads (America, Europe, Egypt) and with woven threads (Jarmo, Peru, Egypt); and plaited construction with twilled weaving (Palestine, Europe).

To list the centres of production would almost be to list all human cultural groups. Some regions, however, stand out for the emphasis their inhabitants place on basketry or for the excellence of workmanship there.

American Indian basketry

In western North America the art of basketry has attained one of its highest peaks of perfection and has occupied a preeminent place in the equipment of all the groups who practice it. North American Indians are particularly noted for their twined and coiled work. The Chilkat and the Tlingit of the Pacific Northwest are known for the extreme delicacy of their twined basketry; the California Indians, for the excellence of their work with both types; and the Apache and the Hopi and other Pueblo Indians of the southwestern interior of the United States, for coiled basketry remarkable for its bold decoration and delicate technique.

Central and South American basketry is similar in materials and plaiting processes. The notable difference lies in the finishes used, and in this the Guyana Indians of northeastern South America excel, using a technique of fine plaiting with a twill pattern.

Oceanic basketry

Various plaiting processes have been highly developed in Oceania, not merely for making utilitarian articles but also for ceremonial items and items designed to enhance prestige, such as finely twined cloaks in New Zealand, statues in Polynesia, masks in New Guinea, and decorated shields in the Solomon Islands. In Oceania, as in southern Asia, there is a vegetal civilization, in which basketry predominates over such arts as metalwork and pottery. Particular mention should be made of the Senoi of the Malay Peninsula and of the Australian Aborigines, whose meagre equipment includes delicate basketry done by the women. The Senoi use various plaiting techniques, and the Australians use tight twining.

African basketry

Africa presents an almost infinite variety of basket types and uses. In such regions as Chad and Cameroon, basketry is in evidence everywhere—edging the roads, roofing the houses, decorating the people, and providing the greater part of domestic equipment. The delicate twill plaited baskets of the Congo region are notable for their clever patterning. In the central and eastern Sudanese zone the rich decorative effect of the sewed, coiled baskets is derived from the interplay of colours. People living in the lake area of the Great Rift Valley produce elegant coiled and twined basketry of restrained decoration and careful finishing.

East Asian basketry

People of the temperate zones of East Asia produce a variety of work. Bamboo occupies a particularly important place both in functional basketry equipment and in aesthetic objects (Japanese flower baskets, for example). The production of decorative objects is one feature that distinguishes East Asian basketry from the primarily utilitarian basketry of the Near East and Africa. Southeast Asia and Madagascar are among the places known for their fine decorative plaiting techniques.

European basketry

In Europe almost the whole range of basketry techniques is used, chiefly in making utilitarian objects (receptacles for domestic and carrying purposes and household furniture) but also in making objects primarily for decorative use.

Modern basketry

Even in the modern industrial world, there seems to be a future for basketry. Because of its flexibility, lightness, permeability, and solidity, it will probably remain unsurpassed for some utilitarian ends; such articles, however, because they are entirely handmade, will gradually become luxury items. As a folk art, on the other hand, basketry needs no investment of money: the essential requirements remain a simple awl, nimble fingers, and patience.


#1925 2023-10-09 00:06:54

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 48,410

Re: Miscellany

1929) Hide-and-seek

Gist

Hide-and-seek is a children's game in which a group of children hide in secret places and one child then goes to look for them.

Summary

Hide-and-seek is an old and popular children’s game in which one player closes his or her eyes for a brief period (often counting to 100) while the other players hide. The seeker then opens his or her eyes and tries to find the hiders; the first one found is the next seeker, and the last is the winner of the round. In one of many forms of the game, the hiders try to run back to “home base” while the seeker is away looking for them; if all of the hiders return safely, the same player must seek again in the next round.

The game is played differently in various regions; sometimes the seeker may be helped by those he finds. Alternatively, only one child hides and is sought by all the rest, as in sardines, where the hider is joined by seekers surreptitiously as they find him (the name of the game coming from the crowded condition of the hiding place). Hide-and-seek appears to be equivalent to the game apodidraskinda, described by the 2nd-century Greek writer Julius Pollux. In modern Greece hide-and-seek is called kryfto.

The game is played throughout the world. In Spain the game is called el escondite, in France jeu de cache-cache, in Israel machboim, in South Korea sumbaggoggil, and in Romania de-a v-ați ascunselea. Hide-and-seek is known throughout South and Central America under such names as tuja (Bolivia), escondidas (Ecuador and Chile), and cucumbè (Honduras and El Salvador).

There are many variants on the game. For instance, Igbo children in Nigeria play oro, a combination of hide-and-seek and tag in which the seeker stands in the centre of a large circle drawn in the sand and tells the other players to hide. The seeker then steps out of the circle, finds the other children, and chases them; the children must run into the circle to be safe. A child touched before reaching the circle becomes the next seeker.

Details

Hide-and-seek (sometimes known as hide-and-go-seek) is a popular children's game in which at least two players (usually at least three) conceal themselves in a set environment, to be found by one or more seekers. The game is played by one chosen player (designated as being "it") counting to a predetermined number with eyes closed while the other players hide. After reaching this number, the player who is "it" calls "Ready or not, here I come!" or "Coming, ready or not!" and then attempts to locate all concealed players.

The game can end in one of several ways. Most commonly, the game ends when the player chosen as "it" locates all the players; the player found first is the loser and becomes "it" in the next game, while the player found last is the winner. Another common variation has the seeker counting at "home base"; the hiders can either remain hidden or come out of hiding to race to home base, and once they touch it they are "safe" and cannot be tagged.
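
Since this is a forum about games and puzzles, here is a minimal Python sketch of the round structure just described. Everything in it is illustrative: the player names are invented, and the order in which hiders are found is simply randomized rather than modelled.

    import random

    def play_round(seeker, hiders):
        # One round: the seeker "finds" the hiders in a random order.
        # First found becomes the next seeker; last found wins the round.
        order = random.sample(hiders, len(hiders))
        print(f"{seeker} counts to 100, then calls 'Ready or not, here I come!'")
        for player in order:
            print(f"{seeker} finds {player}")
        print(f"{order[-1]} wins the round; {order[0]} is 'it' next time.")
        return order[0]

    players = ["Asha", "Ben", "Chen", "Dev"]  # invented names
    seeker = players[0]
    for _ in range(2):  # play two rounds
        hiders = [p for p in players if p != seeker]
        seeker = play_round(seeker, hiders)

Each call returns the next seeker, so repeated rounds chain together exactly as the rules above describe.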

The game is an example of an oral tradition, as it is commonly passed along from child to child.

Variants

Different versions of the game are played around the world, under a variety of names.

One variant is called "Sardines", in which only one person hides and the others must find him or her, hiding with the hider when they do so. The hiding places become progressively more cramped, like sardines in a tin. The last person to find the hiding group is the loser and becomes the hider for the next round. A. M. Burrage calls this version of the game "Smee" in his 1931 ghost story of the same name.

In the Peanuts comic strip by Charles Schulz, a variation of Sardines called "Ha Ha Herman" is played, in which the seekers call out "ha ha" and the person hiding has to respond by saying "Herman".

In some versions of the game, after the first hider is caught or if no other players can be found over a period of time, the seeker calls out a previously agreed phrase (such as "Olly olly oxen free", "Come out, come out wherever you are", or "All in, all in, everybody out there all in free") to signal the other hiders to return to base for the next round.

In the variant known as forty forty, the seeker must return to "home base" after finding the hiders, before the hiders get back; conversely, the hiders must get back to "home base" before the seeker sees them and returns. The hiders stay hidden until they are spotted by the seeker, who chants, "Forty, forty, I see you" (sometimes shortened to "Forty, forty, see you"). Once spotted, a hider must run to "home base" (where the seeker was counting while the other players hid) and touch it before being "tipped" (tagged, or touched) by the seeker; a hider who is tagged becomes the new seeker. Forty forty has many regional names, including "block one two three" in North East England and Scotland, "relievo one two three" in Wilmslow, "forty forty" in South East England, "mob" in Bristol and South Wales, "pom pom" in Norwich, "I-erkey" in Leicester, "hicky one two three" in Chester, "rally one two three" in Coventry, "Ackey 123" in Birmingham, and "44 Homes" in Australia.

History

The earliest known version of the game was called apodidraskinda, first mentioned by the 2nd-century Greek writer Julius Pollux. Then as now, it was played with one player closing their eyes and counting while the other players hid. The game also appears in an early painting discovered at Herculaneum, a town buried by the eruption of Vesuvius in 79 CE.

International competition

The Hide-and-Seek World Championship was an international hide-and-seek competition held from 2010 through 2017. The game is a derivative of the Italian version of hide-and-seek, "nascondino".

The championship was first held in 2010 in Bergamo, Italy, as an initiative of CTRL Magazine. Though it started out as a joke, the event grew year after year. The 2016 and 2017 competitions took place in Consonno, an abandoned ghost town sometimes called "the Italian Las Vegas", in the province of Lecco, Lombardy.

The winning team was awarded "The Golden Fig Leaf", the fig leaf being a biblical symbol of hiding, in reference to the story of Adam and Eve.

Yasuo Hazaki, a graduate of Nippon Sport Science University and professor of media studies at Josai University in Sakado, Japan, set up a campaign in 2013 to promote hide-and-seek for the 2020 Olympics in Tokyo. The game Hazaki was promoting was a slightly different traditional Japanese game, closer to tag. Hazaki contacted the Nascondino World Championship organizers and said that nascondino's rules were a more suitable candidate for the Olympics.

