1350) Polycarbonate
Summary
Polycarbonate (PC) is a tough, transparent synthetic resin employed in safety glass, eyeglass lenses, and compact discs, among other applications. PC is a special type of polyester used as an engineering plastic owing to its exceptional impact resistance, tensile strength, ductility, dimensional stability, and optical clarity. It is marketed under trademarks such as Lexan and Makrolon.
PC was introduced in 1958 by Bayer AG of Germany and in 1960 by the General Electric Company of the United States. As developed by these companies, PC is produced by a polymerization reaction between bisphenol A, a solid derived from phenol and acetone, and phosgene, a highly reactive and toxic gas made by reacting carbon monoxide with chlorine. The resultant polymers (long, multiple-unit molecules) are made up of repeating units that contain two aromatic (benzene) rings and are connected by carbonate (−O−(C=O)−O−) groups.
Mainly by virtue of the aromatic rings incorporated into the polymer chain, PC has exceptional stiffness. It is also highly transparent, transmitting approximately 90 percent of visible light. Since the mid-1980s these properties, combined with the excellent flow of the polymer when molten, have led to its growing use in the injection molding of compact discs. Because PC has an impact strength considerably higher than that of most plastics, it is also fabricated into large carboys for water, shatterproof windows, safety shields, and safety helmets.
Details
Polycarbonates (PC) are a group of thermoplastic polymers containing carbonate groups in their chemical structures. Polycarbonates used in engineering are strong, tough materials, and some grades are optically transparent. They are easily worked, molded, and thermoformed. Because of these properties, polycarbonates find many applications. Polycarbonates do not have a unique resin identification code (RIC) and are identified as "Other" (number 7) on the RIC list. Products made from polycarbonate can contain the precursor monomer bisphenol A (BPA).
Structure
A representative subunit of a typical bisphenol A polycarbonate is the dicarbonate (PhOC(O)OC6H4)2CMe2, formally derived from bisphenol A and two equivalents of phenol.
Carbonate esters have planar OC(OC)2 cores, which confer rigidity. The unique O=C bond is short (1.173 Å in the example depicted), while the C-O bonds are more ether-like (bond distances of 1.326 Å in the example depicted). Polycarbonates received their name because they are polymers containing carbonate groups (−O−(C=O)−O−). A balance of useful features, including temperature resistance, impact resistance, and optical properties, positions polycarbonates between commodity plastics and engineering plastics.
Properties and processing
Polycarbonate is a durable material. Although it has high impact resistance, it has low scratch resistance, so a hard coating is applied to polycarbonate eyewear lenses and exterior automotive components. The characteristics of polycarbonate are comparable to those of polymethyl methacrylate (PMMA, acrylic), but polycarbonate is stronger and holds up better at extreme temperatures. Thermally processed material is usually totally amorphous and, as a result, is highly transparent to visible light, with better light transmission than many kinds of glass.
Polycarbonate has a glass transition temperature of about 147 °C (297 °F), so it softens gradually above this point and flows above about 155 °C (311 °F). Tools must be held at high temperatures, generally above 80 °C (176 °F) to make strain-free and stress-free products. Low molecular mass grades are easier to mold than higher grades, but their strength is lower as a result. The toughest grades have the highest molecular mass, but are more difficult to process.
Unlike most thermoplastics, polycarbonate can undergo large plastic deformations without cracking or breaking. As a result, it can be processed and formed at room temperature using sheet metal techniques, such as bending on a brake. Even for sharp angle bends with a tight radius, heating may not be necessary. This makes it valuable in prototyping applications where transparent or electrically non-conductive parts are needed, which cannot be made from sheet metal. PMMA/Acrylic, which is similar in appearance to polycarbonate, is brittle and cannot be bent at room temperature.
Main transformation techniques for polycarbonate resins:
* extrusion into tubes, rods and other profiles including multiwall
* extrusion with cylinders (calenders) into sheets (0.5–20 mm (0.020–0.787 in)) and films (below 1 mm (0.039 in)), which can be used directly or formed into other shapes by thermoforming or secondary fabrication techniques such as bending, drilling, or routing. Because of its chemical properties, polycarbonate is not well suited to laser cutting.
* injection molding into ready articles
Polycarbonate may become brittle when exposed to ionizing radiation at doses above about 25 kGy (1 gray corresponds to 1 joule of absorbed energy per kilogram).
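Since the gray is defined as one joule of absorbed energy per kilogram, that threshold can be put in energy terms with a line of arithmetic. The sketch below (Python) does the conversion; the 0.5 kg part mass is an assumed example value, not a figure from the text.

```python
# Minimal sketch: relate an absorbed dose in kilograys to total deposited energy.
# Assumption: the 0.5 kg part mass is an illustrative value, not from the text.

def absorbed_energy_joules(dose_kgy: float, mass_kg: float) -> float:
    """Total energy (J) deposited in a part of mass_kg kilograms at dose_kgy kilograys."""
    return dose_kgy * 1000.0 * mass_kg  # 1 kGy = 1000 J per kg

dose = 25.0   # kGy, the embrittlement threshold cited above
mass = 0.5    # kg, assumed example part
print(f"{dose} kGy into a {mass} kg part deposits {absorbed_energy_joules(dose, mass):.0f} J")
# -> 12500 J
```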
Applications:
Electronic components
Polycarbonate is mainly used for electronic applications that capitalize on its collective safety features. Being a good electrical insulator and having heat-resistant and flame-retardant properties, it is used in various products associated with electrical and telecommunications hardware. It can also serve as a dielectric in high-stability capacitors. However, commercial manufacture of polycarbonate capacitors mostly stopped after sole manufacturer Bayer AG stopped making capacitor-grade polycarbonate film at the end of 2000.
Construction materials
The second largest consumer of polycarbonates is the construction industry, e.g. for domelights, flat or curved glazing, roofing sheets and sound walls. Polycarbonates are used to create materials used in buildings that need to be durable but light.
3D Printing
Polycarbonates are used extensively in 3D FDM printing, producing durable strong plastic products with a high melting point. Polycarbonate is relatively difficult for casual hobbyists to print compared to thermoplastics such as Polylactic acid (PLA) or Acrylonitrile butadiene styrene (ABS) because of the high melting point, difficulty with print bed adhesion, tendency to warp during printing, and tendency to absorb moisture in humid environments. Despite these issues, 3D printing using polycarbonates is common in the professional community.
Data storage
A major polycarbonate market is the production of compact discs, DVDs, and Blu-ray discs. These discs are produced by injection-molding polycarbonate into a mold cavity that has on one side a metal stamper containing a negative image of the disc data, while the other mold side is a mirrored surface. Typical products of sheet/film production include applications in advertisement (signs, displays, poster protection).
Automotive, aircraft, and security components
In the automotive industry, injection-molded polycarbonate can produce very smooth surfaces that make it well suited for sputter deposition or evaporation deposition of aluminium without the need for a base coat. Decorative bezels and optical reflectors are commonly made of polycarbonate. Its low weight and high impact resistance have made polycarbonate the dominant material for automotive headlamp lenses; however, headlamp lenses require outer surface coatings because of the polymer's low scratch resistance and susceptibility to ultraviolet degradation (yellowing). The use of polycarbonate in automotive applications is limited to low-stress parts: stress from fasteners, plastic welding, and molding renders polycarbonate susceptible to stress corrosion cracking when it comes into contact with certain accelerants such as salt water and plastisol. It can be laminated to make bullet-proof "glass", although "bullet-resistant" is more accurate for the thinner windows, such as those used in automobiles. The thicker barriers of transparent plastic used in teller's windows and other barriers in banks are also polycarbonate.
So-called "theft-proof" large plastic packaging for smaller items, which cannot be opened by hand, is typically made from polycarbonate.
The canopy of the Lockheed Martin F-22 Raptor jet fighter is made from a piece of high optical quality polycarbonate, and is the largest piece of its type formed in the world.
Niche applications
Polycarbonate, being a versatile material with attractive processing and physical properties, has attracted myriad smaller applications. The use of injection molded drinking bottles, glasses and food containers is common, but the use of BPA in the manufacture of polycarbonate has stirred concerns (see Potential hazards in food contact applications), leading to development and use of "BPA-free" plastics in various formulations.
Laboratory safety goggles
Polycarbonate is commonly used in eye protection, as well as in other projectile-resistant viewing and lighting applications that would normally indicate the use of glass, but require much higher impact-resistance. Polycarbonate lenses also protect the eye from UV light. Many kinds of lenses are manufactured from polycarbonate, including automotive headlamp lenses, lighting lenses, sunglass/eyeglass lenses, swimming goggles and SCUBA masks, and safety glasses/goggles/visors including visors in sporting helmets/masks and police riot gear (helmet visors, riot shields, etc.). Windscreens in small motorized vehicles are commonly made of polycarbonate, such as for motorcycles, ATVs, golf carts, and small airplanes and helicopters.
The light weight of polycarbonate as opposed to glass has led to development of electronic display screens that replace glass with polycarbonate, for use in mobile and portable devices. Such displays include newer e-ink and some LCD screens, though CRT, plasma screen and other LCD technologies generally still require glass for its higher melting temperature and its ability to be etched in finer detail.
As more and more governments are restricting the use of glass in pubs and clubs due to the increased incidence of glassings, polycarbonate glasses are becoming popular for serving alcohol because of their strength, durability, and glass-like feel.
Other miscellaneous items include durable, lightweight luggage, MP3/digital audio player cases, ocarinas, computer cases, riot shields, instrument panels, tealight candle containers and food blender jars. Many toys and hobby items are made from polycarbonate parts, like fins, gyro mounts, and flybar locks in radio-controlled helicopters, and transparent LEGO (ABS is used for opaque pieces).
Standard polycarbonate resins are not suitable for long term exposure to UV radiation. To overcome this, the primary resin can have UV stabilisers added. These grades are sold as UV stabilized polycarbonate to injection moulding and extrusion companies. Other applications, including polycarbonate sheets, may have the anti-UV layer added as a special coating or a coextrusion for enhanced weathering resistance.
Polycarbonate is also used as a printing substrate for nameplates and other forms of industrial-grade under-printed products; the polycarbonate provides a barrier against wear, the elements, and fading.
Medical applications
Many polycarbonate grades are used in medical applications and comply with both ISO 10993-1 and USP Class VI standards (occasionally referred to as PC-ISO). Class VI is the most stringent of the six USP ratings. These grades can be sterilized using steam at 120 °C, gamma radiation, or by the ethylene oxide (EtO) method. Dow Chemical strictly limits all its plastics with regard to medical applications. Aliphatic polycarbonates have been developed with improved biocompatibility and degradability for nanomedicine applications.
Mobile phones
Some major smartphone manufacturers have used polycarbonate. Nokia used polycarbonate in its phones starting with the N9's unibody case in 2011, a practice continued with various phones in the Lumia series. Samsung started using polycarbonate with the Galaxy S III's HyperGlaze-branded removable battery cover in 2012 and continued the practice with various phones in the Galaxy series. Apple started using polycarbonate with the iPhone 5C's unibody case in 2013.
Benefits over glass and metal back covers include durability against shattering (weakness of glass), bending and scratching (weakness of metal), shock absorption, low manufacturing costs, and no interference with radio signals and wireless charging (weakness of metal). Polycarbonate back covers are available in glossy or matte surface textures.
History
Polycarbonates were first discovered in 1898 by Alfred Einhorn, a German scientist working at the University of Munich. However, after 30 years' laboratory research, this class of materials was abandoned without commercialization. Research resumed in 1953, when Hermann Schnell at Bayer in Uerdingen, Germany patented the first linear polycarbonate. The brand name "Makrolon" was registered in 1955.
Also in 1953, and one week after the invention at Bayer, Daniel Fox at General Electric in Schenectady, New York, independently synthesized a branched polycarbonate. Both companies filed for U.S. patents in 1955, and agreed that the company lacking priority would be granted a license to the technology.
Patent priority was resolved in Bayer's favor, and Bayer began commercial production under the trade name Makrolon in 1958. GE began production under the name Lexan in 1960, creating the GE Plastics division in 1973.
After 1970, the original brownish polycarbonate tint was improved to "glass-clear."
1351) Jute
Summary
Jute is a long, soft, shiny bast fiber that can be spun into coarse, strong threads. It is produced from flowering plants in the genus Corchorus, which is in the mallow family Malvaceae. The primary source of the fiber is Corchorus olitorius, but such fiber is considered inferior to that derived from Corchorus capsularis. "Jute" is the name of the plant or fiber used to make burlap, hessian or gunny cloth.
Jute is one of the most affordable natural fibers, and second only to cotton in the amount produced and variety of uses. Jute fibers are composed primarily of the plant materials cellulose and lignin. Jute fiber falls into the bast fiber category (fiber collected from bast, the phloem of the plant, sometimes called the "skin") along with kenaf, industrial hemp, flax (linen), ramie, etc. The industrial term for jute fiber is raw jute. The fibers are off-white to brown, and 1–4 metres (3–13 feet) long. Jute is also called the "golden fiber" for its color and high cash value.
Details
Jute is a vegetable fibre. It is very cheap to produce, and its production levels are similar to those of cotton. It is a bast fibre, like hemp and flax. Coarse fabrics made of jute are called hessian, or burlap in America. Like all natural fibres, jute is biodegradable. "Jute" is the name of the plant or fibre used to make burlap, hessian, or gunny cloth, which is very rough and very difficult to cut or tear.
The jute plant is easily grown in tropical countries like Bangladesh and India. India is the largest producer of jute in the world. Jute is less expensive than cotton, but cotton is better for quality clothes. Jute is used to make various products: packaging materials, jute bags, sacks, expensive carpets, espadrilles, sweaters etc. It is obtained from the bark of the jute plant. Jute plants are easy to grow, have a high yield per acre and, unlike cotton, have little need for pesticides and fertilizers.
In Iran, archaeologists have found jute existing since the Bronze Age.
Jute
Jute, Hindi pat, also called allyott, is either of two species of Corchorus plants—C. capsularis, or white jute, and C. olitorius, including both tossa and daisee varieties—belonging to the hibiscus, or mallow, family (Malvaceae), and their fibre. The latter is a bast fibre; i.e., it is obtained from the inner bast tissue of the bark of the plant’s stem. Jute fibre’s primary use is in fabrics for packaging a wide range of agricultural and industrial commodities that require bags, sacks, packs, and wrappings. Wherever bulky, strong fabrics and twines resistant to stretching are required, jute is widely used because of its low cost. Burlap is made from jute.
Jute has been grown in the Bengal area of India (and of present-day Bangladesh) from ancient times. The export of raw jute from the Indian subcontinent to the Western Hemisphere began in the 1790s. The fibre was used primarily for cordage manufacture until 1822, when commercial yarn manufacture began at Dundee, Scot., which soon became a centre for the industry. India’s own jute-processing industry began in 1855, Calcutta becoming the major centre. After India was partitioned (1947), much of the jute-producing land remained in East Pakistan (now Bangladesh), where new processing facilities were built. Besides the Indian subcontinent, jute is also grown in China and in Brazil. The largest importers of raw jute fibre are Japan, Germany, the United Kingdom, Belgium, and France.
The jute plant, which probably originated on the Indian subcontinent, is an herbaceous annual that grows to an average of 10 to 12 feet (3 to 3.6 metres) in height, with a cylindrical stalk about as thick as a finger. The two species grown for jute fibre are similar and differ only in the shape of their seed pods, growth habit, and fibre characteristics. Most varieties grow best in well-drained, sandy loam and require warm, humid climates with an average monthly rainfall of at least 3 to 4 inches (7.5 to 10 cm) during the growing season. The plant’s light green leaves are 4 to 6 inches (10 to 15 cm) long, about 2 inches (5 cm) wide, have serrated edges, and taper to a point. The plant bears small yellow flowers.
The jute plant’s fibres lie beneath the bark and surround the woody central part of the stem. The fibre strands nearest the bark generally run the full length of the stem. A jute crop is usually harvested when the flowers have been shed but before the plants’ seedpods are fully mature. If jute is cut before then, the fibre is weak; if left until the seed is ripe, the fibre is strong but is coarser and lacks the characteristic lustre.
The fibres are held together by gummy materials; these must be softened, dissolved, and washed away to allow extraction of the fibres from the stem, a process accomplished by steeping the stems in water, or retting. After harvesting, the bundles of stems are placed in the water of pools or streams and are weighted down with stones or earth. They are kept submerged for 10–30 days, during which time bacterial action breaks down the gummy tissues surrounding the fibres. After retting is complete, the fibres are separated from the stalk by beating the root ends with a paddle to loosen them; the stems are then broken off near the root, and the fibre strands are jerked off the stem. The fibres are then washed, dried, sorted, graded, and baled in preparation for shipment to jute mills. In the latter, the fibres are softened by the addition of oil, water, and emulsifiers, after which they are converted into yarn. The latter process involves carding, drawing, roving, and spinning to separate the individual fibre filaments; arrange them in parallel order; blend them for uniformity of colour, strength, and quality; and twist them into strong yarns. Once the yarn has been spun, it can be woven, knitted, twisted, corded, sewn, or braided into finished products.
Jute is used in a wide variety of goods. Jute mats and prayer rugs are common in the East, as are jute-backed carpets worldwide. Jute’s single largest use, however, is in sacks and bags, those of finer quality being called burlap, or hessian. Burlap bags are used to ship and store grain, fruits and vegetables, flour, sugar, animal feeds, and other agricultural commodities. High-quality jute cloths are the principal fabrics used to provide backing for tufted carpets, as well as for hooked rugs (i.e., Oriental rugs). Jute fibres are also made into twines and rough cordage.
1352) Unsaturated polyester
Unsaturated polyester is any of a group of thermosetting resins produced by dissolving a low-molecular-weight unsaturated polyester in a vinyl monomer and then copolymerizing the two to form a hard, durable plastic material. Unsaturated polyesters, usually strengthened by fibreglass or ground mineral, are made into structural parts such as boat hulls, pipes, and countertops.
Unsaturated polyesters are copolyesters—that is, polyesters prepared from a saturated dicarboxylic acid or its anhydride (usually phthalic anhydride) as well as an unsaturated dicarboxylic acid or anhydride (usually maleic anhydride). These two acid constituents are reacted with one or more dialcohols, such as ethylene glycol or propylene glycol, to produce the characteristic ester groups that link the precursor molecules together into long, chainlike, multiple-unit polyester molecules. The maleic anhydride units of this copolyester are unsaturated because they contain carbon-carbon double bonds that are capable of undergoing further polymerization under the proper conditions. These conditions are created when the copolyester is dissolved in a monomer such as styrene and the two are subjected to the action of free-radical initiators. The mixture, at this point usually poured into a mold, then copolymerizes rapidly to form a three-dimensional network structure that bonds well with fibres or other reinforcing materials.

The principal products are boat hulls, appliances, business machines, automobile parts, automobile-body patching compounds, tubs and shower stalls, flooring, translucent paneling, storage tanks, corrosion-resistant ducting, and building components. Unsaturated polyesters filled with ground limestone or other minerals are cast into kitchen countertops and bathroom vanities. Bowling balls are made from unsaturated polyesters cast into molds with no reinforcement.
1353) Carding
Summary
Carding, in textile production, is a process of separating individual fibres, using a series of dividing and redividing steps, that causes many of the fibres to lie parallel to one another while also removing most of the remaining impurities. Carding may be done by hand, using hand carders (pinned wooden paddles that are not unlike steel dog brushes) or drum carders (in which washed wool, fleece, or other materials are fed through one or more pinned rollers) to prepare the fibres for spinning, felting, or other fabric- or cloth-making activities.
Cotton, wool, waste silk, other fibrous plant materials and animal fur and hair, and artificial staple are subjected to carding. Carding produces a thin sheet of uniform thickness that is then condensed to form a thick continuous untwisted strand called sliver. When very fine yarns are desired, carding is followed by combing, a process that removes short fibres, leaving a sliver composed entirely of long fibres, all laid parallel and smoother and more lustrous than uncombed types. Carded and combed sliver is then spun.
Details
Carding is a mechanical process that disentangles, cleans and intermixes fibres to produce a continuous web or sliver suitable for subsequent processing. This is achieved by passing the fibres between differentially moving surfaces covered with "card clothing", a firm flexible material embedded with metal pins. It breaks up locks and unorganised clumps of fibre and then aligns the individual fibres to be parallel with each other. In preparing wool fibre for spinning, carding is the step that comes after teasing.
The word is derived from the Latin Carduus meaning thistle or teasel, as dried vegetable teasels were first used to comb the raw wool before technological advances led to the use of machines.
Overview
These ordered fibres can then be passed on to other processes that are specific to the desired end use of the fibre: cotton, batting, felt, woollen or worsted yarn, etc. Carding can also be used to create blends of different fibres or different colours. When blending, the carding process combines the different fibres into a homogeneous mix. Commercial cards also have rollers and systems designed to remove some vegetable matter contaminants from the wool.
Common to all carders is card clothing. Card clothing is made from a sturdy flexible backing in which closely spaced wire pins are embedded. The shape, length, diameter, and spacing of these wire pins are dictated by the card designer and the particular requirements of the application where the card cloth will be used. A later form of card clothing, developed during the latter half of the 19th century and found only on commercial carding machines, consisted of a single piece of serrated wire wrapped around a roller; it became known as metallic card clothing.
Carding machines are known as cards. Fibre may be carded by hand for hand spinning.
History
Science historian Joseph Needham ascribes the invention of bow-instruments used in textile technology to India. The earliest evidence for using bow-instruments for carding comes from India (2nd century CE). These carding devices, called kaman (bow) and dhunaki, would loosen the texture of the fibre by means of a vibrating string.
At the turn of the eighteenth century, wool in England was being carded using pairs of hand cards, in a two-stage process: 'working' with the cards opposed and 'stripping' where they are in parallel.
In 1748 Lewis Paul of Birmingham, England, invented two hand driven carding machines. The first used a coat of wires on a flat table moved by foot pedals. This failed. On the second, a coat of wire slips was placed around a card which was then wrapped around a cylinder. Daniel Bourn obtained a similar patent in the same year, and probably used it in his spinning mill at Leominster, but this burnt down in 1754. The invention was later developed and improved by Richard Arkwright and Samuel Crompton. Arkwright's second patent (of 1775) for his carding machine was subsequently declared invalid (1785) because it lacked originality.
From the 1780s, the carding machines were set up in mills in the north of England and mid-Wales. Priority was given to cotton but woollen fibres were being carded in Yorkshire in 1780. With woollen, two carding machines were used: the first or the scribbler opened and mixed the fibres, the second or the condenser mixed and formed the web. The first in Wales was in a factory at Dolobran near Meifod in 1789. These carding mills produced yarn particularly for the Welsh flannel industry.
In 1834 James Walton invented the first practical machines to use a wire card. He patented this machine and also a new form of card with layers of cloth and rubber. The combination of these two inventions became the standard for the carding industry, using machines first built by Parr, Curtis and Walton in Ancoats, and from 1857 by James Walton & Sons at Haughton Dale.
By 1838, the Spen Valley, centred on Cleckheaton, had at least 11 card clothing factories, and by 1893 it was generally accepted as the card-cloth capital of the world, though by 2008 only two manufacturers of metallic and flexible card clothing remained in England: Garnett Wire Ltd., dating back to 1851, and Joseph Sellers & Son Ltd, established in 1840.
Baird from Scotland took carding to Leicester, Massachusetts, in the 1780s. In the 1890s, the town produced one-third of all hand and machine cards in North America. John and Arthur Slater, from Saddleworth, went over to work with Slater in 1793.
A 1780s scribbling mill would be driven by a water wheel. There were 170 scribbling mills around Leeds at that time. Each scribbler would require 15–45 horsepower (11–34 kW) to operate. Modern machines are driven by belting from an electric motor or an overhead shaft via two pulleys.
Tools
Predating mechanised weaving, hand-loom weaving was a cottage industry that used the same processes but on a smaller scale. These skills have survived as an artisan craft in less developed societies, and as an art form and hobby in advanced societies.
Hand carders
Hand cards are typically square or rectangular paddles manufactured in a variety of sizes from 2 by 2 inches (5.1 cm × 5.1 cm) to 4 by 8 inches (10 cm × 20 cm). The working face of each paddle can be flat or cylindrically curved and wears the card cloth. Small cards, called flick cards, are used to flick the ends of a lock of fibre, or to tease out some strands for spinning off.
A pair of cards is used to brush the wool between them until the fibres are more or less aligned in the same direction. The aligned fibre is then peeled from the card as a rolag. Carding is an activity normally done outside or over a drop cloth, depending on the wool's cleanliness.
Carding of wool can either be done "in the grease" or not, depending on the type of machine and on the spinner's preference. "In the grease" means that the lanolin that naturally comes with the wool has not been washed out, leaving the wool with a slightly greasy feel. The large drum carders do not tend to get along well with lanolin, so most commercial worsted and woollen mills wash the wool before carding. Hand carders (and small drum carders too, though the directions may not recommend it) can be used to card lanolin rich wool.
Drum carders
The simplest machine carder is the drum carder. Most drum carders are hand-cranked, but some are powered by an electric motor. These machines generally have two rollers, or drums, covered with card clothing. The licker-in, or smaller roller, meters fibre from the infeed tray onto the larger storage drum. The two rollers are connected to each other by a belt or chain drive so that their relative speeds cause the storage drum to gently pull fibres from the licker-in. This pulling straightens the fibres and lays them between the wire pins of the storage drum's card cloth. Fibre is added until the storage drum's card cloth is full. A gap in the card cloth facilitates removal of the batt when the card cloth is full.
Some drum carders have a soft-bristled brush attachment that presses the fibre into the storage drum. This attachment serves to condense the fibres already in the card cloth and adds a small amount of additional straightening to the condensed fibre.
Cottage carders
Cottage carding machines differ significantly from the simple drum card. These carders do not store fibre in the card cloth as the drum carder does but, rather, fibre passes through the workings of the carder for storage or for additional processing by other machines.
A typical cottage carder has a single large drum (the swift) accompanied by a pair of in-feed rollers (nippers), one or more pairs of worker and stripper rollers, a fancy, and a doffer. In-feed to the carder is usually accomplished by hand or by conveyor belt and often the output of the cottage carder is stored as a batt or further processed into roving and wound into bumps with an accessory bump winder.
Raw fibre, placed on the in-feed table or conveyor, is moved to the nippers, which restrain and meter the fibre onto the swift. As the fibres are transferred to the swift, many are straightened and laid into the swift's card cloth. These fibres will be carried past the worker/stripper rollers to the fancy.
As the swift carries the fibres forward, from the nippers, those fibres that are not yet straightened are picked up by a worker and carried over the top to its paired stripper. Relative to the surface speed of the swift, the worker turns quite slowly. This has the effect of reversing the fibre. The stripper, which turns at a higher speed than the worker, pulls fibres from the worker and passes them to the swift. The stripper's relative surface speed is slower than the swift's so the swift pulls the fibres from the stripper for additional straightening.
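To make the surface-speed comparisons in the preceding paragraph concrete, here is a small Python sketch. The roller diameters and rotation speeds are illustrative assumptions, not specifications from the text; the point is simply that, at the two transfer points described above, the faster-moving surface pulls the fibre from the slower one.

```python
import math

# Illustrative sketch only: the diameters and rotation speeds below are assumed
# example values, not specifications from the text.
rollers = {
    "swift":    {"diameter_m": 1.20, "rpm": 180},
    "worker":   {"diameter_m": 0.18, "rpm": 12},
    "stripper": {"diameter_m": 0.10, "rpm": 300},
}

def surface_speed(diameter_m: float, rpm: float) -> float:
    """Circumferential speed in m/s: pi * diameter * revolutions per second."""
    return math.pi * diameter_m * rpm / 60.0

speeds = {name: surface_speed(r["diameter_m"], r["rpm"]) for name, r in rollers.items()}
for name, v in sorted(speeds.items(), key=lambda kv: kv[1]):
    print(f"{name:9s} {v:6.2f} m/s")

# As described above, fibre moves toward the faster surface at these two points:
assert speeds["stripper"] > speeds["worker"]   # the stripper pulls fibre off the worker
assert speeds["swift"] > speeds["stripper"]    # the swift pulls fibre off the stripper
```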
Straightened fibres are carried by the swift to the fancy. The fancy's card cloth is designed to engage with the swift's card cloth so that the fibres are lifted to the tips of the swift's card cloth and carried by the swift to the doffer. The fancy and the swift are the only rollers in the carding process that actually touch.
The slowly turning doffer removes the fibres from the swift and carries them to the fly comb where they are stripped from the doffer. A fine web of more or less parallel fibre, a few fibres thick and as wide as the carder's rollers, exits the carder at the fly comb by gravity or other mechanical means for storage or further processing.
1354) Cardboard
Cardboard is a generic term for heavy paper-based products. The construction can range from a thick paper known as paperboard to corrugated fiberboard, which is made of multiple plies of material. Natural cardboards can range from grey to light brown in color, depending on the specific product; dyes, pigments, printing, and coatings are available.
The term "cardboard" has general use in English and French, but the term cardboard is deprecated in commerce and industry as not adequately defining a specific product. Material producers, container manufacturers, packaging engineers, and standards organizations, use more specific terminology.
Statistics
In 2020, the United States hit a record high in its yearly use of one of the most ubiquitous manufactured materials on earth: cardboard. With around 80 per cent of all products sold in the United States being packaged in cardboard, over 120 billion pieces were used that year. In the same year, over 13,000 separate pieces of consumer cardboard packaging were thrown away by American households; combined with all paper products, this constitutes almost 42 per cent of all solid waste generated in the United States annually.
However, despite the sheer magnitude of paper waste, the vast majority of it is composed of one of the most successful and sustainable packaging materials of modern times: corrugated cardboard, known industrially as corrugated fiberboard.
Types
Various types of cards are available, which may be called "cardboard". Included are: thick paper (of various types) or pasteboard used for business cards, aperture cards, postcards, playing cards, catalog covers, binder's board for bookbinding, scrapbooking, and other uses which require higher durability than regular paper.
Paperboard
Paperboard is a paper-based material, usually more than about ten mils (0.010 inches (0.25 mm)) thick. It is often used for folding cartons, set-up boxes, carded packaging, etc. Configurations of paperboard include:
* Containerboard, used in the production of corrugated fiberboard.
* Folding boxboard, comprising multiple layers of chemical and mechanical pulp.
* Solid bleached board, made purely from bleached chemical pulp and usually having a mineral or synthetic pigment coating.
* Solid unbleached board, typically made of unbleached chemical pulp.
* White lined chipboard, typically made from layers of waste paper or recycled fibers, most often with two to three layers of coating on the top and one layer on the reverse side. Because of its recycled content it will be grey from the inside.
* Binder's board, a paperboard used in bookbinding for making hardcovers.
Currently, materials falling under these names may be made without using any actual paper.
Corrugated fiberboard
Corrugated fiberboard is a combination of paperboards, usually two flat liners and one inner fluted corrugated medium. It is often used for making corrugated boxes for shipping or storing products. This type of cardboard is also used by artists as original material for sculpting.
Recycling
Most types of cardboard are recyclable. Boards that are laminates, wax coated, or treated for wet-strength are often more difficult to recycle. Clean cardboard (i.e., cardboard that has not been subject to chemical coatings) "is usually worth recovering, although often the difference between the value it realizes and the cost of recovery is marginal". Cardboard can be recycled for industrial or domestic use. For example, cardboard may be composted or shredded for animal bedding.
History
The material was first made in France in 1751 by a pupil of Réaumur and was used to reinforce playing cards. The term cardboard has been used since at least 1848, when Anne Brontë mentioned it in her novel, The Tenant of Wildfell Hall. The Kellogg brothers first used paperboard cartons to hold their flaked corn cereal, and later, when they began marketing it to the general public, a heat-sealed bag of wax paper was wrapped around the outside of the box and printed with their brand name. This development marked the origin of the cereal box, though in modern times the sealed bag is plastic and is kept inside the box. The Kieckhefer Container Company, run by John W. Kieckhefer, was another early American packaging industry pioneer. It excelled in the use of fiber shipping containers, particularly the paper milk carton.
1355) Tarpaulin
Summary
A tarpaulin is a large piece of waterproof material (such as plastic or canvas) that is used to cover things and keep them dry.
Details
A tarpaulin or tarp is a large sheet of strong, flexible, water-resistant or waterproof material, often cloth such as canvas or polyester coated with polyurethane, or made of plastics such as polyethylene. Tarpaulins often have reinforced grommets at the corners and along the sides to form attachment points for rope, allowing them to be tied down or suspended.
Inexpensive modern tarpaulins are made from woven polyethylene; this material is so associated with tarpaulins that it has become colloquially known in some quarters as polytarp.
Uses
Tarpaulins are used in many ways to protect persons and things from wind, rain, and sunlight. They are used during construction or after disasters to protect partially built or damaged structures, to prevent mess during painting and similar activities, and to contain and collect debris. They are used to protect the loads of open trucks and wagons, to keep wood piles dry, and for shelters such as tents or other temporary structures.
Tarpaulins are also used for advertisement printing, most notably for billboards. Perforated tarpaulins are typically used for medium to large advertising, or for protection on scaffoldings; the aim of the perforations (from 20% to 70%) is to reduce wind vulnerability.
Polyethylene tarpaulins have also proven to be a popular source when an inexpensive, water-resistant fabric is needed. Many amateur builders of plywood sailboats turn to polyethylene tarpaulins for making their sails, as it is inexpensive and easily worked. With the proper type of adhesive tape, it is possible to make a serviceable sail for a small boat with no sewing.
Plastic tarps are sometimes used as a building material in communities of indigenous North Americans. Tipis made with tarps are known as tarpees.
Types
Tarpaulins can be classified based on a diversity of factors, such as material type (polyethylene, canvas, vinyl, etc.), thickness, which is generally measured in mils or generalized into categories (such as "regular duty", "heavy duty", "super heavy duty", etc.), and grommet strength (simple vs. reinforced), among others.
Actual tarp sizes are generally about three to five percent smaller in each dimension than the nominal size; for example, a tarp nominally 20 ft × 20 ft (6.1 m × 6.1 m) will actually measure about 19 ft × 19 ft (5.8 m × 5.8 m). Grommets may be aluminum, stainless steel, or other materials. Grommet-to-grommet distances are typically between 18 in (460 mm) and 5 ft (1.5 m). The weave count is often between 8 and 12 per square inch: the greater the count, the greater the strength. Tarps may also be washable or non-washable, waterproof or non-waterproof, and mildewproof or non-mildewproof. Tarp flexibility is especially significant under cold conditions.
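As a small illustration of the nominal-versus-actual sizing described above, the sketch below shrinks a nominal size by a chosen percentage in each dimension. The function name is mine, and the default 5 % shrinkage is simply the upper end of the quoted range.

```python
def actual_tarp_size(nominal_ft, shrink_pct=5.0):
    """Estimate the real cut size of a tarp sold by nominal dimensions.

    shrink_pct is the assumed per-dimension reduction (the text quotes roughly 3-5 %).
    """
    factor = 1.0 - shrink_pct / 100.0
    return tuple(round(d * factor, 1) for d in nominal_ft)

print(actual_tarp_size((20.0, 20.0)))  # -> (19.0, 19.0), matching the 20 ft x 20 ft example above
```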
Type of material:
Polyethylene
A polyethylene tarpaulin ("polytarp") is not a traditional fabric, but rather, a laminate of woven and sheet material. The center is loosely woven from strips of polyethylene plastic, with sheets of the same material bonded to the surface. This creates a fabric-like material that resists stretching well in all directions and is waterproof. Sheets can be either of low density polyethylene (LDPE) or high density polyethylene (HDPE). When treated against ultraviolet light, these tarpaulins can last for years exposed to the elements, but non-UV treated material will quickly become brittle and lose strength and water resistance if exposed to sunlight.
Canvas
Canvas tarpaulins are not 100% waterproof, though they are water-resistant. A small amount of water for a short period will not affect them, but standing water, or water that cannot quickly drain away, will eventually drip through a canvas tarp.
Vinyl
Polyvinyl chloride ("vinyl") tarpaulins are industrial grade and intended for heavy-duty use. They are constructed of 10 oz/sq yd (340 g/sq m) coated yellow vinyl, which makes them waterproof and gives them high abrasion resistance and tear strength. They resist oil, acid, grease, and mildew. Vinyl tarps are well suited to agriculture, construction, industrial use, trucking, flood barriers, and temporary roof repair.
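The quoted fabric weight can be checked with a straightforward unit conversion; the sketch below converts ounces per square yard to grams per square metre using standard conversion factors.

```python
OZ_TO_G = 28.3495          # grams per avoirdupois ounce
SQYD_TO_SQM = 0.83612736   # square metres per square yard

def oz_per_sqyd_to_g_per_sqm(w: float) -> float:
    """Convert a fabric weight from oz/sq yd to g/sq m."""
    return w * OZ_TO_G / SQYD_TO_SQM

print(round(oz_per_sqyd_to_g_per_sqm(10)))  # -> 339, roughly the 340 g/sq m quoted above
```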
Silnylon
Tarp tents may be made of silnylon.
U.S. color scheme
For years manufacturers have used a color code to indicate the grade of tarpaulins, but not all manufacturers follow this traditional method of grading. Following this color-coded system, blue indicates a lightweight tarp, and typically has a weave count of 8×8 and a thickness of 0.005–0.006 in (0.13–0.15 mm). Silver is a heavy-duty tarp and typically has a weave count of 14×14 and a thickness of 0.011–0.012 in (0.28–0.30 mm).
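Because not every manufacturer follows it, the traditional color code is best treated as a rough lookup rather than a specification. A minimal sketch encoding just the two grades mentioned above might look like this (the field names are mine, chosen for illustration).

```python
# Minimal lookup of the traditional (not universal) U.S. tarp color code,
# covering only the two grades described above; field names are illustrative.
TARP_GRADES = {
    "blue":   {"duty": "lightweight", "weave": "8x8",   "thickness_in": (0.005, 0.006)},
    "silver": {"duty": "heavy-duty",  "weave": "14x14", "thickness_in": (0.011, 0.012)},
}

def describe(color: str) -> str:
    g = TARP_GRADES[color]
    lo, hi = g["thickness_in"]
    return f"{color}: {g['duty']}, {g['weave']} weave, {lo}-{hi} in thick"

print(describe("blue"))
print(describe("silver"))
```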
1356) Archeology
Summary
Archaeology or archeology is the scientific study of human activity through the recovery and analysis of material culture. The archaeological record consists of artifacts, architecture, biofacts or ecofacts, sites, and cultural landscapes. Archaeology can be considered both a social science and a branch of the humanities. In Europe it is often viewed as either a discipline in its own right or a sub-field of other disciplines, while in North America archaeology is a sub-field of anthropology.
Archaeologists study human prehistory and history, from the development of the first stone tools at Lomekwi in East Africa 3.3 million years ago up until recent decades. Archaeology is distinct from palaeontology, which is the study of fossil remains. Archaeology is particularly important for learning about prehistoric societies, for which, by definition, there are no written records. Prehistory includes over 99% of the human past, from the Paleolithic until the advent of literacy in societies around the world. Archaeology has various goals, which range from understanding culture history to reconstructing past lifeways to documenting and explaining changes in human societies through time. Derived from the Greek, the term archaeology literally means “the study of ancient history.”
The discipline involves surveying, excavation and eventually analysis of data collected to learn more about the past. In broad scope, archaeology relies on cross-disciplinary research.
Archaeology developed out of antiquarianism in Europe during the 19th century, and has since become a discipline practiced around the world. Archaeology has been used by nation-states to create particular visions of the past. Since its early development, various specific sub-disciplines of archaeology have developed, including maritime archaeology, feminist archaeology and archaeoastronomy, and numerous different scientific techniques have been developed to aid archaeological investigation. Nonetheless, today, archaeologists face many problems, such as dealing with pseudoarchaeology, the looting of artifacts, a lack of public interest, and opposition to the excavation of human remains.
Details
Archaeology, also spelled archeology, is the scientific study of the material remains of past human life and activities. These include human artifacts from the very earliest stone tools to the man-made objects that are buried or thrown away in the present day: everything made by human beings—from simple tools to complex machines, from the earliest houses and temples and tombs to palaces, cathedrals, and pyramids. Archaeological investigations are a principal source of knowledge of prehistoric, ancient, and extinct culture. The word comes from the Greek archaia (“ancient things”) and logos (“theory” or “science”).
The archaeologist is first a descriptive worker: he has to describe, classify, and analyze the artifacts he studies. An adequate and objective taxonomy is the basis of all archaeology, and many good archaeologists spend their lives in this activity of description and classification. But the main aim of the archaeologist is to place the material remains in historical contexts, to supplement what may be known from written sources, and, thus, to increase understanding of the past. Ultimately, then, the archaeologist is a historian: his aim is the interpretive description of the past of man.
Increasingly, many scientific techniques are used by the archaeologist, and he uses the scientific expertise of many persons who are not archaeologists in his work. The artifacts he studies must often be studied in their environmental contexts, and botanists, zoologists, soil scientists, and geologists may be brought in to identify and describe plants, animals, soils, and rocks. Radioactive carbon dating, which has revolutionized much of archaeological chronology, is a by-product of research in atomic physics. But although archaeology uses extensively the methods, techniques, and results of the physical and biological sciences, it is not a natural science; some consider it a discipline that is half science and half humanity. Perhaps it is more accurate to say that the archaeologist is first a craftsman, practicing many specialized crafts (of which excavation is the most familiar to the general public), and then a historian.
The justification for this work is the justification of all historical scholarship: to enrich the present by knowledge of the experiences and achievements of our predecessors. Because it concerns things people have made, the most direct findings of archaeology bear on the history of art and technology; but by inference it also yields information about the society, religion, and economy of the people who created the artifacts. Also, it may bring to light and interpret previously unknown written documents, providing even more certain evidence about the past.
But no one archaeologist can cover the whole range of man’s history, and there are many branches of archaeology divided by geographical areas (such as classical archaeology, the archaeology of ancient Greece and Rome; or Egyptology, the archaeology of ancient Egypt) or by periods (such as medieval archaeology and industrial archaeology). Writing began 5,000 years ago in Mesopotamia and Egypt; its beginnings were somewhat later in India and China, and later still in Europe. The aspect of archaeology that deals with the past of man before he learned to write has, since the middle of the 19th century, been referred to as prehistoric archaeology, or prehistory. In prehistory the archaeologist is paramount, for here the only sources are material and environmental.
1357) Slate
Summary
Slate is a fine-grained, foliated, homogeneous metamorphic rock derived from an original shale-type sedimentary rock composed of clay or volcanic ash through low-grade regional metamorphism. It is the finest grained foliated metamorphic rock. Foliation may not correspond to the original sedimentary layering, but instead is in planes perpendicular to the direction of metamorphic compression.
The foliation in slate is called "slaty cleavage". It is caused by strong compression causing fine grained clay flakes to regrow in planes perpendicular to the compression. When expertly "cut" by striking parallel to the foliation, with a specialized tool in the quarry, many slates will display a property called fissility, forming smooth flat sheets of stone which have long been used for roofing, floor tiles, and other purposes. Slate is frequently grey in color, especially when seen, en masse, covering roofs. However, slate occurs in a variety of colors even from a single locality; for example, slate from North Wales can be found in many shades of grey, from pale to dark, and may also be purple, green or cyan. Slate is not to be confused with shale, from which it may be formed, or schist.
The word "slate" is also used for certain types of object made from slate rock. It may mean a single roofing tile made of slate, or a writing slate. They were traditionally a small, smooth piece of the rock, often framed in wood, used with chalk as a notepad or notice board, and especially for recording charges in pubs and inns. The phrases "clean slate" and "blank slate" come from this usage.
Details
Slate is fine-grained, clayey metamorphic rock that cleaves, or splits, readily into thin slabs having great tensile strength and durability; some other rocks that occur in thin beds are improperly called slate because they can be used for roofing and similar purposes. True slates do not, as a rule, split along the bedding plane but along planes of cleavage, which may intersect the bedding plane at high angles. Slate was formed under low-grade metamorphic conditions—i.e., under relatively low temperature and pressure. The original material was a fine clay, sometimes with sand or volcanic dust, usually in the form of a sedimentary rock (e.g., a mudstone or shale). The parent rock may be only partially altered so that some of the original mineralogy and sedimentary bedding are preserved; the bedding of the sediment as originally laid down may be indicated by alternating bands, sometimes seen on the cleavage faces. Cleavage is a super-induced structure, the result of pressure acting on the rock at some time when it was deeply buried beneath the Earth’s surface. On this account, slates occur chiefly among older rocks, although some occur in regions in which comparatively recent rocks have been folded and compressed as a result of mountain-building movements. The direction of cleavage depends upon the direction of the stresses applied during metamorphism.
Slates may be black, blue, purple, red, green, or gray. Dark slates usually owe their colour to carbonaceous material or to finely divided iron sulfide. Reddish and purple varieties owe their colour to the presence of hematite (iron oxide), and green varieties owe theirs to the presence of much chlorite, a green micaceous clay mineral. The principal minerals in slate are mica (in small, irregular scales), chlorite (in flakes), and quartz (in lens-shaped grains).
Slates are split from quarried blocks about 7.5 cm (3 inches) thick. A chisel, placed in position against the edge of the block, is lightly tapped with a mallet; a crack appears in the direction of cleavage, and slight leverage with the chisel serves to split the block into two pieces with smooth and even surfaces. This is repeated until the original block is converted into 16 or 18 pieces, which are afterward trimmed to size either by hand or by means of machine-driven rotating knives.
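The splitting sequence just described is essentially repeated halving, and the arithmetic is easy to check. The short sketch below works through the 16-slab case for a 7.5 cm block; the 18-piece figure quoted above cannot come from pure halving, so it is left out of this illustration.

```python
# Arithmetic sketch of the splitting described above: a 75 mm block halved
# repeatedly until 16 slabs remain.
block_thickness_mm = 75.0
pieces = 1
rounds = 0
while pieces < 16:
    pieces *= 2      # every piece is split into two along the cleavage
    rounds += 1

print(f"{rounds} rounds of splitting -> {pieces} slabs "
      f"of about {block_thickness_mm / pieces:.1f} mm each")
# -> 4 rounds of splitting -> 16 slabs of about 4.7 mm each
```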
Slate is sometimes marketed as dimension slate and crushed slate (granules and flour). Dimension slate is used mainly for electrical panels, laboratory tabletops, roofing and flooring, and blackboards. Crushed slate is used on composition roofing, in aggregates, and as a filler. Principal production in the United States is from Pennsylvania and Vermont; northern Wales provides most of the slate used in the British Isles.
Additional Information
The word "slate" is also used for certain types of object made from slate rock. It may mean a single roofing tile made of slate, or a writing slate. This was traditionally a small smooth piece of the rock, often framed in wood, used with chalk as a notepad or noticeboard, and especially for recording charges in pubs and inns. The phrases "clean slate" and "blank slate" come from this usage.Slate is a low grade metamorphic rock which is formed by the alteration of shale or mudstone by regional metamorphism. Slate is a fine grained foliated rock and is the finest grained foliated metamorphic rock. Foliation is not formed along the original sedimentary layering but is the response of metamorphic compression. The strong foliation is called slaty cleavage which is the result of compression causing fine grained clay flakes to regrow in planes perpendicular to the compression.
Composition of slate
Slate is primarily composed of clay minerals or micas, depending upon the degree of metamorphism. As temperature and pressure increase, the clay minerals originally deposited are progressively altered into mica. Slate can also contain abundant quartz and small amounts of feldspar, calcite, pyrite, hematite, and other minerals.
How slate forms
Shale is deposited in a sedimentary basin where finer particles are transported by wind or water; these fine grains are then compacted and lithified. Slate forms when such a basin is caught up in a convergent plate boundary: the shale and mudstone are compressed by horizontal forces with minor heating, which modifies the clay minerals. Foliation develops at right angles to the compressive forces of the convergent plate boundary.
Colour of slate
Most slates are grey, ranging from light to dark shades, but green, red, black, purple, and brown slates also occur. The colour of a slate is determined by the amount of iron and organic material present.
Slaty cleavage
Foliation in slate is the result of the parallel orientation of platy minerals in the rock, such as grains of clay and mica. This parallel alignment of minerals gives the rock the ability to break smoothly along planes of foliation.
Uses
Slate is mined throughout the world for use as roofing slate. It is well suited to this use because it can be cut into thin sheets, absorbs minimal moisture, and performs well in contact with freezing water. Slate can also be used for interior flooring, exterior paving, dimension stone, and decorative aggregates.
1358) Calculator
Summary
Calculator is a machine for automatically performing arithmetical operations and certain mathematical functions. Modern calculators are descendants of a digital arithmetic machine devised by Blaise Pascal in 1642. Later in the 17th century, Gottfried Wilhelm Leibniz created a more-advanced machine, and, especially in the late 19th century, inventors produced calculating machines that were smaller and smaller and less and less laborious to use. In the early decades of the 20th century, desktop adding machines and other calculating devices were developed. Some were key-driven, others required a rotating drum to enter sums punched into a keyboard, and later the drum was spun by electric motor.
The development of electronic data-processing systems by the mid-1950s began to hint at obsolescence for mechanical calculators, and the developments of miniature solid-state electronic devices ushered in new calculators for pocket or desk top that, by the late 20th century, could perform simple mathematical functions (e.g., normal and inverse trigonometric functions) in addition to basic arithmetical operations; could store data and instructions in memory registers, providing programming capabilities similar to those of small computers; and could operate many times faster than their mechanical predecessors. Various sophisticated calculators of this type were designed to employ interchangeable preprogrammed software modules capable of 5,000 or more program steps. Some desktop and pocket models were equipped to print their output on a roll of paper; others even had plotting and alphabetic character printing capabilities.
Details
An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics.
The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator company Busicom. Calculators later came into common use within the petroleum industry (oil and gas).
Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most and they became common in schools.
Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and calculator functions are included in almost all personal digital assistant (PDA) type devices, the exceptions being a few dedicated address book and dictionary devices.
In addition to general purpose calculators, there are those designed for specific markets. For example, there are scientific calculators which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher-dimensional Euclidean space. As of 2016, basic calculators cost little, but scientific and graphing models tend to cost more.
With the very wide availability of smartphones, tablet computers and personal computers, dedicated hardware calculators, while still widely used, are less common than they once were. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. By 2007, this had diminished to less than 0.05%.
Design:
Input
Electronic calculators contain a keyboard with buttons for digits and arithmetical operations; some even contain "00" and "000" buttons to make larger or smaller numbers easier to enter. Most basic calculators assign only one digit or operation to each button; however, on more specialized calculators, a single button can perform multiple functions when used in key combinations.
Display output
Calculators usually have liquid-crystal displays (LCD) as output in place of historical light-emitting diode (LED) displays and vacuum fluorescent displays (VFD); details are provided in the section Technical improvements.
Large figures are often used to improve readability, and a decimal separator (usually a point rather than a comma) is used instead of, or in addition to, vulgar fractions. Various symbols for function commands may also be shown on the display. Fractions such as 1⁄3 are displayed as decimal approximations, for example rounded to 0.33333333. Also, some fractions (such as 1⁄7, which is 0.14285714285714 to 14 significant figures) can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers.
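To make the point about decimal approximations concrete, here is a small illustrative sketch (Python is used purely for exposition; no particular calculator's firmware is implied) contrasting rounded decimal output with exact vulgar fractions:

```python
from fractions import Fraction

def show(num, den, places=8):
    """Contrast a calculator-style rounded decimal with the exact vulgar fraction."""
    decimal = round(num / den, places)      # what a basic 8-digit display would show
    exact = Fraction(num, den)              # what a fraction-capable scientific model keeps
    print(f"{num}/{den} -> decimal {decimal}, exact {exact}")

show(1, 3)   # 1/3 -> decimal 0.33333333, exact 1/3
show(1, 7)   # 1/7 -> decimal 0.14285714, exact 1/7
```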
Memory
Calculators also have the ability to store numbers in memory. Basic calculators usually store only one number at a time; more advanced types can store many numbers in variables. These variables can also be used for constructing formulas. Some models can extend memory capacity to store more numbers; the extended memory address is termed an array index.
Power source
Power sources of calculators are batteries, solar cells, or mains electricity (for older models), turned on with a switch or button. Some models have no turn-off button but provide some other way to switch off (for example, being left idle for a moment, having their solar cell covered, or having their lid closed). Crank-powered calculators were also common in the early computer era.
Internal workings
In general, a basic electronic calculator consists of the following components:
* Power source (mains electricity, battery and/or solar cell)
* Keypad (input device) – consists of keys used to input numbers and function commands (addition, multiplication, square-root, etc.)
* Display panel (output device) – displays input numbers, commands and results. Liquid-crystal displays (LCDs), vacuum fluorescent displays (VFDs), and light-emitting diode (LED) displays use seven segments to represent each digit in a basic calculator (see the sketch after this list). Advanced calculators may use dot matrix displays.
* A printing calculator, in addition to a display panel, has a printing unit that prints results in ink onto a roll of paper, using a printing mechanism.
* Processor chip (microprocessor or central processing unit).
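As a toy illustration of the seven-segment representation mentioned in the display item above (the segment names a–g and the lookup table are assumptions made for this sketch, not any real calculator's firmware):

```python
# Hypothetical seven-segment lookup: a=top, b=top-right, c=bottom-right,
# d=bottom, e=bottom-left, f=top-left, g=middle.
SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abged",   "3": "abgcd",  "4": "fgbc",
    "5": "afgcd",  "6": "afgedc", "7": "abc",     "8": "abcdefg", "9": "abfgcd",
}

def segments_for(number: str) -> list[str]:
    """Return, digit by digit, which segments would be lit to show the number."""
    return [SEGMENTS[d] for d in number if d in SEGMENTS]

print(segments_for("1958"))   # ['bc', 'abfgcd', 'afgcd', 'abcdefg']
```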
Numeric representation
Most pocket calculators do all their calculations in binary-coded decimal (BCD) rather than binary. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. (For example, CDs keep the track number in BCD, limiting them to 99 tracks.)
The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.
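A minimal sketch of the packed-BCD idea described above, assuming four bits per decimal digit (Python is used only for illustration; calculator chips implement this in hardware or tight assembly):

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a non-negative integer as packed BCD: one decimal digit per 4-bit nibble."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:                       # pad to a whole number of bytes
        digits.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(digits[0::2], digits[1::2]))

def from_packed_bcd(b: bytes) -> int:
    """Decode packed BCD; each nibble maps directly to one displayed digit."""
    return int("".join(f"{byte >> 4}{byte & 0x0F}" for byte in b))

encoded = to_packed_bcd(1982)
print(encoded.hex())             # '1982' - the hex dump reads like the decimal digits
print(from_packed_bcd(encoded))  # 1982
```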
Where calculators have added functions (such as square root, or trigonometric functions), software algorithms are required to produce high precision results. Sometimes significant design effort is needed to fit all the desired functions in the limited memory space available in the calculator chip, with acceptable calculation time.
Calculators compared to computers
The fundamental difference between a calculator and computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions (such as addition, multiplication, and logarithms) built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functions, sometimes with support for programming languages (such as RPL or TI-BASIC).
For instance, instead of a hardware multiplier, a calculator might implement floating point mathematics with code in read-only memory (ROM), and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require much multiplication. Bit-serial logic designs are more common in calculators, whereas bit-parallel designs dominate general-purpose computers, because a bit-serial design minimizes chip complexity but takes many more clock cycles. This distinction blurs with high-end calculators, which use processor chips associated with computer and embedded-systems design, such as the Z80, MC68000, and ARM architectures, as well as some custom designs specialized for the calculator market.
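The following is a minimal floating-point sketch of the CORDIC idea mentioned above, kept deliberately simple: real calculator chips work in fixed point (often BCD) with a precomputed angle table in ROM, so treat this only as an illustration of the shift-and-add rotations, not as any vendor's implementation.

```python
import math

def cordic_sin_cos(theta: float, iterations: int = 32) -> tuple[float, float]:
    """Approximate (sin, cos) for |theta| <= pi/2 using CORDIC-style micro-rotations,
    which need only additions, subtractions, and halvings (shifts in hardware)."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]   # table kept in ROM on a real chip
    k = 1.0
    for i in range(iterations):                                  # accumulated scaling factor
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                              # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * k, x * k                                          # (sin theta, cos theta)

print(cordic_sin_cos(math.pi / 6))   # approximately (0.5, 0.8660)
```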
1359) Abacus
Summary
Abacus, plural abaci or abacuses, is a calculating device, probably of Babylonian origin, that was long important in commerce. It is the ancestor of the modern calculating machine and computer.
The earliest “abacus” likely was a board or slab on which a Babylonian spread sand in order to trace letters for general writing purposes. The word abacus is probably derived, through its Greek form abakos, from a Semitic word such as the Hebrew ibeq (“to wipe the dust”; noun abaq, “dust”). As the abacus came to be used solely for counting and computing, its form was changed and improved. The sand (“dust”) surface is thought to have evolved into the board marked with lines and equipped with counters whose positions indicated numerical values—i.e., ones, tens, hundreds, and so on. In the Roman abacus the board was given grooves to facilitate moving the counters in the proper files. Another form, common today, has the counters strung on wires.
The abacus, generally in the form of a large calculating board, was in universal use in Europe in the Middle Ages, as well as in the Arab world and in Asia. It reached Japan in the 16th century. The introduction of the Hindu-Arabic notation, with its place value and zero, gradually replaced the abacus, though it was still widely used in Europe as late as the 17th century. The abacus survives today in the Middle East, China, and Japan, but it has been largely replaced by electronic calculators.
Details
The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool which has been used since ancient times. It was used in the ancient Near East, Europe, China, and Russia, centuries before the adoption of the Hindu-Arabic numeral system. The exact origin of the abacus is still unknown. It consists of rows of movable beads, or similar objects, strung on a wire. They represent digits. One of the two numbers is set up, and the beads are manipulated to perform an operation such as addition, or even a square or cube root.
In their earliest designs, the rows of beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Abacuses are still made, often as a bamboo frame with beads sliding on wires. In the ancient world, particularly before the introduction of positional notation, abacuses were a practical calculating tool. The abacus is still used to teach the fundamentals of mathematics to some children, for example, in Russia.
Designs such as the Japanese soroban have been used for practical calculations on numbers with many digits. Any particular abacus design supports multiple methods of calculation, including the four basic operations and square and cube roots. Some of these methods work with non-natural numbers (numbers such as 1.5 and 3⁄4).
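As a small illustrative sketch of how a soroban encodes a number (one "heaven" bead worth five and four "earth" beads worth one per rod; the Python below is just an exposition aid, not part of any standard reference):

```python
def soroban_digits(n: int) -> list[tuple[int, int]]:
    """For each decimal digit of n, give (heaven, earth): how many 5-beads and
    1-beads are pushed toward the beam on the corresponding rod."""
    return [(int(d) // 5, int(d) % 5) for d in str(n)]

print(soroban_digits(1958))   # [(0, 1), (1, 4), (1, 0), (1, 3)]
```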
Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator.
School abacus
Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic.
In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common.
The wireframe may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use. Teaching multiplication, e.g. 6 times 7, may be represented by shifting 7 beads on 6 wires.
The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.
Feynman vs the abacus
Physicist Richard Feynman was noted for facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests between Feynman's pen and paper, and the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for a really difficult challenge, i.e. cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods.
Neurological analysis
Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.
Binary abacus
The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position.
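A tiny sketch of the idea the binary abacus demonstrates, showing how a character's ASCII code is just a row of on/off "beads" (Python used only for illustration):

```python
# Each bit corresponds to one bead position being "on" (1) or "off" (0).
for ch in "PC":
    code = ord(ch)                          # ASCII code point of the character
    print(ch, code, format(code, "08b"))    # e.g. P 80 01010000, C 67 01000011
```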
Visually impaired users
An adapted abacus, invented by Tim Cranmer, and called a Cranmer abacus is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.
Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics) but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.
1360) Difference Engine
Summary
A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s, and was first created by Charles Babbage. The name "difference engine" is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Many of the mathematical functions commonly used in engineering, science, and navigation, including logarithmic and trigonometric functions, can be approximated by polynomials, so a difference engine can compute many useful tables of numbers.
Details
Difference Engine is an early calculating machine, verging on being the first computer, designed and partially built during the 1820s and ’30s by Charles Babbage. Babbage was an English mathematician and inventor; he invented the cowcatcher, reformed the British postal system, and was a pioneer in the fields of operations research and actuarial science. It was Babbage who first suggested that the weather of years past could be read from tree rings. He also had a lifelong fascination with keys, ciphers, and mechanical dolls (automatons).
As a founding member of the Royal Astronomical Society, Babbage had seen a clear need to design and build a mechanical device that could automate long, tedious astronomical calculations. He began by writing a letter in 1822 to Sir Humphry Davy, president of the Royal Society, about the possibility of automating the construction of mathematical tables—specifically, logarithm tables for use in navigation. He then wrote a paper, “On the Theoretical Principles of the Machinery for Calculating Tables,” which he read to the society later that year. (It won the Royal Society’s first Gold Medal in 1823.) Tables then in use often contained errors, which could be a life-and-death matter for sailors at sea, and Babbage argued that, by automating the production of the tables, he could assure their accuracy. Having gained support in the society for his Difference Engine, as he called it, Babbage next turned to the British government to fund development, obtaining one of the world’s first government grants for research and technological development.
Babbage approached the project very seriously: he hired a master machinist, set up a fireproof workshop, and built a dustproof environment for testing the device. Up until then calculations were rarely carried out to more than 6 digits; Babbage planned to produce 20- or 30-digit results routinely. The Difference Engine was a digital device: it operated on discrete digits rather than smooth quantities, and the digits were decimal (0–9), represented by positions on toothed wheels, rather than the binary digits (“bits”) that the German mathematician-philosopher Gottfried Wilhelm von Leibniz had favoured (but did not use) in his Step Reckoner. When one of the toothed wheels turned from 9 to 0, it caused the next wheel to advance one position, carrying the digit, just as Leibniz’s Step Reckoner calculator had operated.
The Difference Engine was more than a simple calculator, however. It mechanized not just a single calculation but a whole series of calculations on a number of variables to solve a complex problem. It went far beyond calculators in other ways as well. Like modern computers, the Difference Engine had storage—that is, a place where data could be held temporarily for later processing—and it was designed to stamp its output into soft metal, which could later be used to produce a printing plate.
Nevertheless, the Difference Engine performed only one operation. The operator would set up all of its data registers with the original data, and then the single operation would be repeatedly applied to all of the registers, ultimately producing a solution. Still, in complexity and audacity of design, it dwarfed any calculating device then in existence.
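The register cascade just described can be sketched in a few lines. This is only a numerical illustration of the method of differences, not a model of Babbage's decimal wheels; the example polynomial x² + x + 41 and the starting register values are chosen purely for the sketch.

```python
def difference_engine(registers, steps):
    """Tabulate a polynomial by the method of differences: register 0 holds the
    current function value, register i holds the i-th forward difference, and the
    single repeated operation adds each register's neighbour into it."""
    regs = list(registers)
    table = [regs[0]]
    for _ in range(steps):
        for i in range(len(regs) - 1):       # cascade the additions, lowest order first
            regs[i] += regs[i + 1]
        table.append(regs[0])
    return table

# f(x) = x^2 + x + 41: f(0) = 41, first difference 2, constant second difference 2.
print(difference_engine([41, 2, 2], 5))      # [41, 43, 47, 53, 61, 71]
```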
The full engine, designed to be room-sized, was never built, at least not by Babbage. Although he received several government grants, they were sporadic—governments changed, funding often ran out, and he had to personally bear some of the financial costs—and he was working at or near the tolerances of the construction methods of the day and ran into numerous construction difficulties. All design and construction ceased in 1833, when Joseph Clement, the machinist responsible for actually building the machine, refused to continue unless he was prepaid. (The completed portion of the Difference Engine is on permanent exhibit at the Science Museum in London.)
1361) Analytical Engine
Summary
The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, which was a design for a simpler mechanical calculator.
The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the logical structure of the Analytical Engine was essentially the same as that which has dominated computer design in the electronic era. The Analytical Engine is one of the most successful achievements of Charles Babbage.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering Analytical Engine in 1837.
Details
Analytical Engine, generally considered the first computer, was designed and partly built by the English inventor Charles Babbage in the 19th century (he worked on it until his death in 1871). While working on the Difference Engine, a simpler calculating machine commissioned by the British government, Babbage began to imagine ways to improve it. Chiefly he thought about generalizing its operation so that it could perform other kinds of calculations. By the time funding ran out for his Difference Engine in 1833, he had conceived of something far more revolutionary: a general-purpose computing machine called the Analytical Engine.
The Analytical Engine was to be a general-purpose, fully program-controlled, automatic mechanical digital computer. It would be able to perform any calculation set before it. There is no evidence that anyone before Babbage had ever conceived of such a device, let alone attempted to build one. The machine was designed to consist of four components: the mill, the store, the reader, and the printer. These components are the essential components of every computer today. The mill was the calculating unit, analogous to the central processing unit (CPU) in a modern computer; the store was where data were held prior to processing, exactly analogous to memory and storage in today’s computers; and the reader and printer were the input and output devices.
As with the Difference Engine, the project was far more complex than anything theretofore built. The store was to be large enough to hold 1,000 50-digit numbers; this was larger than the storage capacity of any computer built before 1960. The machine was to be steam-driven and run by one attendant. The printing capability was also ambitious, as it had been for the Difference Engine: Babbage wanted to automate the process as much as possible, right up to producing printed tables of numbers.
The reader was another new feature of the Analytical Engine. Data (numbers) were to be entered on punched cards, using the card-reading technology of the Jacquard loom. Instructions were also to be entered on cards, another idea taken directly from Joseph-Marie Jacquard. The use of instruction cards would make it a programmable device and far more flexible than any machine then in existence. (In 1843 mathematician Ada Lovelace wrote in her notes for a translation of a French article about the Analytical Engine how the machine could be used to follow a program to calculate Bernoulli numbers. For this, she has been called the first computer programmer.) Another element of programmability was to be its ability to execute instructions in other than sequential order. It was to have a kind of decision-making ability in its conditional control transfer, also known as conditional branching, whereby it would be able to jump to a different instruction depending on the value of some data. This extremely powerful feature was missing in many of the early computers of the 20th century.
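For a flavour of the kind of computation Lovelace described, the short sketch below evaluates the standard Bernoulli-number recurrence with exact fractions. It is not a transcription of her punched-card program (her notes use a different indexing convention); it is only an illustration of the target calculation.

```python
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list[Fraction]:
    """Return B_0 .. B_n via the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 (m >= 1)."""
    B = [Fraction(1)]                              # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))                     # solve the recurrence for B_m
    return B

print(bernoulli(8))   # [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```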
By most definitions, the Analytical Engine was a real computer as understood today—or would have been, had Babbage not run into implementation problems again. Actually building his ambitious design was judged infeasible given the current technology, and Babbage’s failure to generate the promised mathematical tables with his Difference Engine had dampened enthusiasm for further government funding. Indeed, it was apparent to the British government that Babbage was more interested in innovation than in constructing tables.
All the same, Babbage’s Analytical Engine was something new under the sun. Its most revolutionary feature was the ability to change its operation by changing the instructions on punched cards. Until this breakthrough, all the mechanical aids to calculation were merely calculators or, like the Difference Engine, glorified calculators. The Analytical Engine, although not actually completed, was the first machine that deserved to be called a computer.
1362) Time-sharing
Summary
Time-sharing, in data processing, is a method of operation in which multiple users with different programs interact nearly simultaneously with the central processing unit (CPU) of a large-scale digital computer. Because the CPU operates substantially faster than most peripheral equipment (e.g., video display terminals and printers), it has sufficient time to solve several discrete problems during the input/output process. Even though the CPU addresses the problem of each user in sequence, access to and retrieval from the time-sharing system seems instantaneous from the standpoint of remote terminals since the solutions are available to them the moment the problem is completely entered.
Time-sharing was developed during the late 1950s and early ’60s to make more efficient use of expensive processor time. Commonly used time-sharing techniques include multiprocessing, parallel operation, and multiprogramming. Also, many computer networks organized for the purpose of exchanging data and resources are centred on time-sharing systems.
Details
In computing, time-sharing is the sharing of a computing resource among many users at the same time by means of multiprogramming and multi-tasking.
Its emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications.
History:
Batch processing
The earliest computers were extremely expensive devices, and very slow in comparison to later models. Machines were typically dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs. These programs might take hours to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern. Newer batch processing software and methodologies, such as the IBSYS operating system (1960), were developed to decrease these "dead periods" by queuing up programs.
Comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". Programs were submitted to the operations team, which scheduled them to be run. Output (generally printed) was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. Stanford students made a short film humorously critiquing this situation.
The alternative of allowing the user to operate the computer directly was generally far too expensive to consider. This was because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part.
Time-sharing
Time-sharing was developed out of the realization that while any single user would make inefficient use of a computer, a large group of users together would not. This was due to the pattern of interaction: typically, an individual user entered bursts of information followed by long pauses, but a group of users working at the same time would mean that the pauses of one user would be filled by the activity of the others. Given an optimal group size, the overall process could be very efficient. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users.
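A toy round-robin sketch of this interleaving is given below. It is not how CTSS or any specific system scheduled work; the job list, the two-unit quantum, and the function name are assumptions made purely to illustrate how one processor's time can be sliced among several users.

```python
from collections import deque

def round_robin(jobs, quantum=2):
    """Hand out the CPU in fixed time slices so every user sees steady progress
    instead of waiting for the whole batch queue ahead of them to finish."""
    queue = deque(jobs)                       # each job is (user, cpu_units_still_needed)
    timeline = []
    while queue:
        user, remaining = queue.popleft()
        used = min(quantum, remaining)
        timeline.append((user, used))
        if remaining > used:
            queue.append((user, remaining - used))   # unfinished: rejoin the back of the queue
    return timeline

print(round_robin([("alice", 5), ("bob", 3), ("carol", 4)]))
# [('alice', 2), ('bob', 2), ('carol', 2), ('alice', 2), ('bob', 1), ('carol', 2), ('alice', 1)]
```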
The concept is claimed to have been first described by John Backus in the 1954 summer session at MIT, and later by Bob Bemer in his 1957 article "How to consider a computer" in Automatic Control Magazine. In a paper published in December 1958 by W. F. Bauer, he wrote that "The computers would handle a number of problems concurrently. Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies."
Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application for "time-sharing" in February 1959. He gave a paper "Time Sharing in Large Fast Computers" at the first UNESCO Information Processing Conference in Paris in June that year, where he passed the concept on to J. C. R. Licklider. This paper is credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers".
Implementing a system able to take advantage of this was initially difficult. Batch processing was effectively a methodological development on top of the earliest systems. Since computers still ran single programs for single users at any time, the primary change with batch processing was the time delay between one program and the next. Developing a system that supported multiple users at the same time was a completely different concept. The "state" of each user and their programs would have to be kept in the machine, and then switched between quickly. This would take up computer cycles, and on the slow machines of the era this was a concern. However, as computers rapidly improved in speed, and especially in size of core memory in which users' states were retained, the overhead of time-sharing continually decreased, relatively speaking.
The first project to implement time-sharing of user programs was initiated by John McCarthy at MIT in 1959, initially planned on a modified IBM 704, and later on an additionally modified IBM 709 (one of the first computers powerful enough for time-sharing). One of the deliverables of the project, known as the Compatible Time-Sharing System or CTSS, was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing system and remained in use until 1973. Another contender for the first demonstrated time-sharing system was PLATO II, created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961. But this was a special-purpose system. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years. JOSS began time-sharing service in January 1964.
The first commercially successful time-sharing system was the Dartmouth Time Sharing System.
Development
Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (centralized computing systems), which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user. Later technology in interconnections were interrupt driven, and some of these used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were utilized on college properties in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.
The Dartmouth Time Sharing System's creators wrote in 1968 that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer". Conversely, timesharing users thought that their terminal was the computer.
With the rise of microcomputing in the early 1980s, time-sharing became less significant, because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle.
However, the Internet brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. As with the early serial terminals, web sites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many customers at once, usually with no perceptible communication delays, unless the servers start to get very busy.
Time-sharing business:
Genesis
In the 1960s, several companies started providing time-sharing services as service bureaus. Early systems used Teletype Model 33 KSR or ASR or Teletype Model 35 KSR or ASR machines in ASCII environments, and IBM Selectric typewriter-based terminals (especially the IBM 2741) with two different seven-bit codes. They would connect to the central computer by dial-up Bell 103A modem or acoustically coupled modems operating at 10–15 characters per second. Later terminals and modems supported 30–120 characters per second. The time-sharing system would provide a complete operating environment, including a variety of programming language processors, various software packages, file storage, bulk printing, and off-line storage. Users were charged rent for the terminal, a charge for hours of connect time, a charge for seconds of CPU time, and a charge for kilobyte-months of disk storage.
Common systems used for time-sharing included the SDS 940, the PDP-10, the IBM 360, and the GE-600 series. Companies providing this service included GE's GEISCO, the IBM subsidiary The Service Bureau Corporation, Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), Bolt, Beranek, and Newman (BBN) and Time Sharing Ltd. in the UK. By 1968, there were 32 such service bureaus serving the US National Institutes of Health (NIH) alone. The Auerbach Guide to Timesharing (1973) lists 125 different timesharing services using equipment from Burroughs, CDC, DEC, HP, Honeywell, IBM, RCA, Univac, and XDS.
Rise and fall
In 1975, it was said about one of the major super-mini computer manufacturers that "The biggest end-user market currently is time-sharing." For DEC, for a while the second largest computer company (after IBM), this was also true: Their PDP-10 and IBM's 360/67 were widely used by commercial timesharing services such as CompuServe, On-Line Systems (OLS), Rapidata and Time Sharing Ltd.
The advent of the personal computer marked the beginning of the decline of time-sharing. The economics were such that computer time went from being an expensive resource that had to be shared to being so cheap that computers could be left to sit idle for long periods in order to be available as needed.
Rapidata as an example
Although many time-sharing services simply closed, Rapidata held on, and became part of National Data Corporation. It was still of sufficient interest in 1982 to be the focus of "A User's Guide to Statistics Programs: The Rapidata Timesharing System". Even as revenue fell by 66% and National Data subsequently developed its own problems, attempts were made to keep this timesharing business going.
UK
* Time Sharing Limited (TSL, 1969-1974) - launched using DEC systems. PERT was one of its popular offerings. TSL was acquired by ADP in 1974.
* OLS Computer Services (UK) Limited (1975-1980) - using HP & DEC systems.
The computer utility
Beginning in 1964, the Multics operating system was designed as a computing utility, modeled on the electrical or telephone utilities. In the 1970s, Ted Nelson's original "Xanadu" hypertext repository was envisioned as such a service. As the computer industry grew, however, it seemed that no such consolidation of computing resources into utility-like timesharing systems would occur. In the 1990s the concept was nevertheless revived in somewhat modified form under the banner of cloud computing.
Security
Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another. For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user was using the system, this would result in possibly wrong output - but with multiple users, this might mean that other users got to see information they were not meant to see.
To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process.
The first international conference on computer security in London in 1971 was primarily driven by the time-sharing industry and its customers.
1363) Bipolar disorder
Summary
Bipolar disorder, previously known as manic depression, is a mood disorder characterized by periods of depression and periods of abnormally-elevated happiness that last from days to weeks each. If the elevated mood is severe or associated with psychosis, it is called mania; if it is less severe, it is called hypomania. During mania, an individual behaves or feels abnormally energetic, happy or irritable, and they often make impulsive decisions with little regard for the consequences. There is usually also a reduced need for sleep during manic phases. During periods of depression, the individual may experience crying and have a negative outlook on life and poor eye contact with others. The risk of suicide is high; over a period of 20 years, 6% of those with bipolar disorder died by suicide, while 30–40% engaged in self-harm. Other mental health issues, such as anxiety disorders and substance use disorders, are commonly associated with bipolar disorder.
While the causes of bipolar disorder are not clearly understood, both genetic and environmental factors are thought to play a role. Many genes, each with small effects, may contribute to the development of the disorder. Genetic factors account for about 70–90% of the risk of developing bipolar disorder. Environmental risk factors include a history of childhood abuse and long-term stress. The condition is classified as bipolar I disorder if there has been at least one manic episode, with or without depressive episodes, and as bipolar II disorder if there has been at least one hypomanic episode (but no full manic episodes) and one major depressive episode. If these symptoms are due to drugs or medical problems, they are not diagnosed as bipolar disorder. Other conditions that have overlapping symptoms with bipolar disorder include attention deficit hyperactivity disorder, personality disorders, schizophrenia, and substance use disorder as well as many other medical conditions. Medical testing is not required for a diagnosis, though blood tests or medical imaging can rule out other problems.
Mood stabilizers—lithium and certain anticonvulsants such as valproate and carbamazepine as well as atypical antipsychotics such as aripiprazole—are the mainstay of long-term pharmacologic relapse prevention. Antipsychotics are additionally given during acute manic episodes as well as in cases where mood stabilizers are poorly tolerated or ineffective. In patients where compliance is of concern, long-acting injectable formulations are available. There is some evidence that psychotherapy improves the course of this disorder. The use of antidepressants in depressive episodes is controversial: they can be effective but have been implicated in triggering manic episodes. The treatment of depressive episodes, therefore, is often difficult. Electroconvulsive therapy (ECT) is effective in acute manic and depressive episodes, especially with psychosis or catatonia. Admission to a psychiatric hospital may be required if a person is a risk to themselves or others; involuntary treatment is sometimes necessary if the affected person refuses treatment.
Bipolar disorder occurs in approximately 1% of the global population. In the United States, about 3% are estimated to be affected at some point in their life; rates appear to be similar in females and males. Symptoms most commonly begin between the ages of 20 and 25 years old; an earlier onset in life is associated with a worse prognosis. Interest in functioning in the assessment of patients with bipolar disorder is growing, with an emphasis on specific domains such as work, education, social life, family, and cognition. Around one-quarter to one-third of people with bipolar disorder have financial, social or work-related problems due to the illness. Bipolar disorder is among the top 20 causes of disability worldwide and leads to substantial costs for society. Due to lifestyle choices and the side effects of medications, the risk of death from natural causes such as coronary heart disease in people with bipolar disorder is twice that of the general population.
Details
Bipolar disorder, formerly called manic depression or manic-depressive illness, is a mental disorder characterized by recurrent depression or mania with abrupt or gradual onsets and recoveries. There are several types of bipolar disorder, in which the states of mania and depression may alternate cyclically, one mood state may predominate over the other, or they may be mixed or combined with each other. Examples of types of the disorder, which encompass the so-called bipolar spectrum, include bipolar I, bipolar II, mixed bipolar, and cyclothymia.
A bipolar person in the depressive phase may be sad, despondent, listless, lacking in energy, and unable to show interest in his or her surroundings or to enjoy himself or herself and may have a poor appetite and disturbed sleep. The depressive state can be agitated—in which case sustained tension, overactivity, despair, and apprehensive delusions predominate—or it can be retarded—in which case the person’s activity is slowed and reduced, the person is sad and dejected, and he or she suffers from self-depreciatory and self-condemnatory tendencies.
Mania is a mood disturbance that is characterized by abnormally intense excitement, elation, expansiveness, boisterousness, talkativeness, distractibility, and irritability. The manic person talks loudly, rapidly, and continuously and progresses rapidly from one topic to another; is extremely enthusiastic, optimistic, and confident; is highly sociable and gregarious; gesticulates and moves about almost continuously; is easily irritated and easily distracted; is prone to grandiose notions; and shows an inflated sense of self-esteem. The most extreme manifestations of these two mood disturbances are, in the manic phase, violence against others and, in the depressive, suicide.
A bipolar disorder may also feature such psychotic symptoms as delusions and hallucinations. Depression is the more common symptom, and many patients never develop a genuine manic phase, although they may experience a brief period of overoptimism and mild euphoria while recovering from a depression.
Bipolar disorders of varying severity affect about 1 percent of the general population and account for 10 to 15 percent of readmissions to mental institutions. Statistical studies have suggested a hereditary predisposition to bipolar disorder, and that predisposition has now been linked to a defect on a dominant gene located on chromosome 11. In addition, bipolar disorder has been associated with polygenic factors, meaning that multiple, possibly thousands, of small-effect genetic variants can interact to give rise to the disease. Schizophrenia shares a similar polygenic component, suggesting that the two disorders may have a common origin.
In a physiological sense, it is believed that bipolar disorder is associated with the faulty regulation of one or more naturally occurring amines at sites in the brain where the transmission of nerve impulses takes place. Abnormal regulation that produces a deficiency of the amines appears to be associated with depression, and an excess of amines is associated with mania. The most likely candidates for the suspect amines are norepinephrine, dopamine, and serotonin (5-hydroxytryptamine).
Bipolar disorder requires long-term therapy. It is managed most effectively with a combination of medication, psychotherapy, and social support. Patients who are hospitalized during a severe bout of depression or mania often are given medications in an attempt to balance mood. Medications that may be used include lithium carbonate, antidepressants, anticonvulsants, and anxiolytics (antianxiety drugs). Once the patient’s mood has been stabilized, a long-term treatment strategy can be devised. Certain medications, such as lithium or antidepressants, may be used on a long-term basis and can help alleviate or even eliminate symptoms. Long-term pharmacological therapy often is supported with psychotherapy or group therapy. Shock therapy is reserved for persons whose mania or depression remains severe despite other forms of treatment and for women who are pregnant and therefore unable to take medications.
Bipolar disorder was described in antiquity by the 2nd-century Greek physician Aretaeus of Cappadocia and definitively in modern times by the German psychiatrist Emil Kraepelin.
1364) Affective disorder
Summary
A mood disorder, also known as an affective disorder, is any of a group of conditions of mental and behavioral disorder where a disturbance in the person's mood is the main underlying feature. The classification is in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and International Classification of Diseases (ICD).
Mood disorders fall into seven groups, including: abnormally elevated mood, such as mania or hypomania; depressed mood, of which the best-known and most researched is major depressive disorder (MDD) (alternatively known as clinical depression, unipolar depression, or major depression); and moods which cycle between mania and depression, known as bipolar disorder (BD) (formerly known as manic depression). There are several sub-types of depressive disorders or psychiatric syndromes featuring less severe symptoms, such as dysthymic disorder (similar to MDD, but longer lasting and more persistent, though often milder) and cyclothymic disorder (similar to but milder than BD).
In some cases, more than one mood disorder can be present in an individual, like bipolar disorder and depressive disorder. If a mood disorder and schizophrenia are both present in an individual, this is known as schizoaffective disorder. Mood disorders may also be substance induced, or occur in response to a medical condition.
English psychiatrist Henry Maudsley proposed an overarching category of affective disorder. The term was then replaced by mood disorder, as the latter term refers to the underlying or longitudinal emotional state, whereas the former refers to the external expression observed by others.
Details
Affective disorder is a mental disorder characterized by dramatic changes or extremes of mood. Affective disorders may include manic (elevated, expansive, or irritable mood with hyperactivity, pressured speech, and inflated self-esteem) or depressive (dejected mood with disinterest in life, sleep disturbance, agitation, and feelings of worthlessness or guilt) episodes, and often combinations of the two. Persons with an affective disorder may or may not have psychotic symptoms such as delusions, hallucinations, or other loss of contact with reality.
In manic-depressive disorders, periods of mania and depression may alternate with abrupt onsets and recoveries. Depression is the more common symptom, and many patients never develop a genuine manic phase, although they may experience a brief period of overoptimism and mild euphoria while recovering from a depression. The most extreme manifestation of mania is violence against others, while that of depression is suicide. Statistical studies have suggested a hereditary predisposition to the disorder, which commonly appears for the first time in young adults.
Manic-depressive disorders were described in antiquity by the 2nd-century Greek physician Aretaeus of Cappadocia and in modern times by the German psychiatrist Emil Kraepelin. The current term is derived from folie maniaco-mélancholique, which was introduced in the 17th century. See also manic-depressive psychosis.
1365) Steam
Summary
Steam is water in the gas phase. This may occur due to evaporation or due to boiling, where heat is applied until water reaches the enthalpy of vaporization. Steam that is saturated or superheated is invisible; however, "steam" often refers to wet steam, the visible mist or aerosol of water droplets formed as water vapour condenses.
Water expands to roughly 1,700 times its liquid volume when it vaporizes at atmospheric pressure; this change in volume can be converted into mechanical work by steam engines such as reciprocating piston type engines and steam turbines, which are a sub-group of steam engines. Piston type steam engines played a central role in the Industrial Revolution and modern steam turbines are used to generate more than 80% of the world's electricity. If liquid water comes in contact with a very hot surface or depressurizes quickly below its vapor pressure, it can create a steam explosion.
Details
Steam is an odourless, invisible gas consisting of vaporized water. It is usually interspersed with minute droplets of water, which gives it a white, cloudy appearance. In nature, steam is produced by the heating of underground water by volcanic processes and is emitted from hot springs, geysers, fumaroles, and certain types of volcanoes. Steam also can be generated on a large scale by technological systems, as, for example, those employing fossil-fuel-burning boilers and nuclear reactors.
Steam power constitutes an important power source for industrial society. Water is heated to steam in power plants, and the pressurized steam drives turbines that produce electrical current. The thermal energy of steam is thus converted to mechanical energy, which in turn is converted into electricity. The steam used to drive turbogenerators furnishes most of the world’s electric power. Steam is also widely employed in such industrial processes as the manufacture of steel, aluminum, copper, and nickel; the production of chemicals; and the refining of petroleum. In the home, steam has long been used for cooking and heating.
Steam is useful in power generation because of the unusual properties of water. The manifold hydrogen bonds among water molecules mean that water has a high boiling point and a high latent heat of vaporization compared with other liquids; that is, it takes considerable heat to turn liquid water into steam, heat which becomes available again when the steam is condensed. The boiling point and the heat of vaporization both depend on ambient pressure. At the standard atmospheric pressure of 101 kilopascals (14.7 pounds per square inch), water boils at 100 °C (212 °F). At higher or lower pressures, more or less molecular energy, respectively, is required to allow water molecules to escape from the liquid to the gaseous state; correspondingly, the boiling point becomes higher or lower. The heat of vaporization, defined as the amount of energy needed to evaporate a unit mass of liquid (in engineering practice, a unit weight), also varies with pressure. At standard atmospheric pressure it is 2,260 kilojoules per kg (972 BTU [British thermal units] per pound).
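A quick back-of-the-envelope check of the figures above (the 1 kg mass and the rounded constants are assumptions for the example, and treating steam as an ideal gas is an approximation):

```python
LATENT_HEAT_KJ_PER_KG = 2260      # heat of vaporization at atmospheric pressure, from the text
R = 8.314                         # ideal gas constant, J/(mol*K)
MOLAR_MASS_WATER_KG = 0.018       # kg per mole of H2O

mass_kg = 1.0
print(f"Heat to boil {mass_kg} kg of water already at 100 C: "
      f"{mass_kg * LATENT_HEAT_KJ_PER_KG:.0f} kJ")          # 2260 kJ

# Rough check of the ~1,700-fold expansion: treat steam at 100 C and 1 atm as an ideal gas.
moles = mass_kg / MOLAR_MASS_WATER_KG
steam_volume_m3 = moles * R * 373.15 / 101_325               # PV = nRT
liquid_volume_m3 = mass_kg / 1000.0                          # about 1 litre per kg of water
print(f"Expansion on vaporization: about {steam_volume_m3 / liquid_volume_m3:.0f} times")
```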
1366) Ice
Summary
Ice is water frozen into a solid state, typically forming at or below temperatures of 0 degrees Celsius or 32 degrees Fahrenheit. Depending on the presence of impurities such as particles of soil or bubbles of air, it can appear transparent or a more or less opaque bluish-white color.
In the Solar System, ice is abundant and occurs naturally from as close to the Sun as Mercury to as far away as the Oort cloud objects. Beyond the Solar System, it occurs as interstellar ice. It is abundant on Earth's surface – particularly in the polar regions and above the snow line – and, as a common form of precipitation and deposition, plays a key role in Earth's water cycle and climate. It falls as snowflakes and hail or occurs as frost, icicles or ice spikes and aggregates from snow as glaciers and ice sheets.
Ice exhibits at least eighteen phases (packing geometries), depending on temperature and pressure. When water is cooled rapidly (quenching), up to three types of amorphous ice can form, depending on its history of pressure and temperature. When it is cooled slowly, correlated proton tunneling occurs below −253.15 °C (20 K, −423.67 °F), giving rise to macroscopic quantum phenomena. Virtually all ice on Earth's surface and in its atmosphere has the hexagonal crystalline structure denoted ice Ih (spoken as "ice one h"), with minute traces of cubic ice (ice Ic) and, as found more recently, of ice VII inclusions in diamonds. The most common phase transition to ice Ih occurs when liquid water is cooled below 0 °C (273.15 K, 32 °F) at standard atmospheric pressure. Ice may also be deposited directly from water vapor, as happens in the formation of frost. The transition from ice to water is melting, and the transition from ice directly to water vapor is sublimation.
Ice is used in a variety of ways, including for cooling, for winter sports, and ice sculpting.
Details
Ice is a solid substance produced by the freezing of water vapour or liquid water. At temperatures below 0 °C (32 °F), water vapour develops into frost at ground level and snowflakes (each of which consists of a single ice crystal) in clouds. Below the same temperature, liquid water forms a solid, as, for example, river ice, sea ice, hail, and ice produced commercially or in household refrigerators.
Ice occurs on Earth’s continents and surface waters in a variety of forms. Most notable are the continental glaciers (ice sheets) that cover much of Antarctica and Greenland. Smaller masses of perennial ice called ice caps occupy parts of Arctic Canada and other high-latitude regions, and mountain glaciers occur in more restricted areas, such as mountain valleys and the flatlands below. Other occurrences of ice on land include the different types of ground ice associated with permafrost—that is, permanently frozen soil common to very cold regions. In the oceanic waters of the polar regions, icebergs occur when large masses of ice break off from glaciers or ice shelves and drift away. The freezing of seawater in these regions results in the formation of sheets of sea ice known as pack ice. During the winter months similar ice bodies form on lakes and rivers in many parts of the world. This article treats the structure and properties of ice in general. Ice in lakes and rivers, glaciers, icebergs, pack ice, and permafrost are treated separately in articles under their respective titles. For a detailed account of the widespread occurrences of glacial ice during Earth’s past, see the articles geochronology and climate.
Structure
The water molecule
Ice is the solid state of water, a normally liquid substance that freezes to the solid state at temperatures of 0 °C (32 °F) or lower and vaporizes to the gaseous state at temperatures of 100 °C (212 °F) or higher. Water is an extraordinary substance, anomalous in nearly all its physical and chemical properties and easily the most complex of all the familiar substances that are single-chemical compounds. Consisting of two atoms of hydrogen (H) and one atom of oxygen (O), the water molecule has the chemical formula H2O. These three atoms are covalently bonded (i.e., their nuclei are linked by attraction to shared electrons) and form a specific structure, with the oxygen atom located between the two hydrogen atoms. The three atoms do not lie in a straight line, however. Instead, the hydrogen atoms are bent toward each other, forming an angle of about 105°.
The three-dimensional structure of the water molecule can be pictured as a tetrahedron with an oxygen nucleus at its centre and four legs of high electron probability. The two legs in which the hydrogen nuclei are present are called bonding orbitals. Opposite the bonding orbitals and directed to the opposite corners of the tetrahedron are two legs of negative electrical charge. Known as the lone-pair orbitals, these are the keys to water’s peculiar behaviour, in that they attract the hydrogen nuclei of adjacent water molecules to form what are called hydrogen bonds. These bonds are not especially strong, but, because they orient the water molecules into a specific configuration, they significantly affect the properties of water in its solid, liquid, and gaseous states.
In the liquid state, most water molecules are associated in a polymeric structure—that is, chains of molecules connected by weak hydrogen bonds. Under the influence of thermal agitation, there is a constant breaking and reforming of these bonds. In the gaseous state, whether steam or water vapour, water molecules are largely independent of one another, and, apart from collisions, interactions between them are slight. Gaseous water, then, is largely monomeric—i.e., consisting of single molecules—although there occasionally occur dimers (a union of two molecules) and even some trimers (a combination of three molecules). In the solid state, at the other extreme, water molecules interact with one another strongly enough to form an ordered crystalline structure, with each oxygen atom collecting the four nearest of its neighbours and arranging them about itself in a rigid lattice. This structure is more open, and hence less dense, than the closely packed assembly of molecules in the liquid phase. For this reason, water is one of the few substances that is less dense in solid form than in the liquid state, its density dropping from 1,000 to 917 kilograms per cubic metre on freezing. This is why ice floats rather than sinks, so that during the winter it develops as a sheet on the surface of lakes and rivers rather than sinking below the surface and accumulating from the bottom.
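A small sketch of what those two densities imply for floating ice: by Archimedes' principle (standard physics, assumed here rather than stated in the article), the fraction of a floating ice mass that sits below the waterline equals the ratio of the two densities quoted above.

DENSITY_ICE = 917.0     # kg per cubic metre (quoted above)
DENSITY_WATER = 1000.0  # kg per cubic metre (quoted above)

# A floating body displaces its own weight of water, so the submerged
# fraction of floating ice equals density_ice / density_water.
submerged_fraction = DENSITY_ICE / DENSITY_WATER
print(f"{submerged_fraction:.1%} of a floating ice mass lies below the waterline")  # 91.7%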
As water is warmed from the freezing point of 0 to 4 °C (from 32 to 39 °F), it contracts and becomes denser. This initial increase in density takes place because at 0 °C a portion of the water consists of open-structured molecular arrangements similar to those of ice crystals. As the temperature increases, these structures break down and reduce their volume to that of the more closely packed polymeric structures of the liquid state. With further warming beyond 4 °C, the water begins to expand in volume, along with the usual increase in intermolecular vibrations caused by thermal energy.
The ice crystal
At standard atmospheric pressure and at temperatures near 0 °C, the ice crystal commonly takes the form of sheets or planes of oxygen atoms joined in a series of open hexagonal rings. The axis perpendicular to these hexagonal rings is termed the c-axis and coincides with the optical axis of the crystal structure.
When viewed perpendicular to the c-axis, the planes appear slightly dimpled. The planes are stacked in a laminar structure that occasionally deforms by gliding, like a deck of cards. When this gliding deformation occurs, the bonds between the layers break, and the hydrogen atoms involved in those bonds must become attached to different oxygen atoms. In doing so, they migrate within the lattice, more rapidly at higher temperatures. Sometimes they do not reach the usual arrangement of two hydrogen atoms covalently bonded to each oxygen atom, so that some oxygen atoms are left with only one, or as many as three, attached hydrogen atoms. Such oxygen atoms become the sites of electrical charge. The speed of crystal deformation depends on these readjustments, which in turn are sensitive to temperature. Thus the mechanical, thermal, and electrical properties of ice are interrelated.
Properties
Mechanical properties
Like any other crystalline solid, ice subject to stress undergoes elastic deformation, returning to its original shape when the stress ceases. However, if a shear stress or force is applied to a sample of ice for a long time, the sample will first deform elastically and will then continue to deform plastically, with a permanent alteration of shape. This plastic deformation, or creep, is of great importance to the study of glacier flow. It involves two processes: intracrystalline gliding, in which the layers within an ice crystal shear parallel to each other without destroying the continuity of the crystal lattice, and recrystallization, in which crystal boundaries change in size or shape depending on the orientation of the adjacent crystals and the stresses exerted on them. The motion of dislocations—that is, of defects or disorders in the crystal lattice—controls the speed of plastic deformation. Dislocations do not move under elastic deformation.
The strength of ice, which depends on many factors, is difficult to measure. If ice is stressed for a long time, it deforms by plastic flow and has no yield point (at which permanent deformation begins) or ultimate strength. For short-term experiments with conventional testing machines, typical strength values in bars are 38 for crushing, 14 for bending, 9 for tensile, and 7 for shear.
Thermal properties
The heat of fusion (heat absorbed on melting of a solid) of water is 334 kilojoules per kilogram. The specific heat of ice at the freezing point is 2.04 kilojoules per kilogram per degree Celsius. The thermal conductivity at this temperature is 2.24 watts per metre kelvin.
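A quick arithmetic sketch using the two figures just quoted: warming 1 kg of ice from −10 °C to the melting point takes far less heat than actually melting it.

SPECIFIC_HEAT_ICE = 2.04  # kJ per kg per degree C, near the freezing point (quoted above)
HEAT_OF_FUSION = 334.0    # kJ per kg (quoted above)

def heat_to_melt(mass_kg, start_temp_c):
    """Heat in kJ to warm ice from start_temp_c up to 0 degrees C and melt it completely."""
    warming = mass_kg * SPECIFIC_HEAT_ICE * (0.0 - start_temp_c)
    melting = mass_kg * HEAT_OF_FUSION
    return warming + melting

print(heat_to_melt(1.0, -10.0))  # 354.4 kJ: 20.4 kJ of warming plus 334 kJ of melting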
Another property of importance to the study of glaciers is the lowering of the melting point due to hydrostatic pressure: 0.0074 °C per bar. Thus for a glacier 300 metres (984 feet) thick, everywhere at the melting temperature, the ice at the base is 0.25 °C (0.45 °F) colder than at the surface.
Optical properties
Pure ice is transparent, but air bubbles render it somewhat opaque. The absorption coefficient, or rate at which incident radiation decreases with depth, is about 0.1 cm⁻¹ for snow and only 0.001 cm⁻¹ or less for clear ice. Ice is weakly birefringent, or doubly refracting, which means that light is transmitted at different speeds in different crystallographic directions. Thin sections of snow or ice therefore can be conveniently studied under polarized light in much the same way that rocks are studied. The ice crystal strongly absorbs light in the red wavelengths, and thus the scattered light seen emerging from glacier crevasses and unweathered ice faces appears as blue or green.
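A brief sketch of what those absorption coefficients imply, using the standard exponential (Beer-Lambert) attenuation law, which is assumed here rather than stated in the article:

import math

def transmitted_fraction(absorption_coeff_per_cm, depth_cm):
    """Fraction of incident radiation remaining after travelling depth_cm through the medium."""
    return math.exp(-absorption_coeff_per_cm * depth_cm)

print(f"10 cm of snow:      {transmitted_fraction(0.1, 10):.0%} transmitted")    # about 37%
print(f"10 cm of clear ice: {transmitted_fraction(0.001, 10):.0%} transmitted")  # about 99%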
Electromagnetic properties
The albedo, or reflectivity (an albedo of 0 means that there is no reflectivity), to solar radiation ranges from 0.5 to 0.9 for snow, 0.3 to 0.65 for firn, and 0.15 to 0.35 for glacier ice. At the thermal infrared wavelengths, snow and ice are almost perfectly “black” (absorbent), and the albedo is less than 0.01. This means that snow and ice can either absorb or radiate long-wavelength radiation with high efficiency. At longer electromagnetic wavelengths (microwave and radio frequencies), dry snow and ice are relatively transparent, although the presence of even small amounts of liquid water greatly modifies this property. Radio echo sounding (radar) techniques are now used routinely to measure the thickness of dry polar glaciers, even where they are kilometres in thickness, but the slightest amount of liquid water distributed through the mass creates great difficulties with the technique.
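As a small illustration of the albedo figures above, the sketch below compares the solar energy absorbed by different surfaces. The albedo values chosen are representative points within the quoted ranges, and the 300 W/m² incident flux is an arbitrary illustrative number, not a figure from the article.

INCIDENT_SOLAR = 300.0  # W per square metre, assumed for illustration only

# Representative albedos taken from within the ranges quoted above.
for surface, albedo in [("fresh snow", 0.8), ("firn", 0.5), ("glacier ice", 0.25)]:
    absorbed = (1.0 - albedo) * INCIDENT_SOLAR
    print(f"{surface:12s} albedo {albedo:.2f} -> absorbs about {absorbed:.0f} W/m^2")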
1367) Marble
Summary
Marble is a metamorphic rock composed of recrystallized carbonate minerals, most commonly calcite or dolomite. Marble is typically not foliated, although there are exceptions. In geology, the term marble refers to metamorphosed limestone, but its use in stonemasonry more broadly encompasses unmetamorphosed limestone. Marble is commonly used for sculpture and as a building material.
Details
Marble is a granular limestone or dolomite (i.e., rock composed of calcium-magnesium carbonate) that has been recrystallized under the influence of heat, pressure, and aqueous solutions. Commercially, it includes all decorative calcium-rich rocks that can be polished, as well as certain serpentines (verd antiques).
Petrographically marbles are massive rather than thin-layered and consist of a mosaic of calcite grains that rarely show any traces of crystalline form under the microscope. They are traversed by minute cracks that accord with the rhombohedral cleavage (planes of fracture that intersect to yield rhombic forms) of calcite. In the more severely deformed rocks, the grains show stripes and may be elongated in a particular direction or even crushed.
Marbles often occur interbedded with such metamorphic rocks as mica schists, phyllites, gneisses, and granulites and are most common in the older layers of Earth’s crust that have been deeply buried in regions of extreme folding and igneous intrusion. The change from limestones rich in fossils into true marbles in such metamorphic regions is a common phenomenon; occasionally, as at Carrara, Italy, and at Bergen, Norway, recrystallization of the rock has not completely obliterated the organic structures.
Most of the white and gray marbles of Alabama, Georgia, and western New England, and that from Yule, Colorado, are recrystallized rocks, as are a number of Greek and Italian statuary marbles famous from antiquity, which are still quarried. These include the Parian marble, the Pentelic marble of Attica in which Phidias, Praxiteles, and other Greek sculptors executed their principal works, and the snow-white Carrara marble used by Michelangelo and Antonio Canova and favoured by modern sculptors. The exterior of the National Gallery of Art in Washington, D.C., is of Tennessee marble, and the Lincoln Memorial contains marbles from Yule, Colorado, Alabama (roof transparencies), and Georgia (Lincoln statue).
Even the purest of the metamorphic marbles, such as that from Carrara, contain some accessory minerals, which, in many cases, form a considerable proportion of the mass. The commonest are quartz in small rounded grains, scales of colourless or pale-yellow mica (muscovite and phlogopite), dark shining flakes of graphite, iron oxides, and small crystals of pyrite.
Many marbles contain other minerals that are usually silicates of lime or magnesia. Diopside is very frequent and may be white or pale green; white bladed tremolite and pale-green actinolite also occur; the feldspar encountered may be a potassium variety but is more commonly a plagioclase (sodium-rich to calcium-rich) such as albite, labradorite, or anorthite. Scapolite, various kinds of garnet, vesuvianite, spinel, forsterite, periclase, brucite, talc, zoisite, wollastonite, chlorite, tourmaline, epidote, chondrodite, biotite, titanite, and apatite are all possible accessory minerals. Pyrrhotite, sphalerite, and chalcopyrite also may be present in small amounts.
These minerals represent impurities in the original limestone, which reacted during metamorphism to form new compounds. The alumina represents an admixture of clay; the silicates derive their silica from quartz and from clay; the iron came from limonite, hematite, or pyrite in the original sedimentary rock. In some cases the original bedding of the calcareous sediments can be detected by mineral banding in the marble. The silicate minerals, if present in any considerable amount, may colour the marble; e.g., green in the case of green pyroxenes and amphiboles; brown in that of garnet and vesuvianite; and yellow in that of epidote, chondrodite, and titanite. Black and gray colours result from the presence of fine scales of graphite.
Bands of calc-silicate rock may alternate with bands of marble or form nodules and patches, sometimes producing interesting decorative effects, but these rocks are particularly difficult to finish because of the great difference in hardness between the silicates and carbonate minerals.
Later physical deformation and chemical decomposition of the metamorphic marbles often produces attractive coloured and variegated varieties. Decomposition yields hematite, brown limonite, pale-green talc, and, in particular, the green or yellow serpentine derived from forsterite and diopside, which is characteristic of the ophicalcites or verd antiques. Earth movements may shatter the rocks, producing fissures that are afterward filled with veins of calcite; in this way the beautiful brecciated, or veined, marbles are produced. Sometimes the broken fragments are rolled and rounded by the flow of marble under pressure.
The so-called onyx marbles consist of concentric zones of calcite or aragonite deposited from cold-water solutions in caves and crevices and around the exits of springs. They are, in the strict sense, neither marble nor onyx, for true onyx is a banded chalcedony composed largely of silicon dioxide. Onyx marble was the “alabaster” of the ancients, but alabaster is now defined as gypsum, a calcium sulfate rock. These marbles are usually brown or yellow because of the presence of iron oxide. Well-known examples include the giallo antico (“antique yellow marble”) of the Italian antiquaries, the reddish-mottled Siena marble from Tuscany, the large Mexican deposits at Tecali near Mexico City and at El Marmol, California, and the Algerian onyx marble used in the buildings of Carthage and Rome and rediscovered near Oued-Abdallah in 1849.
Unmetamorphosed limestones showing interesting colour contrasts or fossil remains are used extensively for architectural purposes. The Paleozoic rocks (from 251 million to 542 million years in age) of Great Britain, for example, include “madrepore marbles” rich in fossil corals and “encrinital marble” containing crinoid stem and arm plates with characteristic circular cross sections. The shelly limestones of the Purbeck Beds, England, and the Sussex marble, both of Mesozoic Era (from 251 million to 65.5 million years ago), consist of masses of shells of freshwater snails embedded in blue, gray, or greenish limestone. They were a favourite material of medieval architects and may be seen in Westminster Abbey and a number of English cathedrals. Black limestones containing bituminous matter, which commonly emit a fetid odour when struck, are widely used; the well-known petit granit of Belgium is a black marble containing crinoid stem plates, derived from fossil echinoderms (invertebrate marine animals).
Uses
Marbles are used principally for buildings and monuments, interior decoration, statuary, table tops, and novelties. Colour and appearance are their most important qualities. Resistance to abrasion, which is a function of cohesion between grains as well as the hardness of the component minerals, is important for floor and stair treads. The ability to transmit light is important for statuary marble, which owes its lustre to light that penetrates about 12.7 to 38 mm (0.5 to 1.5 inches) into the stone before being reflected from the surfaces of deeper-lying crystals. Brecciated, coloured marbles, onyx marble, and verd antique are used principally for interior decoration and for novelties. Statuary marble, the most valuable variety, must be pure white and of uniform grain size. For endurance in exterior use, marble should be uniform and nonporous to prevent the entrance of water that might discolour the stone or cause disintegration by freezing. It also should be free from impurities such as pyrite that might lead to staining or weathering. Calcite marbles that are exposed to atmospheric moisture made acid by its contained carbon dioxide, sulfur dioxide, and other gases maintain a relatively smooth surface during weathering; but dolomite limestone may weather with an irregular, sandy surface from which the dolomite crystals stand out.
The main mineral in marbles is calcite, and this mineral’s variation in hardness, light transmission, and other properties in different directions has many practical consequences in preparing some marbles. Calcite crystals are doubly refractive: they split transmitted light into two rays and transmit more light along one crystallographic direction than along others; slabs prepared for uses in which translucency is significant are therefore cut parallel to the direction of greatest transmission. Bending of marble slabs has been attributed to the directional thermal expansion of calcite crystals on heating.
Quarrying
The use of explosives in the quarrying of marble is limited because of the danger of shattering the rock. Instead, channeling machines that utilize chisel-edged steel bars make cuts about 5 cm (2 inches) wide and a few metres deep. Wherever possible, advantage is taken of natural joints already present in the rock, and cuts are made in the direction of easiest splitting, which is a consequence of the parallel elongation of platy or fibrous minerals. The marble blocks outlined by joints and cuts are separated by driving wedges into drill holes. Mill sawing into slabs is done with sets of parallel iron blades that move back and forth and are fed by sand and water. The marble may be machined with lathes and carborundum wheels and is then polished with increasingly finer grades of abrasive. Even with the most careful quarrying and manufacturing methods, at least half of the total output of marble is waste. Some of this material is made into chips for terrazzo flooring and stucco wall finish. In various localities it is put to most of the major uses for which high-calcium limestone is suitable.
1368) Radiocarbon dating
Summary
Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method for determining the age of an object containing organic material by using the properties of radiocarbon, a radioactive isotope of carbon.
Willard Libby, who developed the method in the 1940s, received the Nobel Prize in Chemistry for this work in 1960.
Details
Carbon-14 dating, also called radiocarbon dating, is a method of age determination that depends upon the decay to nitrogen of radiocarbon (carbon-14). Carbon-14 is continually formed in nature by the interaction of neutrons with nitrogen-14 in the Earth’s atmosphere; the neutrons required for this reaction are produced by cosmic rays interacting with the atmosphere.
Radiocarbon present in molecules of atmospheric carbon dioxide enters the biological carbon cycle: it is absorbed from the air by green plants and then passed on to animals through the food chain. Radiocarbon decays slowly in a living organism, and the amount lost is continually replenished as long as the organism takes in air or food. Once the organism dies, however, it ceases to absorb carbon-14, so that the amount of the radiocarbon in its tissues steadily decreases. Carbon-14 has a half-life of 5,730 ± 40 years—i.e., half the amount of the radioisotope present at any given time will undergo spontaneous disintegration during the succeeding 5,730 years. Because carbon-14 decays at this constant rate, an estimate of the date at which an organism died can be made by measuring the amount of its residual radiocarbon.
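A minimal sketch of the age calculation implied by that half-life: since the remaining fraction after time t is (1/2)^(t/5730), the age follows as t = −5730 × log2(remaining fraction). Real radiocarbon dating also requires calibration against variations in atmospheric carbon-14, which this toy calculation ignores.

import math

HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14 quoted above

def radiocarbon_age(remaining_fraction):
    """Estimated years since death, given the fraction of the original carbon-14 left."""
    return -HALF_LIFE_YEARS * math.log2(remaining_fraction)

print(round(radiocarbon_age(0.5)))   # 5730 years  (one half-life)
print(round(radiocarbon_age(0.25)))  # 11460 years (two half-lives)
print(round(radiocarbon_age(0.10)))  # about 19035 years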
The carbon-14 method was developed by the American physicist Willard F. Libby about 1946. It has proved to be a versatile technique of dating fossils and archaeological specimens from 500 to 50,000 years old. The method is widely used by Pleistocene geologists, anthropologists, archaeologists, and investigators in related fields.
1369) Humanities
Summary
Humanities are academic disciplines that study aspects of human society and culture. In the Renaissance, the term contrasted with divinity and referred to what is now called classics, the main area of secular study in universities at the time. Today, the humanities are more frequently defined as any fields of study outside of professional training, mathematics, and the natural and social sciences.
The humanities use methods that are primarily critical or speculative and have a significant historical element, as distinguished from the mainly empirical approaches of the natural sciences; yet, unlike the sciences, they have no central discipline. The humanities include the study of ancient and modern languages, literature, philosophy, history, archaeology, anthropology, human geography, law, religion, and art.
Scholars in the humanities are "humanities scholars" or humanists. The term "humanist" also describes the philosophical position of humanism, which some "antihumanist" scholars in the humanities reject. The Renaissance scholars and artists are also known as humanists. Some secondary schools offer humanities classes usually consisting of literature, global studies, and art.
Human disciplines like history, folkloristics, and cultural anthropology study subject matter to which the manipulative experimental method does not apply, relying instead mainly on the comparative method and comparative research. Other methods used in the humanities include hermeneutics and source criticism.
Details
Humanities are those branches of knowledge that concern themselves with human beings and their culture or with analytic and critical methods of inquiry derived from an appreciation of human values and of the unique ability of the human spirit to express itself. As a group of educational disciplines, the humanities are distinguished in content and method from the physical and biological sciences and, somewhat less decisively, from the social sciences. The humanities include the study of all languages and literatures, the arts, history, and philosophy. The humanities are often organized as a school or administrative division in colleges and universities in the United States.
The modern conception of the humanities has its origin in the Classical Greek paideia, a course of general education dating from the Sophists in the mid-5th century BCE, which prepared young men for active citizenship in the polis, or city-state; and in Cicero’s humanitas (literally, “human nature”), a program of training proper for orators, first set forth in De oratore (Of the Orator) in 55 BCE. In the early Middle Ages the Church Fathers, including St. Augustine, himself a rhetorician, adapted paideia and humanitas—or the bonae (“good”), or liberales (“liberal”), arts, as they were also called—to a program of basic Christian education; mathematics, linguistic and philological studies, and some history, philosophy, and science were included.
The word humanitas, although not the substance of its component disciplines, dropped out of common use in the later Middle Ages but underwent a flowering and a transformation in the Renaissance. The term studia humanitatis (“studies of humanity”) was used by 15th-century Italian humanists to denote secular literary and scholarly activities (in grammar, rhetoric, poetry, history, moral philosophy, and ancient Greek and Latin studies) that the humanists thought to be essentially humane and Classical studies rather than divine ones. In the 18th century, Denis Diderot and the French Encyclopédistes censured studia humanitatis for what they claimed had by then become its dry, exclusive concentration on Latin and Greek texts and language. By the 19th century, when the purview of the humanities expanded, the humanities had begun to take their identity not so much from their separation from the realm of the divine as from their exclusion of the material and methods of the maturing physical sciences, which tended to examine the world and its phenomena objectively, without reference to human meaning and purpose.
Contemporary conceptions of the humanities resemble earlier conceptions in that they propose a complete educational program based on the propagation of a self-sufficient system of human values. But they differ in that they also propose to distinguish the humanities from the social sciences as well as from the physical sciences, and in that they dispute among themselves as to whether an emphasis on the subject matter or on the methods of the humanities is most effectual in accomplishing this distinction. In the late 19th century the German philosopher Wilhelm Dilthey called the humanities “the spiritual sciences” and “the human sciences” and described them, simply, as those areas of knowledge that lay outside of, and beyond, the subject matter of the physical sciences. On the other hand, Heinrich Rickert, an early 20th-century Neo-Kantian, argued that it is not subject matter but method of investigation that best characterizes the humanities; Rickert contended that whereas the physical sciences aim to move from particular instances to general laws, the human sciences are “idiographic”—they are devoted to the unique value of the particular within its cultural and human contexts and do not seek general laws. In the late 20th and early 21st centuries the American philosopher Martha Nussbaum emphasized the crucial importance of education in the humanities for maintaining a healthy democracy, for fostering a deeper understanding of human concerns and values, and for enabling students to rise above parochial perspectives and “the bondage of habit and custom” to become genuine citizens of the world.
1370) Barter system
Summary
Barter is the direct exchange of goods or services—without an intervening medium of exchange or money—either according to established rates of exchange or by bargaining. It is considered the oldest form of commerce. Barter is common among traditional societies, particularly in those communities with some developed form of market. Goods may be bartered within a group as well as between groups, although gift exchange probably accounts for most intragroup trade, particularly in small and relatively simple societies. Where barter and gift exchange coexist, the simple barter of ordinary household items or food is distinguished from ceremonial exchange (such as a potlatch), which serves purposes other than purely economic ones.
Details
In trade, barter (derived from baretor) is a system of exchange in which participants in a transaction directly exchange goods or services for other goods or services without using a medium of exchange, such as money. Economists distinguish barter from gift economies in many ways; barter, for example, features immediate reciprocal exchange, not one delayed in time. Barter usually takes place on a bilateral basis, but may be multilateral (if it is mediated through a trade exchange). In most developed countries, barter usually exists parallel to monetary systems only to a very limited extent. Market actors use barter as a replacement for money as the method of exchange in times of monetary crisis, such as when currency becomes unstable (such as hyperinflation or a deflationary spiral) or simply unavailable for conducting commerce.
No ethnographic studies have shown that any present or past society has used barter without any other medium of exchange or measurement, and anthropologists have found no evidence that money emerged from barter. They have instead found that gift-giving (credit extended on a personal basis, with an interpersonal balance maintained over the long term) was the most usual means of exchange of goods and services. Nevertheless, economists since the time of Adam Smith (1723–1790) have often, and inaccurately, imagined pre-modern societies as examples, using the supposed inefficiency of barter to explain the emergence of money, of "the" economy, and hence of the discipline of economics itself.
1371) Balance of Trade
Summary
The balance of trade, commercial balance, or net exports (sometimes symbolized as NX), is the difference between the monetary value of a nation's exports and imports over a certain time period. Sometimes a distinction is made between a balance of trade for goods versus one for services. The balance of trade measures a flow of exports and imports over a given period of time. The notion of the balance of trade does not mean that exports and imports are "in balance" with each other.
If a country exports a greater value than it imports, it has a trade surplus or positive trade balance, and conversely, if a country imports a greater value than it exports, it has a trade deficit or negative trade balance. As of 2016, about 60 out of 200 countries have a trade surplus. The notion that bilateral trade deficits are bad in and of themselves is overwhelmingly rejected by trade experts and economists.
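A minimal sketch of this definition (net exports NX = exports − imports); the figures used are invented for illustration only.

def trade_balance(exports, imports):
    """Return net exports (NX) and a label describing the sign of the balance."""
    nx = exports - imports
    if nx > 0:
        label = "trade surplus"
    elif nx < 0:
        label = "trade deficit"
    else:
        label = "balanced trade"
    return nx, label

# Hypothetical country exporting 250 (in some currency unit) and importing 300:
print(trade_balance(exports=250.0, imports=300.0))  # (-50.0, 'trade deficit')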
Details
Balance of trade is the difference in value over a period of time between a country’s imports and exports of goods and services, usually expressed in the unit of currency of a particular country or economic union (e.g., dollars for the United States, pounds sterling for the United Kingdom, or euros for the European Union). The balance of trade is part of a larger economic unit, the balance of payments (the sum total of all economic transactions between one country and its trading partners around the world), which includes capital movements (money flowing to a country paying high interest rates of return), loan repayment, expenditures by tourists, freight and insurance charges, and other payments.
If the exports of a country exceed its imports, the country is said to have a favourable balance of trade, or a trade surplus. Conversely, if the imports exceed exports, an unfavourable balance of trade, or a trade deficit, exists. According to the economic theory of mercantilism, which prevailed in Europe from the 16th to the 18th century, a favourable balance of trade was a necessary means of financing a country’s purchase of foreign goods and maintaining its export trade. This was to be achieved by establishing colonies that would buy the products of the mother country and would export raw materials (particularly precious metals), which were considered an indispensable source of a country’s wealth and power.
The assumptions of mercantilism were challenged by the classical economic theory of the late 18th century, when philosophers and economists such as Adam Smith argued that free trade is more beneficial than the protectionist tendencies of mercantilism and that a country need not maintain an even exchange or, for that matter, build a surplus in its balance of trade (or in its balance of payments).
A continuing surplus may, in fact, represent underutilized resources that could otherwise be contributing toward a country’s wealth, were they to be directed toward the purchase or production of goods or services. Furthermore, a surplus accumulated by a country (or group of countries) may have the potential of producing sudden and uneven changes in the economies of those countries in which the surplus is eventually spent.
Generally, the developing countries (unless they have a monopoly on a vital commodity) have particular difficulty maintaining surpluses since the terms of trade during periods of recession work against them; that is, they have to pay relatively higher prices for the finished goods they import but receive relatively lower prices for their exports of raw materials or unfinished goods.
1372) Hinterland
Summary
Hinterland is a German word meaning "the land behind" (a city, a port, or similar). Its use in English was first documented by the geographer George Chisholm in his Handbook of Commercial Geography (1888). Originally the term was associated with the area of a port in which materials for export and import are stored and shipped. Subsequently, the use of the word expanded to include any area under the influence of a particular human settlement.
Details
Hinterland, also called Umland, is a tributary region, either rural or urban or both, that is closely linked economically with a nearby town or city.
George G. Chisholm (Handbook of Commercial Geography, 1888) transcribed the German word hinterland (land in back of), as hinderland, and used it to refer to the backcountry of a port or coastal settlement. Chisholm continued to use hinderland in subsequent editions of his Handbook, but the use of hinterland, in the same context, gained more widespread acceptance. By the early 20th century the backcountry or tributary region of a port was usually called its hinterland.
As the study of ports became more sophisticated, maritime observers identified export and import hinterlands. An export hinterland is the backcountry region from which the goods being shipped from the port originate and an import hinterland is the backcountry region for which the goods shipped to the port are destined. Export and import hinterlands have complementary forelands that lie on the seaward side of the port. An export foreland is the region to which the goods being shipped from the port are bound and an import foreland is the region from which goods being shipped to the port originate.
In the early 20th century, Andre Allix adopted the German word Umland (“land around”) to describe the economic realm of an inland town, while continuing to accept hinterland in reference to ports. Allix pointed out that umland (now a standard English term) is found in late 19th-century German dictionaries, but suggested that its use in the sense of “environs” dates back to the 15th century.
Since Allix introduced the term umland, the differences between the meanings of hinterland and umland have become less distinct. The use of hinterland began to dominate references to coastal and inland tributary regions in the mid-20th century. Central-place hexagonal trade areas are often referred to as central-place hinterlands. The term urban hinterland has become commonplace when referring to city or metropolitan tributary regions that are closely tied to the central city. An example of a metropolitan hinterland is the Metropolitan Statistical Area (MSA) as designated by the U.S. Census Bureau. An MSA comprises a central city, defined by its corporate limits; an urbanized, built-up area contiguous to the central city; and a non-urbanized area, delimited on a county basis, that is economically tied to the central city.
1373) Pawnbroker
Summary
A pawnbroker is an individual or business (pawnshop or pawn shop) that offers secured loans to people, with items of personal property used as collateral. The items having been pawned to the broker are themselves called pledges or pawns, or simply the collateral. While many items can be pawned, pawnshops typically accept jewelry, musical instruments, home audio equipment, computers, video game systems, coins, gold, silver, televisions, cameras, power tools, firearms, and other relatively valuable items as collateral.
If an item is pawned for a loan (colloquially "hocked" or "popped"), the pawner may redeem it within a certain contractual period of time for the amount of the loan plus an agreed-upon amount of interest. The permissible loan period and the rate of interest are governed by law and by state commerce department policies, and pawnbroking is a licensed and regulated trade, much as banking is. If the loan is not paid (or extended, if applicable) within the time period, the pawned item will be offered for sale to other customers by the pawnbroker. Unlike other lenders, the pawnbroker does not report the defaulted loan on the customer's credit report, since the pawnbroker has physical possession of the item and may recoup the loan value through outright sale of the item. The pawnbroker also sells items that have been sold outright to them by customers. Some pawnshops are willing to trade items in their shop for items brought to them by customers.
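A small sketch of the redemption arithmetic described above, assuming simple (non-compounding) interest; the 10% monthly rate and three-month term are invented for illustration, since actual rates and periods are set by local law and the loan contract.

def redemption_amount(principal, monthly_rate, months):
    """Amount needed to redeem a pawned item: principal plus simple interest (assumed model)."""
    return principal * (1 + monthly_rate * months)

# Hypothetical 100-unit loan at 10% per month, redeemed after 3 months:
print(round(redemption_amount(principal=100.0, monthly_rate=0.10, months=3), 2))  # 130.0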
Details
Pawnbroking is a business of advancing loans to customers who have pledged household goods or personal effects as security on the loans. The trade of the pawnbroker is one of the oldest known to humanity; it existed in China 2,000 to 3,000 years ago. Ancient Greece and Rome were familiar with its operation; they laid the legal foundations on which modern statutory regulation was built.
Pawnbroking in the West may be traced to three different institutions of the European Middle Ages: the private pawnbroker, the public pawnshop, and the mons pietatis (“charity fund”). Usury laws in most countries prohibited the taking of interest, and private pawnbrokers were usually persons exempt from these laws by religion or regulation—Jews, for example. Their sometimes exorbitant interest rates, however, caused social unrest, which made public authorities aware of the need for alternative facilities for consumption loans. As early as 1198, Freising, a town in Bavaria, set up a municipal bank that accepted pledges and made loans against moderate interest charges. Such public pawnshops enjoyed only a comparatively short existence; their moderate charges did not cover the risks incurred in this type of business.
The church also recognized the need for institutions to make lawful loans to indigent debtors; the Order of Friars Minor (Franciscans) in Italy in 1462 was the first to establish montes pietatis (mons denoted any form of capital accumulation), which were charitable funds for the granting of interest-free loans secured by pledges to the poor. The money was obtained from gifts or bequests. Later, in order to prevent the premature exhaustion of funds, montes pietatis were compelled to charge interest and to sell by auction any pledges that became forfeit.
In the 18th century many states reverted to public pawnshops as a means of preventing exploitation of the poor. These suffered a decline toward the end of the 18th century because limitation of interest was thought to represent restriction, and the use of public funds seemed to stand for state monopoly. Most states returned again to a system of public pawnshops, however, after finding that complete freedom in pawning was harmful to debtors. In the 20th century, the public pawnshop predominated in the majority of countries on the European continent, sometimes alone, sometimes side by side with private pawnbrokers. Public pawnshops were never established in the United States.
The importance of pawnbroking has declined in the 20th century. Social policies have helped to mitigate the financial needs resulting from temporary interruptions in earnings; operating expenses of pawnshops have risen; and installment credit and personal loans from banks have become widely available.
1374) Night blindness
Summary
Nyctalopia, also called night-blindness, is a condition making it difficult or impossible to see in relatively low light. It is a symptom of several eye diseases. Night blindness may exist from birth, or be caused by injury or malnutrition (for example, vitamin A deficiency). It can be described as insufficient adaptation to darkness.
The most common cause of nyctalopia is retinitis pigmentosa, a disorder in which the rod cells in the retina gradually lose their ability to respond to the light. Patients with this genetic condition have progressive nyctalopia and eventually, their daytime vision may also be affected. In X-linked congenital stationary night blindness, from birth the rods either do not work at all, or work very little, but the condition does not get worse.
Another cause of night blindness is a deficiency of retinol, or vitamin A1, found in fish oils, liver and dairy products.
The opposite problem, the inability to see in bright light, is known as hemeralopia and is much rarer.
Since the outer area of the retina is made up of more rods than cones, loss of peripheral vision often results in night blindness. Individuals with night blindness not only see poorly at night but also require extra time for their eyes to adjust from brightly lit areas to dim ones. Contrast vision may also be greatly reduced.
Rods contain a receptor-protein called rhodopsin. When light falls on rhodopsin, it undergoes a series of conformational changes ultimately generating electrical signals which are carried to the brain via the optic nerve. In the absence of light, rhodopsin is regenerated. The body synthesizes rhodopsin from vitamin A, which is why a deficiency in vitamin A causes poor night vision.
Refractive "vision correction" surgery (especially PRK with the complication of "haze") may rarely cause a reduction in best night-time acuity due to the impairment of contrast sensitivity function (CSF) which is induced by intraocular light-scatter resulting from surgical intervention in the natural structural integrity of the cornea.
Details
Night blindness is a type of vision impairment also known as nyctalopia. People with night blindness experience poor vision at night or in dimly lit environments.
Although the term “night blindness” implies that you can’t see at night, this isn’t the case. You may just have more difficulty seeing or driving in darkness.
Some types of night blindness are treatable while other types aren’t. See your doctor to determine the underlying cause of your vision impairment. Once you know the cause of the problem, you can take steps to correct your vision.
What to look for
The sole symptom of night blindness is difficulty seeing in the dark. You’re more likely to experience night blindness when your eyes transition from a bright environment to an area of low light, such as when you leave a sunny sidewalk to enter a dimly lit restaurant.
You’re also likely to experience poor vision when driving due to the intermittent brightness of headlights and streetlights on the road.
What causes night blindness?
A few eye conditions can cause night blindness, including:
* nearsightedness, or blurred vision when looking at faraway objects
* cataracts, or clouding of the eye’s lens
* retinitis pigmentosa, which occurs when dark pigment collects in your retina and creates tunnel vision
* Usher syndrome, a genetic condition that affects both hearing and vision
Older adults have a greater risk of developing cataracts. They’re therefore more likely to have night blindness due to cataracts than children or young adults.
Vitamin A deficiency can also lead to night blindness. This cause is rare in the United States but is more common in parts of the world where diets may lack sufficient vitamin A.
Vitamin A, also called retinol, plays a role in transforming nerve impulses into images in the retina. The retina is a light-sensitive area in the back of your eye.
People who have pancreatic insufficiency, such as individuals with cystic fibrosis, have difficulty absorbing fat and are at a greater risk of having vitamin A deficiency because vitamin A is fat-soluble. This puts them at greater risk for developing night blindness.
People who have high blood glucose (sugar) levels or diabetes also have a higher risk of developing eye diseases, such as cataracts.
What are the treatment options for night blindness?
Your eye doctor will take a detailed medical history and examine your eyes to diagnose night blindness. You may also need to give a blood sample. Blood testing can measure your vitamin A and glucose levels.
Night blindness caused by nearsightedness, cataracts, or vitamin A deficiency is treatable. Corrective lenses, such as eyeglasses or contacts, can improve nearsighted vision both during the day and at night.
Let your doctor know if you still have trouble seeing in dim light even with corrective lenses.
Cataracts
Clouded portions of your eye’s lens are known as cataracts.
Cataracts can be removed through surgery. Your surgeon will replace your cloudy lens with a clear, artificial lens. Your night blindness will improve significantly after surgery if this is the underlying cause.
Vitamin A deficiency
If your vitamin A levels are low, your doctor might recommend vitamin supplements. Take the supplements exactly as directed.
Most people don’t have vitamin A deficiency because they have access to proper nutrition.
Genetic conditions
Genetic conditions that cause night blindness, such as retinitis pigmentosa, aren’t treatable. The gene that causes pigment to build up in the retina doesn’t respond to corrective lenses or surgery.
People who have this form of night blindness should avoid driving at night.
How can I prevent night blindness?
You can’t prevent night blindness that’s the result of birth defects or genetic conditions, such as Usher syndrome. You can, however, properly monitor your blood sugar levels and eat a balanced diet to make night blindness less likely.
Eat foods rich in antioxidants, vitamins, and minerals, which may help prevent cataracts. Also, choose foods that contain high levels of vitamin A to reduce your risk of night blindness.
Certain orange-colored foods are excellent sources of vitamin A, including:
* cantaloupes
* sweet potatoes
* carrots
* pumpkins
* butternut squash
* mangoes
Vitamin A is also in:
* spinach
* collard greens
* milk
* eggs
What’s the long-term outlook?
If you have night blindness, you should take precautions to keep yourself and others safe. Refrain from driving at night as much as possible until the cause of your night blindness is determined and, if possible, treated.
Arrange to do your driving during the day, or secure a ride from a friend, family member, or taxi service if you need to go somewhere at night.
Wearing sunglasses or a brimmed hat can also help reduce glare when you’re in a brightly lit environment, which can ease the transition into a darker environment.