Math Is Fun Forum

  Discussion about math, puzzles, games and fun.


#2001 2023-12-20 00:06:07

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2003) Architect

Gist

An architect is a person who plans, designs and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose.

Summary

An architect is a person who designs buildings and prepares plans to give to a builder. What the architect designs is called architecture. Architects make drawings with pens, pencils, and computers, and this is also called drafting. Sometimes the architect first makes small toy-sized buildings, called models, to show what the building will look like when it is done. Some of these models survive for hundreds of years, such as those at O.S Moamogo, South Africa.

Architects decide a building's size and shape, and what it will be made from. Architects need to be good at math and drawing. They need imagination. They must go to university and learn how to make a building's structure safe, so that it will not collapse. They should also know how to make a building attractive, so that people will enjoy using it.

Although there has been architecture for thousands of years, there have not always been architects. The great European cathedrals built in the Middle Ages were designed by a Master Builder, who scratched his designs on flat beds of plaster. Paper was not readily available in Europe at this time, and vellum or parchment were very expensive and could not be made in large sizes.

Some cathedrals took hundreds of years to build, so the Master Builder would die or retire and be replaced, and the plans often changed. Some, such as the Sagrada Família in Barcelona, have still never been finished.

An architect has a very important job, because his or her work will be seen and used by many people, probably for a very long time. If the design, materials and construction are good, the building should last for hundreds or even thousands of years. This is rarely the case.

Usually building cost is what limits the life of a building, but fire, war, changing needs or fashion can also shorten it. As towns and cities grow, it often becomes necessary to make roads wider, or perhaps to build a new train station. Architects are employed again, and so the city changes. Even very important buildings may be knocked down to make way for change.

Famous architects include: Frank Lloyd Wright, Fazlur Khan, Bruce Graham, Edward Durell Stone, Daniel Burnham, Adrian Smith, Frank Gehry, Gottfried Böhm, I. M. Pei, Antoni Gaudí, and Oscar Niemeyer.

Details

An architect is a person who plans, designs and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, the term architect derives from the Latin architectus, which derives from the Greek (arkhi-, chief + tekton, builder), i.e., chief builder.

The professional requirements for architects vary from location to location. An architect's decisions affect public safety and thus the architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. Practical, technical, and academic requirements for becoming an architect vary by jurisdiction though the formal study of architecture in academic institutions has played a pivotal role in the development of the profession.

Origins

Throughout ancient and medieval history, most architectural design and construction was carried out by artisans such as stonemasons and carpenters, who could rise to the role of master builder. Until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations referring to the same person, and were often used interchangeably. "Architect" derives from the Greek arkhitéktōn ("master builder", from arkhi-, "chief", and téktōn, "builder").

It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century, but became increasingly available after 1500. Pencils were in use for drawing by 1600. The availability of both paper and pencils allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, development was gradual and slow. Until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects.

Architecture

In most developed countries, only those qualified with an appropriate license, certification, or registration with a relevant body (often governmental) may legally practice architecture. Such licensure usually requires a university degree, successful completion of exams, and a training period. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not legally protected.

To practice architecture implies the ability to practice independently of supervision. The term building design professional (or design professional), by contrast, is a much broader term that includes professionals who practice independently under an alternate profession, such as engineering professionals, or those who assist in the practice of architecture under the supervision of a licensed architect, such as intern architects. In many places, independent, non-licensed individuals may perform design services outside the professional restrictions, such as the design of houses or other smaller structures.

Practice

In the architectural profession, technical and environmental knowledge, design, and construction management require an understanding of business as well as design. However, design is the driving force throughout the project and beyond. An architect accepts a commission from a client. The commission might involve preparing feasibility reports, building audits, and designing a building or several buildings, structures, and the spaces among them. The architect participates in developing the requirements the client wants in the building. Throughout the project (planning to occupancy), the architect coordinates a design team. Structural, mechanical, and electrical engineers are hired by the client or architect, who must ensure that the work is coordinated to construct the design.

Design role

The architect, once hired by a client, is responsible for creating a design concept that meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and put questions to the client, in order to ascertain all the requirements (and nuances) of the planned project.

Often the full brief is not clear at the beginning, which introduces a degree of risk into the design undertaking. The architect may make early proposals to the client which rework the terms of the brief. The "program" (or brief) is essential to producing a project that meets all the needs of the owner; it becomes a guide for the architect in creating the design concept.

Design proposal(s) are generally expected to be both imaginative and pragmatic. Much depends upon the time, place, finance, culture, and available crafts and technology in which the design takes place. The extent and nature of these expectations will vary. Foresight is a prerequisite when designing buildings as it is a very complex and demanding undertaking.

Any design concept during the early stage of its generation must take into account a great number of issues and variables including qualities of space(s), the end-use and life-cycle of these proposed spaces, connections, relations, and aspects between spaces including how they are put together and the impact of proposals on the immediate and wider locality. Selection of appropriate materials and technology must be considered, tested, and reviewed at an early stage in the design to ensure there are no setbacks (such as higher-than-expected costs) which could occur later in the project.

The site and its surrounding environment as well as the culture and history of the place, will also influence the design. The design must also balance increasing concerns with environmental sustainability. The architect may introduce (intentionally or not), aspects of mathematics and architecture, new or current architectural theory, or references to architectural history.

A key part of the design is that the architect often must consult with engineers, surveyors and other specialists throughout the design, ensuring that aspects such as structural supports and air conditioning elements are coordinated. The control and planning of construction costs are also a part of these consultations. Coordination of the different aspects involves a high degree of specialized communication including advanced computer technology such as building information modeling (BIM), computer-aided design (CAD), and cloud-based technologies. Finally, at all times, the architect must report back to the client who may have reservations or recommendations which might introduce further variables into the design.

Architects also deal with local and federal jurisdictions regarding regulations and building codes. The architect might need to comply with local planning and zoning laws such as required setbacks, height limitations, parking requirements, transparency requirements (windows), and land use. Some jurisdictions require adherence to design and historic preservation guidelines. Health and safety risks form a vital part of the current design, and in some jurisdictions, design reports and records are required to include ongoing considerations of materials and contaminants, waste management and recycling, traffic control, and fire safety.

Means of design

Previously, architects employed drawings to illustrate and generate design proposals. While conceptual sketches are still widely used by architects, computer technology has now become the industry standard. Furthermore, design may include the use of photos, collages, prints, linocuts, 3D scanning technology, and other media in design production. Increasingly, computer software is shaping how architects work. BIM technology allows for the creation of a virtual building that serves as an information database for the sharing of design and building information throughout the life-cycle of the building's design, construction, and maintenance. Virtual reality (VR) presentations are becoming more common for visualizing structural designs and interior spaces from the point-of-view perspective.
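
To make the BIM idea concrete, here is a minimal sketch of a BIM-style record in Python. This is a toy data model, not the schema of any real BIM product: the point is only that one element record carries geometric, construction, and maintenance information together, so every phase of the building's life reads from the same database.

from dataclasses import dataclass, field

@dataclass
class Element:
    """One building component in a toy BIM-style database."""
    element_id: str
    category: str                  # e.g. "wall", "door", "duct"
    dimensions_mm: tuple           # geometry lives with the element...
    material: str
    install_date: str = ""         # ...and so does construction information
    maintenance_notes: list = field(default_factory=list)

# Design, construction, and maintenance all consult the same record.
model = {"W-101": Element("W-101", "wall", (4000, 200, 2700), "concrete")}
model["W-101"].install_date = "2025-03-01"
model["W-101"].maintenance_notes.append("2030: inspect for cracking")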

Environmental role

Since the construction and operation of modern buildings emit large amounts of carbon dioxide into the atmosphere, increasing controls are being placed on buildings and associated technology to reduce emissions, increase energy efficiency, and make use of renewable energy sources. Renewable energy sources may be designed into the proposed building by local or national renewable energy providers. As a result, the architect is required to remain abreast of current regulations that are continually being updated. Some new developments exhibit extremely low energy use or passive solar building design. However, the architect is also increasingly being required to provide initiatives in a wider environmental sense. Examples of this include making provisions for low-energy transport, natural daylighting instead of artificial lighting, natural ventilation instead of air conditioning, pollution control and waste management, use of recycled materials, and employment of materials which can be easily recycled.

Construction role

As the design becomes more advanced and detailed, specifications and detail designs are made of all the elements and components of the building. Techniques in the production of a building are continually advancing which places a demand on the architect to ensure that he or she remains up to date with these advances.

Depending on the client's needs and the jurisdiction's requirements, the spectrum of the architect's services during each construction stage may be extensive (detailed document preparation and construction review) or less involved (such as allowing a contractor to exercise considerable design-build functions).

Architects typically put projects to tender on behalf of their clients, advise them on the award of the project to a general contractor, facilitate and administer a contract of agreement which is often between the client and the contractor. This contract is legally binding and covers a wide range of aspects including the insurance and commitments of all stakeholders, the status of the design documents, provisions for the architect's access, and procedures for the control of the works as they proceed. Depending on the type of contract utilized, provisions for further sub-contract tenders may be required. The architect may require that some elements are covered by a warranty which specifies the expected life and other aspects of the material, product, or work.

In most jurisdictions, prior notification to the relevant authority must be given before commencement of the project, giving the local authority notice to carry out independent inspections. The architect will then review and inspect the progress of the work in coordination with the local authority.

The architect will typically review contractor shop drawings and other submittals, prepare and issue site instructions, and provide Certificates for Payment to the contractor (see also Design-bid-build), which are based on the work done as well as any materials and other goods purchased or hired. In the United Kingdom and other countries, a quantity surveyor is often part of the team to provide cost consulting. With large, complex projects, an independent construction manager is sometimes hired to assist in the design and management of the construction.

In many jurisdictions, mandatory certification or assurance of the completed work, or of parts of works, is required. This demand for certification entails a high degree of risk; therefore, regular inspections of the work as it progresses on site are required to ensure that the built work complies with the design as well as with all relevant statutes and permissions.

Alternate practice and specializations

Recent decades have seen the rise of specializations within the profession. Many architects and architectural firms focus on certain project types (e.g. healthcare, retail, public housing, and event management), technological expertise, or project delivery methods. Some architects specialize in building code, building envelope, sustainable design, technical writing, historic preservation (US) or conservation (UK), and accessibility.

Many architects elect to move into real estate (property) development, corporate facilities planning, project management, construction management, chief sustainability officer roles, interior design, city planning, user experience design, and design research.

Additional Information

Architecture is the art and technique of designing and building, as distinguished from the skills associated with construction. The practice of architecture is employed to fulfill both practical and expressive requirements, and thus it serves both utilitarian and aesthetic ends. Although these two ends may be distinguished, they cannot be separated, and the relative weight given to each can vary widely. Because every society—settled or nomadic—has a spatial relationship to the natural world and to other societies, the structures they produce reveal much about their environment (including climate and weather), history, ceremonies, and artistic sensibility, as well as many aspects of daily life.

The characteristics that distinguish a work of architecture from other built structures are (1) the suitability of the work to use by human beings in general and the adaptability of it to particular human activities, (2) the stability and permanence of the work’s construction, and (3) the communication of experience and ideas through its form. All these conditions must be met in architecture. The second is a constant, while the first and third vary in relative importance according to the social function of buildings. If the function is chiefly utilitarian, as in a factory, communication is of less importance. If the function is chiefly expressive, as in a monumental tomb, utility is a minor concern. In some buildings, such as churches and city halls, utility and communication may be of equal importance.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2002 2023-12-21 00:05:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2004) Soldier

Gist

A soldier is a person who serves in an army, or a person engaged in military service; often, specifically, an enlisted person as distinguished from a commissioned officer (as in "the soldiers' mess and the officers' mess").

Summary

Introduction

A soldier is a member of the military. The military, or armed forces, protects a country’s land, sea, and airspace from foreign invasion. The armed forces are split up according to those divisions. An army protects the land, the navy protects the sea, and an air force protects the airspace. Some countries have a marine corps, but marines are part of the navy.

What Soldiers Do

Soldiers have one job: to protect their country. However, there are many different ways to do this. If the country is at war, many soldiers fight in combat. They use weapons and technology to help defeat the enemy. During peacetime, soldiers are alert to any danger or threat to their country. In addition to training for combat, each branch of the armed forces offers training for hundreds of different job opportunities. These jobs can be in such fields as engineering, medicine, computers, or finance.

Training

There are different levels, or ranks, of soldiers. A soldier rises through the ranks with experience and training. Soldiers who enlist in the military without a college education enter as privates. There are requirements in order to become a private. For instance, soldiers must generally be at least 18 years old, have a high school diploma, and be in good physical shape. (Recruits younger than 18 must have their parents' permission to join.) They then go to basic training, where they are prepared for life in the military. Basic training teaches new soldiers physical strength, mental discipline, self-confidence, loyalty, and the ability to follow orders. The U.S. armed forces require 7–13 weeks of basic training.

Officers in the military must have a college education. Many officers in the U.S. military come from a program called the Reserve Officer Training Corps (ROTC). Students participate in ROTC while they are in college. After graduation, they become officers and go to a specialized training school. Officers can also be trained at military academies. Each branch of the armed forces has a military academy.

In Canada the army, navy, and air force merged into a single fighting team—the Canadian Armed Forces—in 1968. The basic training for enlisted soldiers lasts 12 weeks.

Details

A soldier is a person who is a member of an army. A soldier can be a conscripted or volunteer enlisted person, a non-commissioned officer, a warrant officer, or an officer.

Etymology

The word soldier derives from the Middle English word soudeour, from Old French soudeer or soudeour, meaning mercenary, from soudee, meaning shilling's worth or wage, from sou or soud, shilling. The word is also related to the Medieval Latin soldarius, meaning soldier (literally, "one having pay"). These words ultimately derive from the Late Latin word solidus, referring to an ancient Roman coin used in the Byzantine Empire.

Occupational and other designations

In most armies, the word "soldier" has a general meaning that refers to all members of any army, distinct from more specialized military occupations that require different areas of knowledge and skill sets. "Soldiers" may be referred to by titles, names, nicknames, or acronyms that reflect an individual's military occupation specialty arm, service, or branch of military employment, their type of unit, or operational employment or technical use such as: trooper, tanker (a member of a tank crew), commando, dragoon, infantryman, guardsman, artilleryman, paratrooper, grenadier, ranger, sniper, engineer, sapper, craftsman, signaller, medic, rifleman, or gunner, among other terms. Some of these designations or their etymological origins have existed in the English language for centuries, while others are relatively recent, reflecting changes in technology, increased division of labor, or other factors. In the United States Army, a soldier's military job is designated as a Military Occupational Specialty (MOS), which includes a very wide array of MOS Branches and sub-specialties. One example of a nickname for a soldier in a specific occupation is the term "red caps" to refer to military police personnel in the British Army because of the colour of their headgear.

Infantry are sometimes called "grunts" in the United States Army (as well as in the U.S. Marine Corps) or "squaddies" (in the British Army). U.S. Army artillery crews, or "gunners," are sometimes referred to as "redlegs", from the service branch colour for artillery. U.S. soldiers are often called "G.I.s" (short for the term "Government Issue"). Such terms may be associated with particular wars or historical eras. "G.I." came into common use during World War II and after, but prior to and during World War I especially, American soldiers were called "Doughboys," while British infantry troops were often referred to as "Tommies" (short for the archetypal soldier "Tommy Atkins") and French infantry were called "Poilus" ("hairy ones").

Some formal or informal designations may reflect the status or changes in status of soldiers for reasons of gender, race, or other social factors. With certain exceptions, service as a soldier, especially in the infantry, had generally been restricted to males throughout world history. By World War II, women were actively deployed in Allied forces in different ways. Some notable female soldiers in the Soviet Union were honored as "Heroes of the Soviet Union" for their actions in the army or as partisan fighters. In the United Kingdom, women served in the Auxiliary Territorial Service (ATS) and later in the Women's Royal Army Corps (WRAC). Soon after its entry into the war, the U.S. formed the Women's Army Corps, whose female soldiers were often referred to as "WACs." These gender-segregated branches were disbanded in the last decades of the twentieth century and women soldiers were integrated into the standing branches of the military, although their ability to serve in armed combat was often restricted.

Race has historically been an issue restricting the ability of some people to serve in the U.S. Army. Until the American Civil War, Black soldiers fought in integrated and sometimes separate units, but at other times were not allowed to serve, largely due to fears about the possible effects of such service on the institution of legal slavery. Some Black soldiers, both freemen and men who had escaped from slavery, served in Union forces even before 1863, when the Emancipation Proclamation opened the door for the formation of Black units. After the war, Black soldiers continued to serve, but in segregated units, often subjected to physical and verbal racist abuse. The term "Buffalo Soldiers" was applied to some units fighting in the 19th century Indian Wars in the American West. Eventually, the phrase was applied more generally to segregated Black units, who often distinguished themselves in armed conflict and other service. In 1948, President Harry S. Truman issued an executive order for the end of segregation in the United States Armed Forces.

Service:

Conscription

Throughout history, individuals have often been compelled by force or law to serve in armies and other armed forces in times of war or other times. Modern forms of such compulsion are generally referred to as "conscription" or a "draft". Currently, many countries require registration for some form of mandatory service, although that requirement may be selectively enforced or exist only in law and not in practice. Usually the requirement applies to younger male citizens, though it may extend to women and non-citizen residents as well. In times of war, the requirements, such as age, may be broadened when additional troops are thought to be needed.

At different times and places, some individuals have been able to avoid conscription by having another person take their place. Modern draft laws may provide temporary or permanent exemptions from service or allow some other non-combatant service, as in the case of conscientious objectors.

In the United States, males aged 18-25 are required to register with the Selective Service System, which has responsibility for overseeing the draft. However, no draft has occurred since 1973, and the U.S. military has been able to maintain staffing through voluntary enlistment.

Enlistment

Soldiers in war may have various motivations for voluntarily enlisting and remaining in an army or other armed forces branch. In a study of 18th century soldiers' written records about their time in service, historian Ilya Berkovich suggests "three primary 'levers' of motivation ... 'coercive', 'remunerative', and 'normative' incentives." Berkovich argues that historians' assumptions that fear of coercive force kept unwilling conscripts in check and controlled rates of desertion have been overstated and that any pay or other remuneration for service as provided then would have been an insufficient incentive. Instead, "old-regime common soldiers should be viewed primarily as willing participants who saw themselves as engaged in a distinct and honourable activity." In modern times, soldiers have volunteered for armed service, especially in time of war, out of a sense of patriotic duty to their homeland or to advance a social, political, or ideological cause, while improved levels of remuneration or training might be more of an incentive in times of economic hardship. Soldiers might also enlist for personal reasons, such as following family or social expectations, or for the order and discipline provided by military training, as well as for the friendship and connection with their fellow soldiers afforded by close contact in a common enterprise.

In 2018, the RAND Corporation published the results of a study of contemporary American soldiers in Life as a Private: A Study of the Motivations and Experiences of Junior Enlisted Personnel in the U.S. Army. The study found that "soldiers join the Army for family, institutional, and occupational reasons, and many value the opportunity to become a military professional. They value their relationships with other soldiers, enjoy their social lives, and are satisfied with Army life." However, the authors cautioned that the survey sample consisted of only 81 soldiers and that "the findings of this study cannot be generalized to the U.S. Army as a whole or to any rank."

Length of service

The length of time that an individual is required to serve as a soldier has varied with country and historical period, whether that individual has been drafted or has voluntarily enlisted. Such service, depending on the army's need for staffing or the individual's fitness and eligibility, may involve fulfillment of a contractual obligation. That obligation might extend for the duration of an armed conflict or may be limited to a set number of years in active duty and/or inactive duty.

As of 2023, service in the U.S. Army is for a Military Service Obligation of 2 to 6 years of active duty with a remaining term in the Individual Ready Reserve. Individuals may also enlist for part-time duty in the Army Reserve or National Guard. Depending on need or fitness to serve, soldiers usually may reenlist for another term, possibly receiving monetary or other incentives.

In the U.S. Army, career soldiers who have served for at least 20 years are eligible to draw on a retirement pension. The size of the pension as a percentage of the soldier's salary usually increases with the length of time served on active duty.
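
As an illustration only (the exact multiplier depends on the retirement system and the soldier's date of entry): under the legacy U.S. "High-36" formula, retired pay is 2.5% of the average of the highest 36 months of basic pay for each year of service, so 20 years corresponds to roughly 50%. A minimal sketch of that arithmetic in Python, with a hypothetical pay figure:

def high36_monthly_pension(years_of_service, high36_avg_monthly_pay):
    # Legacy "High-36" formula: 2.5% of the high-36 average per year served.
    return 0.025 * years_of_service * high36_avg_monthly_pay

# Hypothetical: 20 years of service, $5,000/month high-36 average basic pay
print(high36_monthly_pension(20, 5000))  # 2500.0, i.e. 50% of basic pay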

In media and popular culture

Since the earliest recorded history, soldiers and warfare have been depicted in countless works, including songs, folk tales, stories, memoirs, biographies, novels and other narrative fiction, drama, films, and more recently television and video, comic books, graphic novels, and games. Often these portrayals have emphasized the heroic qualities of soldiers in war, but at times have emphasized war's inherent dangers, confusions, and trauma and their effect on individual soldiers and others.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2003 2023-12-22 00:06:29

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2005) Aeronautical Engineering

Gist

Aeronautical engineering is concerned with the design, manufacture, testing and maintenance of flight-capable machines. These machines include all types of aeroplanes, helicopters, drones, and missiles. Aeronautical engineering is one of two separate branches of aerospace engineering, alongside astronautical engineering.

Summary

Aeronautical engineering is a field of engineering that focuses on designing, developing, testing and producing aircraft. Aeronautical engineers use mathematics, theory and problem-solving abilities to design and build helicopters, planes and drones.

If you’ve ever dreamed of designing the next generation supersonic airplane or watching the biggest jet engine soar, you may have considered a career in aeronautical engineering. Here are some fundamental questions to help you decide if the field is right for you.

What is Aeronautical Engineering?

As David Guo, associate professor of aeronautical engineering for Southern New Hampshire University's (SNHU) School of Engineering, Technology and Aeronautics (SETA), put it, aeronautical engineering is "the branch of engineering that deals with the design, development, testing and production of aircraft and related systems."

Aeronautical engineers are the talent behind these systems. This type of engineering involves applying mathematics, theory, domain knowledge and problem-solving skills to transform flight-related concepts into functioning aeronautical designs that are then built and operated.

In practice, that means aeronautical engineers design, build and test the planes, drones and helicopters you see flying overhead. With an eye on the sky, these workers also remain at the forefront of some of the field’s most exciting innovations — from autonomous airship-fixing robots to high-flying hoverboards and solar-powered Internet drones.

Aeronautical vs. Aerospace Engineering: Are They the Same?

There is a common misconception that both aeronautical and aerospace engineering are the same, but aerospace engineering is actually a broader field that represents two distinct branches of engineering:

* Aeronautical Engineering
* Astronautical Engineering

According to the U.S. Bureau of Labor Statistics (BLS), aeronautical engineers explore systems like helicopters, planes and unmanned aerial vehicles (drones) — any aircraft that operates within Earth’s atmosphere. Astronautical engineers, meanwhile, are more concerned with objects in space: this includes objects and vehicles like satellites, shuttles and rocket ships.

What Do Aeronautical Engineers Do?

“Aeronautical engineers develop, research, manufacture and test proven and new technology in military or civilian aviation,” Guo said. “Common work areas include aircraft design and development, manufacturing and flight experimentation, jet engine production and experimentation, and drone (unmanned aerial system) development.”

According to the American Society of Mechanical Engineers (ASME), as the industry evolves, so does the need for coding. With advanced system software playing a major role in aircraft communication and data collection, a background in computer programming is an increasingly valuable skill for aeronautical engineers.

BLS also cites several specialized career tracks within the field — including structural design, navigation, propulsion, instrumentation, communication and robotics.

Details

Aeronautical engineering is a popular branch of engineering for those who are interested in the design and construction of flight machines and aircraft. However, there are several requirements in terms of skills, educational qualifications and certifications that you must meet in order to become an aeronautical engineer. Knowing more about those requirements may help you make an informed career decision. In this article, we define aeronautical engineering and discover what aeronautical engineers do, their career outlook, salary and skills.

What is aeronautical engineering?

Aeronautical engineering is the science of designing, manufacturing, testing and maintaining flight-capable machines. These machines can include satellites, jets, space shuttles, helicopters, military aircraft and missiles. Aeronautical engineers also are responsible for researching and developing new technologies that make flight machines and vehicles more efficient and function better.

What do aeronautical engineers do?

Aeronautical engineers might have the following duties and responsibilities:

* Planning aircraft project goals, budgets, timelines and work specifications
* Assessing the technical and financial feasibility of project proposals
* Designing, manufacturing and testing different kinds of aircraft
* Developing aircraft defence technologies
* Drafting and finalising new designs for flight machine parts
* Testing and modifying existing aircraft and aircraft parts
* Conducting design evaluations to ensure compliance with environmental and safety guidelines
* Checking damaged or malfunctioning aircraft and finding possible solutions
* Researching new technologies to develop and implement in existing and upcoming aircraft
* Gathering, processing and analysing data to understand aircraft system failures
* Applying processed data in flight simulations to develop better-functioning aircraft
* Writing detailed manuals and protocols for aircraft
* Working on existing and new space exploration and research technologies
* Providing consultancy services for private and military manufacturers to develop and sell aircraft

What is the salary of an aeronautical engineer?

The salary of an aeronautical engineer can vary, depending on their educational qualifications, work experience, specialised skills, employer and location.

Is aeronautical engineering a good career?

If you have a background in mathematics and a desire for discovering the science behind aircraft, then aeronautical engineering can be a good career for you. This profession can give you immense job satisfaction, knowing that you are helping people around the world fly safely. You may also get a chance to travel the world and test new, innovative tools and technologies. With increased focus on reducing noise and improving fuel efficiency of aeroplanes, there are plenty of job opportunities for skilled aeronautical engineers.

You can choose to specialise in the design, testing and maintenance of certain types of aircraft. For instance, some engineers may focus on commercial jets, helicopters or drones, while others may concentrate on military aircraft, rockets, space shuttles, satellites or missiles.

How to become an aeronautical engineer

To become an aeronautical engineer, take the following steps:

1. Gain a diploma in aeronautical engineering
After passing your class 10 examination with maths and science subjects, you can enrol in an aeronautical engineering diploma course. Some courses select candidates based on their marks in the 10th class, and most diploma courses require at least 50% marks. The duration of a diploma course is usually three years. You will study the different aspects of aircraft design, development, manufacture, maintenance and safe operation.

2. Earn a bachelor's degree in aeronautical engineering
To gain admission to a B.Tech course in aeronautical engineering, you need to complete your 10+2 in the science stream, with physics, chemistry and mathematics. You must also clear the Joint Entrance Examination (JEE) Main, which governs admission to institutes such as the National Institutes of Technology (NITs). If you want to get admission into the Indian Institutes of Technology (IITs), you must also clear the JEE Advanced exam.

The four-year engineering course covers general engineering studies and specialised aeronautics topics like mechanical engineering fundamentals, thermodynamics, vibration, aircraft flight dynamics, flight software systems, avionics and aerospace propulsion.

3. Get an associate membership of the Aeronautical Society of India
The Aeronautical Society of India conducts the Associate Membership Examination twice a year. Recognised by the Central Government and the Ministry of Education, it is equivalent to a bachelor's degree in aeronautical engineering. The examination's eligibility criterion is 10+2 with physics, chemistry and maths, with at least 50% marks in each. You are also eligible if you have a diploma in aeronautical engineering, a diploma in aircraft maintenance engineering or a Bachelor of Science degree.

4. Pursue a master's degree in aeronautical engineering
To get into a master's degree programme, you must have a bachelor's degree in aeronautical engineering or a related field. You must also clear the Graduate Aptitude Test in Engineering (GATE). You can be eligible for a master's program if you have a four-year engineering degree, a five-year architecture degree or a master's degree in mathematics, statistics, computer applications or science. Degrees in optical engineering, ceramic engineering, industrial engineering, biomedical engineering, oceanography, metallurgy and chemistry are also acceptable.

5. Obtain a PhD in aeronautical engineering
You must clear the University Grants Commission National Eligibility Test (UGC-NET) to be able to pursue a doctorate degree. A PhD degree can help you to get promoted in your company. It is also helpful if you want to work as a professor in aeronautical engineering colleges. Additionally, you can pursue alternative careers such as AI, robotics, data science and consulting.

6. Do an internship and send out job applications
Doing an internship with aeronautical companies can provide you with sound, on-the-job training and help you grow your contacts in the industry. Many degree programmes offer internships as part of their aeronautical courses. Find out the availability of internships before applying to a college. When sending out job applications, customise your cover letter and resume as per employer requirements.

Additional Information

In the aeronautical engineering major, cadets study aerodynamics, propulsion, flight mechanics, stability and control, aircraft structures, materials and experimental methods. As part of their senior year capstone, cadets select one of two two-course design sequences: aircraft design or aircraft engine design.

A design-build-fly approach enables cadets and professors to dive deep into the aeronautics disciplines while providing a hands-on learning experience. Cadets will work on real-world design problems in our cutting-edge aeronautics laboratory, featuring several wind tunnels and jet engines. Many opportunities exist for cadets to participate in summer research at various universities and companies across the country. The rigors of the aeronautical engineering major prepare cadets to pursue successful engineering and acquisition careers in the Air Force or Space Force.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2004 2023-12-22 22:56:09

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2006) Automotive Engineering

Gist

Automotive engineering is one of the most sophisticated courses in engineering, involving the design, manufacturing, modification and maintenance of automobiles such as buses, cars, trucks and other transportation vehicles.

Summary

Automotive engineering, along with aerospace engineering and naval architecture, is a branch of vehicle engineering, incorporating elements of mechanical, electrical, electronic, software, and safety engineering as applied to the design, manufacture and operation of motorcycles, automobiles, and trucks and their respective engineering subsystems. It also includes the modification of vehicles. The manufacturing domain, which deals with the creation and assembly of the parts of automobiles, is also included. The automotive engineering field is research intensive and involves direct application of mathematical models and formulas. The study of automotive engineering covers designing, developing, fabricating, and testing vehicles or vehicle components from the concept stage to the production stage. Production, development, and manufacturing are the three major functions in this field.

Disciplines:

Automobile engineering

Automobile engineering is a branch of engineering that covers the manufacturing, designing and mechanical mechanisms of automobiles, as well as their operation. It is an introduction to vehicle engineering, which deals with motorcycles, cars, buses, trucks, etc. It draws on mechanical, electronic, software and safety engineering. Some of the engineering attributes and disciplines that are of importance to the automotive engineer include:

Safety engineering: Safety engineering is the assessment of various crash scenarios and their impact on the vehicle occupants. These are tested against very stringent governmental regulations. Some of these requirements include: seat belt and air bag functionality testing, front- and side-impact testing, and tests of rollover resistance. Assessments are done with various methods and tools, including computer crash simulation (typically finite element analysis), crash-test dummy, and partial system sled and full vehicle crashes.
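
As a back-of-the-envelope illustration of the arithmetic behind such assessments (the figures below are hypothetical, not taken from any regulation): in a rigid-barrier impact, the average deceleration follows from the impact speed and the crush distance via a = v² / (2d). A minimal sketch in Python:

# Average deceleration in a rigid-barrier crash (hypothetical figures)
MPH_TO_MS = 0.44704   # 1 mph in metres per second
G = 9.80665           # standard gravity, m/s^2

def avg_decel_g(speed_mph, crush_distance_m):
    v = speed_mph * MPH_TO_MS
    return v * v / (2.0 * crush_distance_m) / G   # a = v^2 / (2d), in g

print(round(avg_decel_g(35, 0.6), 1))  # ~20.8 g at 35 mph with 0.6 m of crush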

Fuel economy/emissions: Fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter. Emissions-testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), and evaporative emissions.
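
Because "distance per volume" (mpg) and "volume per distance" (litres per 100 km) are reciprocal conventions, converting between them is simple arithmetic. A minimal sketch, assuming US gallons:

# Convert fuel economy from US mpg to litres per 100 km
KM_PER_MILE = 1.609344
LITRES_PER_US_GALLON = 3.785411784

def mpg_to_l_per_100km(mpg):
    km_per_litre = mpg * KM_PER_MILE / LITRES_PER_US_GALLON
    return 100.0 / km_per_litre

print(round(mpg_to_l_per_100km(30), 2))  # 30 mpg is about 7.84 L/100 km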

NVH engineering (noise, vibration, and harshness): NVH involves customer feedback (both tactile [felt] and audible [heard]) concerning a vehicle. While a sound can be perceived as a rattle, squeal, or hoot, a tactile response can be seat vibration or a buzz in the steering wheel. This feedback is generated by components either rubbing, vibrating, or rotating. NVH response can be classified in various ways: powertrain NVH, road noise, wind noise, component noise, and squeak and rattle. Note that there are both good and bad NVH qualities. The NVH engineer works either to eliminate bad NVH or to change the "bad NVH" to good (e.g., exhaust tones).
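
For a flavour of the arithmetic used in powertrain NVH work (a sketch, not any specific diagnostic workflow): rotating components excite vibration at multiples ("orders") of shaft speed, so a measured noise frequency can be traced back to a component from the engine RPM.

# Excitation frequency of an engine "order" at a given crankshaft speed
def order_frequency_hz(rpm, order):
    return rpm / 60.0 * order

# A 4-cylinder, 4-stroke engine fires twice per crank revolution (2nd order),
# so at 3000 rpm its dominant firing frequency is:
print(order_frequency_hz(3000, 2))  # 100.0 Hz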

Vehicle electronics: Automotive electronics is an increasingly important aspect of automotive engineering. Modern vehicles employ dozens of electronic systems. These systems are responsible for operational controls such as the throttle, brake and steering controls; as well as many comfort-and-convenience systems such as the HVAC, infotainment, and lighting systems. It would not be possible for automobiles to meet modern safety and fuel-economy requirements without electronic controls.

Performance: Performance is a measurable and testable value of a vehicle's ability to perform in various conditions. Performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate (e.g. standing-start 1/4-mile elapsed time, 0–60 mph, etc.), its top speed, how short and quickly a car can come to a complete stop from a set speed (e.g. 70–0 mph), how much g-force a car can generate without losing grip, recorded lap times, cornering speed, brake fade, etc. Performance can also reflect the amount of control in inclement weather (snow, ice, rain).
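
As a quick worked example of how such figures relate (the numbers are hypothetical): the average acceleration over a 0–60 mph run follows directly from the elapsed time.

# Average acceleration implied by a 0-60 mph time, expressed in g
MPH_TO_MS = 0.44704   # 1 mph in metres per second
G = 9.80665           # standard gravity, m/s^2

def zero_to_sixty_avg_g(elapsed_s):
    return 60 * MPH_TO_MS / elapsed_s / G

print(round(zero_to_sixty_avg_g(6.0), 2))  # a 6.0 s car averages ~0.46 g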

Shift quality: Shift quality is the driver's perception of an automatic transmission shift event. It is influenced by the powertrain (internal combustion engine, transmission) and by the vehicle (driveline, suspension, engine and powertrain mounts, etc.). Shift feel is both a tactile (felt) and audible (heard) response of the vehicle. Shift quality is experienced as various events: transmission shifts are felt as an upshift at acceleration (1–2), or a downshift maneuver in passing (4–2). Shift engagements of the vehicle are also evaluated, as in Park to Reverse.

Durability / corrosion engineering: Durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. Tests include mileage accumulation, severe driving conditions, and corrosive salt baths.

Drivability: Drivability is the vehicle's response to general driving conditions, including cold starts and stalls, RPM dips, idle response, launch hesitations and stumbles, and performance levels.

Cost: The cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up-front tooling and fixed costs associated with developing the vehicle. There are also costs associated with warranty reductions and marketing.

Program timing: To some extent programs are timed with respect to the market, and also to the production-schedules of assembly plants. Any new part in the design must support the development and manufacturing schedule of the model.

Assembly feasibility: It is easy to design a module that is hard to assemble, either resulting in damaged units or poor tolerances. The skilled product-development engineer works with the assembly/manufacturing engineers so that the resulting design is easy and cheap to make and assemble, as well as delivering appropriate functionality and appearance.

Quality management: Quality control is an important factor within the production process, as high quality is needed to meet customer requirements and to avoid expensive recall campaigns. The complexity of components involved in the production process requires a combination of different tools and techniques for quality control. Therefore, the International Automotive Task Force (IATF), a group of the world's leading manufacturers and trade organizations, developed the standard ISO/TS 16949. This standard defines the design, development, production, and (when relevant) installation and service requirements. Furthermore, it combines the principles of ISO 9001 with aspects of various regional and national automotive standards such as AVSQ (Italy), EAQF (France), VDA6 (Germany) and QS-9000 (USA). In order to further minimize risks related to product failures and liability claims for automotive electric and electronic systems, the quality discipline of functional safety according to ISO 26262 is applied.

Since the 1950s, the comprehensive business approach total quality management (TQM) has operated to continuously improve the production process of automotive products and components. Some of the companies who have implemented TQM include Ford Motor Company, Motorola and Toyota Motor Company.

Details

Automotive engineering is a branch of mechanical engineering and an important component of the automobile industry. Automotive engineers design new vehicles and ensure that existing vehicles are up to prescribed safety and efficiency standards. This field of engineering is research intensive and requires educated professionals in automotive engineering specialities. In this article, we discuss what automotive engineering is, what these engineers do and what skills they need and outline the steps to pursue this career path.

What is automotive engineering?

Automotive engineering is a branch of vehicle engineering that focuses on the application, design and manufacture of various types of automobiles. This field of engineering involves the direct application of mathematics and physics concepts in the design and production of vehicles. Engineering disciplines that are relevant in this field include safety engineering, vehicle electronics, quality control, fuel economy and emissions.

What do automotive engineers do?

Automotive engineers, sometimes referred to as automobile engineers, work with other professionals to enhance the technical performance, aesthetics and software components of vehicles. Common responsibilities of an automotive engineer include designing and testing various components of vehicles, including fuel technologies and safety systems. Some automotive engineers also work in the after-sale care of vehicles, making repairs and inspections. They can work on both the interior and exterior components of vehicles. Common duties of an automotive engineer include:

* preparing design specifications
* researching, developing and producing new vehicles or vehicle subsystems
* using computerised models to determine the behaviour and efficiency of a vehicle (see the sketch after this list)
* investigating instances of product failure
* preparing cost estimates for current or new vehicles
* assessing the safety and environmental aspects of an automotive project
* creating plans and drawings for new vehicle products
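
As a toy illustration of such a computerised model (a minimal sketch with hypothetical parameters, not an industry tool): a longitudinal model balances tractive force against aerodynamic drag and rolling resistance to predict speed over time.

# Toy longitudinal vehicle model (hypothetical parameters, SI units)
RHO = 1.225   # air density, kg/m^3
G = 9.81      # gravity, m/s^2
MU = 1.0      # assumed tyre friction limit

def simulate_speed(power_w=100e3, mass_kg=1500.0, cd_a=0.65,
                   c_rr=0.012, dt=0.01, t_end=10.0):
    """Integrate m*dv/dt = F_drive - F_drag - F_roll, power-limited drive."""
    v, t = 0.0, 0.0
    while t < t_end:
        f_drive = min(power_w / max(v, 0.1), MU * mass_kg * G)  # traction cap
        f_drag = 0.5 * RHO * cd_a * v * v                       # aero drag
        f_roll = c_rr * mass_kg * G                             # rolling loss
        v += (f_drive - f_drag - f_roll) / mass_kg * dt
        t += dt
    return v

print(f"speed after 10 s: {simulate_speed():.1f} m/s")  # ~34 m/s here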

Automobile engineers may choose to specialise in a specific sector of this field, such as fluid mechanics, aerodynamics or control systems. The production of an automobile often involves a team of automotive engineers who each specialise in a particular section of vehicular engineering. The work of these engineers is often broken down into three components: design, research and development (R&D) and production.

What skills do automotive engineers require?

If you are interested in becoming an automotive engineer, you may consider developing the following skills:

* Technical knowledge: Automotive engineers require good practical as well as theoretical knowledge of manufacturing processes and mechanical systems. They may have to design products, do lab experiments and complete internships during their study.
* Mathematical skills: Good mathematical skills are also necessary for an automotive engineer, as they have to calculate the power, stress, strength and other aspects of machines; a small worked example follows this list. Along with this, they should also have a well-rounded understanding of the fundamental aspects of automotive engineering.
* Computer skills: Since most engineers work with computers, they benefit from computer literacy. They use computer and software programs to design, manufacture and test vehicles and their components.
* Analytical skills: An automotive engineer may require good analytical skills to figure out how different parts of a vehicle work in sync with each other. They also analyse data and form logical conclusions, which may then be used to make decisions related to manufacturing or design.
* Problem-solving skills: Automotive engineers require good problem-solving skills. They may encounter different issues during the production and manufacturing process of vehicles. They may have to plan effectively, identify problems and find solutions quickly.
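
For instance, one of the most common such calculations converts engine torque and speed into power, P = τω (a worked sketch with hypothetical figures):

import math

# Engine power from torque and crankshaft speed: P = tau * omega
def power_kw(torque_nm, rpm):
    omega = rpm * 2 * math.pi / 60.0   # crankshaft speed in rad/s
    return torque_nm * omega / 1000.0

print(round(power_kw(250, 3000), 1))  # 250 N*m at 3000 rpm is ~78.5 kW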

How to become an automotive engineer

Follow these steps to become an automotive engineer:

1. Graduate from a higher secondary school
To become an automotive engineer, you have to pass higher secondary school with a focus on science subjects like physics, chemistry and maths. After graduating, you can appear for various national and state-level entrance exams to gain admission into engineering colleges. The most prominent entrance exam is the JEE (Joint Entrance Exam). You may require a 50% aggregate score in your board examinations to be eligible to pursue an engineering degree from a reputed institute.

2. Pursue a bachelor's degree
You can pursue an undergraduate engineering degree to develop a good understanding of the fundamental concepts of automotive engineering. The most common degrees include BTech (Bachelor of Technology) in automobile engineering, BTech in automotive design engineering, BTech in mechanical engineering and BE (Bachelor of Engineering) in automobile engineering. A bachelor's degree is the most basic qualification to start a career in this industry.

3. Pursue a master's degree
After completing your undergraduate degree, you can pursue a postgraduate course in automobile engineering. To join a master's course, you may require a BTech or BE in a related stream from a recognised university. Most universities require candidates to clear the GATE (Graduate Aptitude Test in Engineering) exam for MTech (Master of Technology) or ME (Master of Engineering) admissions. Some popular MTech courses are:

* MTech in automobile engineering
* MTech in automotive engineering and manufacturing
* ME in automobile engineering

4. Get certified
Additionally, you can pursue certifications to help you update your knowledge and skills along with changes in the industry. There are a number of colleges that offer good certification programmes and diploma courses. These programmes can be an enriching experience for professionals with work experience as well as graduates without experience. Some popular certification courses are:

* Certificate course in automobile engineering
* Certificate course in automobile repair and driving
* Diploma in automobile engineering
* Postgraduate diploma in automobile engineering

5. Apply for jobs
After completing a degree and gaining hands-on experience, you are eligible for entry-level automotive engineer positions. Most companies provide extensive training programmes for newly hired employees. You can start your career in an entry-level position where you may work under experienced professionals. Once you have gained the necessary experience, skills and expertise, you can apply for higher-level jobs in the field.

Is automotive engineering a good career?

Automotive engineering is a very lucrative career path with abundant opportunities. Automotive engineers can work in specialised areas like the design of vehicles and the enforcement of safety protocols for transport. With the rapid development and expansion of the automobile industry, this career path is apt for creative professionals who enjoy a fast-paced and dynamic work environment. Job prospects in this field are immense, as there are opportunities in the private and public sectors. If you love working on vehicles and enjoy creative problem-solving, this line of work may be a good fit for you.

What is the difference between automobile and automotive engineering?

While the terms "automobile" and "automotive" are often used interchangeably, they are not entirely the same. Automobile refers to four-wheeled vehicles used for transport, while automotive relates to all motor vehicles. Hence, automobile engineers may work specifically on the design and manufacture of cars, whereas automotive engineers deal with all vehicles, including public transport. Both of these branches are sub-branches of vehicle engineering.

Related careers

If you are interested in the job role of an automotive engineer, you may also consider the following career options:

* Automotive engineering technician

These professionals assist automotive engineers by conducting tests on vehicles, inspecting manufactured automobiles, collecting data and designing prototypes. They may test automobile parts and analyse them for efficiency and effectiveness. This position is ideal for individuals who are entering the field and are looking to gain hands-on experience in the industry.

* Automotive design engineer

This type of engineer is primarily involved in improving or designing the functional aspects of a motor vehicle. They need an understanding of both the aesthetic qualities of a vehicle and the materials and engineering components needed to design it. They create designs to improve the visual appeal of vehicles and work in collaboration with automobile engineers.

* Automobile designer

An automobile designer is a professional who performs research and designs new vehicles. These individuals focus on creating new car designs that incorporate the latest safety and operational technologies. They ensure that designs meet mandated regulations and offer consumers a comfortable and aesthetically pleasing product. They may design all road vehicles including cars, trucks, motorcycles and buses.

* Manufacturing engineer

Manufacturing engineers are responsible for creating and assembling the parts of automobiles. They oversee the design, layout and specifications of various automobile components, and they ensure that appropriate safety measures are in place during the manufacturing process.

Additional Information

Working as an automotive engineer, you'll design new products and in some cases modify those currently in use. You'll also identify and solve engineering problems.

You'll need to have a combination of engineering and commercial skills to be able to deliver projects within budget. Once you've built up experience, it's likely you'll specialise in a particular area, for example, structural design, exhaust systems or engines.

Typically, you'll focus on one of three main areas:

* design
* production
* research and development.

Responsibilities

Your tasks will depend on your specialist area of work, but it's likely you'll need to:

* use computer-aided design (CAD) packages to develop ideas and produce designs
* decide on the most appropriate materials for component production
* solve engineering problems using mechanical, electrical, hydraulic, thermodynamic or pneumatic principles
* build prototypes of components and test their performance, weaknesses and safety
* take into consideration changing customer needs and government emissions regulations when developing new designs and manufacturing procedures
* prepare material, cost and timing estimates, reports and design specifications
* supervise and inspect the installation and adjustment of mechanical systems in industrial plants
* investigate mechanical failures or unexpected maintenance problems
* liaise with suppliers and handle supply chain management issues
* manage projects, including budgets, production schedules, resources and staff, and supervise quality control
* inspect and test drive vehicles and check for faults.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2005 2023-12-23 21:03:24

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2007) Muscle Atrophy

Gist

Muscular atrophy is the decrease in size and wasting of muscle tissue. Muscles that lose their nerve supply can atrophy and simply waste away. People may lose 20 to 40 percent of their muscle and, along with it, their strength as they age.

Summary

Muscle atrophy is the loss of skeletal muscle mass. It can be caused by immobility, aging, malnutrition, medications, or a wide range of injuries or diseases that impact the musculoskeletal or nervous system. Muscle atrophy leads to muscle weakness and causes disability.

Disuse causes rapid muscle atrophy and often occurs during injury or illness that requires immobilization of a limb or bed rest. Depending on the duration of disuse and the health of the individual, this may be fully reversed with activity. Malnutrition first causes fat loss but may progress to muscle atrophy in prolonged starvation and can be reversed with nutritional therapy. In contrast, cachexia is a wasting syndrome caused by an underlying disease such as cancer that causes dramatic muscle atrophy and cannot be completely reversed with nutritional therapy. Sarcopenia is age-related muscle atrophy and can be slowed by exercise. Finally, diseases of the muscles such as muscular dystrophy or myopathies can cause atrophy, as well as damage to the nervous system such as in spinal cord injury or stroke. Thus, muscle atrophy is usually a finding (sign or symptom) in a disease rather than being a disease by itself. However, some syndromes of muscular atrophy are classified as disease spectrums or disease entities rather than as clinical syndromes alone, such as the various spinal muscular atrophies.

Muscle atrophy results from an imbalance between protein synthesis and protein degradation, although the mechanisms are incompletely understood and are variable depending on the cause. Muscle loss can be quantified with advanced imaging studies but this is not frequently pursued. Treatment depends on the underlying cause but will often include exercise and adequate nutrition. Anabolic agents may have some efficacy but are not often used due to side effects. There are multiple treatments and supplements under investigation but there are currently limited treatment options in clinical practice. Given the implications of muscle atrophy and limited treatment options, minimizing immobility is critical in injury or illness.

Signs and symptoms

The hallmark sign of muscle atrophy is loss of lean muscle mass. This change may be difficult to detect due to obesity, changes in fat mass or edema. Changes in weight, limb or waist circumference are not reliable indicators of muscle mass changes.

The predominant symptom is increased weakness which may result in difficulty or inability in performing physical tasks depending on what muscles are affected. Atrophy of the core or leg muscles may cause difficulty standing from a seated position, walking or climbing stairs and can cause increased falls. Atrophy of the throat muscles may cause difficulty swallowing and diaphragm atrophy can cause difficulty breathing. Muscle atrophy can be asymptomatic and may go undetected until a significant amount of muscle is lost.

Details

Muscle atrophy is when muscles waste away. It’s usually caused by a lack of physical activity.

When a disease or injury makes it difficult or impossible for you to move an arm or leg, the lack of mobility can result in muscle wasting. Over time, without regular movement, your arm or leg can start to appear smaller but not shorter than the one you’re able to move.

In some cases, muscle wasting can be reversed with a proper diet, exercise, or physical therapy.


Symptoms of muscle atrophy

You may have muscle atrophy if:

* One of your arms or legs is noticeably smaller than the other.
* You’re experiencing marked weakness in one limb.
* You’ve been physically inactive for a very long time.

Call your doctor to schedule a complete medical examination if you believe you may have muscle atrophy or if you are unable to move normally. You may have an undiagnosed condition that requires treatment.

Causes of muscle atrophy

Unused muscles can waste away if you’re not active. But even after atrophy begins, it can often be reversed with exercise and improved nutrition.

Muscle atrophy can also happen if you’re bedridden or unable to move certain body parts due to a medical condition. Astronauts, for example, can experience muscle atrophy after a few days of weightlessness.

Other causes of muscle atrophy include:

* lack of physical activity for an extended period of time
* aging
* alcohol-associated myopathy, pain and weakness in muscles due to excessive drinking over long periods of time
* burns
* injuries, such as a torn rotator cuff or broken bones
* malnutrition
* spinal cord or peripheral nerve injuries
* stroke
* long-term corticosteroid therapy

Some medical conditions can cause muscles to waste away or can make movement difficult, leading to muscle atrophy.
These include:

* amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, which affects nerve cells that control voluntary muscle movement
* dermatomyositis, which causes muscle weakness and skin rash
* Guillain-Barré syndrome, an autoimmune condition that leads to nerve inflammation and muscle weakness
* multiple sclerosis, an autoimmune condition in which the body destroys the protective coverings of nerves
* muscular dystrophy, an inherited condition that causes muscle weakness
* neuropathy, damage to a nerve or nerve group, resulting in loss of sensation or function
* osteoarthritis, which causes reduced motion in the joints
* polio, a viral disease affecting muscle tissue that can lead to paralysis
* polymyositis, an inflammatory disease
* rheumatoid arthritis, a chronic inflammatory autoimmune condition that affects the joints
* spinal muscular atrophy, a hereditary condition causing arm and leg muscles to waste away

How is muscle atrophy diagnosed?

If muscle atrophy is caused by another condition, you may need to undergo testing to diagnose the condition.

Your doctor will request your complete medical history. You will likely be asked to:

* tell them about old or recent injuries and previously diagnosed medical conditions
* list prescriptions, over-the-counter medications, and supplements you’re taking
* give a detailed description of your symptoms

Your doctor may also order tests to help with the diagnosis and to rule out certain diseases. These tests may include:

* blood tests
* X-rays
* magnetic resonance imaging (MRI)
* computed tomography (CT) scan
* nerve conduction studies
* muscle or nerve biopsy
* electromyography (EMG)

Your doctor may refer you to a specialist depending on the results of these tests.

How is muscle atrophy treated?

Treatment will depend on your diagnosis and the severity of your muscle loss. Any underlying medical conditions must be addressed. Common treatments for muscle atrophy include:

* exercise
* physical therapy
* ultrasound therapy
* surgery
* dietary changes

Recommended exercises might include water exercises to help make movement easier.

Physical therapists can teach you the correct ways to exercise. They can also move your arms and legs for you if you have trouble moving.

Ultrasound therapy is a noninvasive procedure that uses sound waves to aid in healing.

If your tendons, ligaments, skin, or muscles are too tight and prevent you from moving, surgery may be necessary. This condition is called contracture deformity.

Surgery may be able to correct contracture deformity if your muscle atrophy is due to malnutrition. It may also be able to correct your condition if a torn tendon caused your muscle atrophy.

If malnutrition is the cause of muscle atrophy, your doctor may suggest dietary changes or supplements.

Takeaway

Muscle atrophy can often be reversed through regular exercise and proper nutrition in addition to getting treatment for the condition that’s causing it.

Additional Information

Muscle atrophy is the wasting (thinning) or loss of muscle tissue.

Causes

There are three types of muscle atrophy: physiologic, pathologic, and neurogenic.

Physiologic atrophy is caused by not using the muscles enough. This type of atrophy can often be reversed with exercise and better nutrition. People who are most affected are those who:

* Have seated jobs, health problems that limit movement, or decreased activity levels
* Are bedridden
* Cannot move their limbs because of stroke or other brain disease
* Are in a place that lacks gravity, such as during space flights

Pathologic atrophy is seen with aging, starvation, and diseases such as Cushing disease (because of taking too much of the medicines called corticosteroids).

Neurogenic atrophy is the most severe type of muscle atrophy. It can result from an injury to, or disease of, a nerve that connects to the muscle. This type of muscle atrophy tends to occur more suddenly than physiologic atrophy.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2006 2023-12-24 16:51:16

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2008) Scalpel

Gist

A scalpel is a bladed surgical instrument used to make cuts into the body. This is a very sharp instrument and comes in various sizes for different types of cuts and surgeries.

Details

A scalpel, lancet, or bistoury is a small and extremely sharp bladed instrument used for surgery, anatomical dissection, podiatry and various handicrafts. A lancet is a double-edged scalpel.

Scalpel blades are usually made of hardened and tempered steel, stainless steel, or high carbon steel; in addition, titanium, ceramic, diamond and even obsidian knives are not uncommon. For example, when performing surgery under MRI guidance, steel blades are unusable (the blades would be drawn to the magnets and would also cause image artifacts). Historically, the preferred material for surgical scalpels was silver. Scalpel blades are also offered by some manufacturers with a zirconium nitride–coated edge to improve sharpness and edge retention. Others manufacture blades that are polymer-coated to enhance lubricity during a cut.

Scalpels may be single-use disposable or re-usable. Re-usable scalpels can have permanently attached blades that can be sharpened or, more commonly, removable single-use blades. Disposable scalpels usually have a plastic handle with an extensible blade (like a utility knife) and are used once, then the entire instrument is discarded. Scalpel blades are usually individually packed in sterile pouches but are also offered non-sterile.

Alternatives to scalpels in surgical applications include electrocautery and lasers.

Types:

Surgical

Surgical scalpels consist of two parts, a blade and a handle. The handles are often reusable, with the blades being replaceable. In medical applications, each blade is only used once (sometimes just for a single, small cut).

The handle is also known as a "B.P. handle", named after Charles Russell Bard and Morgan Parker, founders of the Bard-Parker Company. Morgan Parker patented the 2-piece scalpel design in 1915, and Bard-Parker developed a method of cold sterilization that, unlike the previously used heat-based method, would not dull the blades.

Blades are manufactured with a corresponding fitment size so that they fit on only one size handle.

A lancet has a double-edged blade and a pointed end for making small incisions or drainage punctures.

Handicraft

Graphical and model-making scalpels tend to have round handles, with textured grips (either knurled metal or soft plastic). The blade is usually flat and straight, allowing it to be run easily against a straightedge to produce straight cuts.

Safety

Rising awareness of the dangers of sharps in a medical environment around the beginning of the 21st century led to the development of various methods of protecting healthcare workers from accidental cuts and puncture wounds. According to the Centers for Disease Control and Prevention, as many as 1,000 people were subjected to accidental needle sticks and lacerations each day in the United States while providing medical care. Additionally, surgeons can expect to suffer hundreds of such injuries over the course of their career. Scalpel blade injuries were among the most frequent sharps injuries, second only to needlesticks, making up 7 to 8 percent of all sharps injuries in 2001.

"Scalpel Safety" is a term coined to inform users that there are choices available to them to ensure their protection from this common sharps injury.

Safety scalpels are becoming increasingly popular as their prices come down and also on account of legislation such as the Needle Stick Prevention Act, which requires hospitals to minimize the risk of pathogen transmission through needle or scalpel-related accidents.

There are essentially two kinds of disposable safety scalpels offered by various manufacturers, classified as either retractable-blade or retractable-sheath types. Retractable-blade versions, made by companies such as OX Med Tech, DeRoyal, Jai Surgicals, Swann Morton, and PenBlade, are more intuitive to use because of their similarity to a standard box-cutter. Retractable-sheath versions have a much stronger ergonomic feel for doctors and are made by companies such as Aditya Dispomed, Aspen Surgical and Southmedic. A few companies have also started to offer a safety scalpel with a reusable metal handle. In such models, the blade is usually protected in a cartridge. Such systems usually require a custom handle, and the price of blades and cartridges is considerably higher than for conventional surgical blades.

However, CDC studies show that up to 87% of active medical devices are not activated. Safety scalpels are active devices, so the risk of non-activation remains significant. One study indicated that there were actually four times more injuries with safety scalpels than with reusable scalpels.

There are various scalpel blade removers on the market that allow users to remove blades from the handle safely, instead of dangerously using fingers or forceps. In the medical field, when activation rates are taken into account, the combination of a single-handed scalpel blade remover with a passing tray or a neutral zone was as safe as, and up to five times safer than, a safety scalpel. Some companies offer a single-handed scalpel blade remover that complies with regulatory requirements such as US Occupational Safety and Health Administration standards.

The use of both safety scalpels and a single-handed blade remover, combined with a hands-free passing technique, is potentially effective in reducing scalpel blade injuries. It is up to employers and scalpel users to consider and adopt safer and more effective scalpel safety measures when feasible.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2007 2023-12-25 21:23:13

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2009) Nursing

Gist

Nursing is a profession within the healthcare sector focused on the care of individuals, families, and communities so they may attain, maintain, or recover optimal health and quality of life. Nurses can be differentiated from other healthcare providers by their approach to patient care, training, and scope of practice.

Summary

Nursing is a profession within the healthcare sector focused on the care of individuals, families, and communities so they may attain, maintain, or recover optimal health and quality of life. Nurses can be differentiated from other healthcare providers by their approach to patient care, training, and scope of practice. Nurses practice in many specialties with differing levels of prescription authority. Nurses comprise the largest component of most healthcare environments, but there is evidence of international shortages of qualified nurses. Nurses collaborate with other healthcare providers such as physicians, nurse practitioners, physical therapists, and psychologists. There is a distinction between nurses and nurse practitioners; in the U.S., the latter are nurses with a graduate degree in advanced practice nursing and, unlike the former, are permitted to prescribe medications. They practice independently in a variety of settings in more than half of the United States. Since the postwar period, nurse education has undergone a process of diversification towards advanced and specialized credentials, and many of the traditional regulations and provider roles are changing.

Nurses develop a plan of care that focuses on treating illness to improve quality of life, working collaboratively with physicians, therapists, the patient, the patient's family, and other team members. In the United Kingdom and the United States, clinical nurse specialists and nurse practitioners diagnose health problems and prescribe the correct medications and other therapies, depending on particular state regulations. Nurses may help coordinate the patient care performed by other members of a multidisciplinary healthcare team such as therapists, medical practitioners, and dietitians. Nurses provide care both interdependently, for example, with physicians, and independently as nursing professionals. In addition to providing care and support, nurses educate the public and promote health and wellness.

Details

Nursing is a profession that assumes responsibility for the continuous care of the sick, the injured, the disabled, and the dying. Nursing is also responsible for encouraging the health of individuals, families, and communities in medical and community settings. Nurses are actively involved in health care research, management, policy deliberations, and patient advocacy. Nurses with postbaccalaureate preparation assume independent responsibility for providing primary health care and specialty services to individuals, families, and communities.

Professional nurses work both independently and in collaboration with other health care professionals such as physicians. Professional nurses supervise the work of nurses who have limited licenses, such as licensed practical nurses (LPNs) in the United States and enrolled nurses (ENs) in Australia. Professional nurses also oversee the work of nursing assistants in various settings.

Nursing is the largest, the most diverse, and one of the most respected of all the health care professions. There are more than 2.9 million registered nurses in the United States alone, and many more millions worldwide. While true demographic representation remains an elusive goal, nursing does have a higher proportional representation of racial and ethnic minorities than other health care professions. In some countries, however, men remain significantly underrepresented.

The demand for nursing remains high, and projections suggest that such demand will substantively increase. Advances in health care technology, rising expectations of people seeking care, and reorganization of health care systems require a greater number of highly educated professionals. Demographic changes, such as large aging populations in many countries of the world, also fuel this demand.

History of nursing

Although the origins of nursing predate the mid-19th century, the history of professional nursing traditionally begins with Florence Nightingale. Nightingale, the well-educated daughter of wealthy British parents, defied social conventions and decided to become a nurse. The nursing of strangers, either in hospitals or in their homes, was not then seen as a respectable career for well-bred ladies, who, if they wished to nurse, were expected to do so only for sick family and intimate friends. In a radical departure from these views, Nightingale believed that well-educated women, using scientific principles and informed education about healthy lifestyles, could dramatically improve the care of sick patients. Moreover, she believed that nursing provided an ideal independent calling full of intellectual and social freedom for women, who at that time had few other career options.

In 1854 Nightingale had the opportunity to test her beliefs during Britain’s Crimean War. Newspaper stories reporting that sick and wounded Russian soldiers nursed by religious orders fared much better than British soldiers inflamed public opinion. In response, the British government asked Nightingale to take a small group of nurses to the military hospital at Scutari (modern-day Üsküdar, Turkey). Within days of their arrival, Nightingale and her nurses had reorganized the barracks hospital in accordance with 19th-century science: walls were scrubbed for sanitation, windows opened for ventilation, nourishing food prepared and served, and medications and treatments efficiently administered. Within weeks death rates plummeted, and soldiers were no longer sickened by infectious diseases arising from poor sanitary conditions. Within months a grateful public knew of the work of the “Lady with the Lamp,” who made nightly rounds comforting the sick and wounded. By the end of the 19th century, the entire Western world shared Nightingale’s belief in the worth of educated nurses.

Nightingale’s achievements overshadowed other ways to nurse the sick. For centuries, most nursing of the sick had taken place at home and had been the responsibility of families, friends, and respected community members with reputations as effective healers. During epidemics, such as cholera, typhus, and smallpox, men took on active nursing roles. For example, Stephen Girard, a wealthy French-born banker, won the hearts of citizens of his adopted city of Philadelphia for his courageous and compassionate nursing of the victims of the 1793 yellow fever epidemic.

As urbanization and industrialization spread, those without families to care for them found themselves in hospitals where the quality of nursing care varied enormously. Some patients received excellent care. Women from religious nursing orders were particularly known for the quality of the nursing care they provided in the hospitals they established. Other hospitals depended on recovering patients or hired men and women for the nursing care of patients. Sometimes this care was excellent; other times it was deplorable, and the unreliability of hospital-based nursing care became a particular problem by the late 19th century, when changes in medical practices and treatments required competent nurses. The convergence of hospitals’ needs, physicians’ wishes, and women’s desire for meaningful work led to a new health care professional: the trained nurse.

Hospitals established their own training schools for nurses. In exchange for lectures and clinical instructions, students provided the hospital with two or three years of skilled free nursing care. This hospital-based educational model had significant long-term implications. It bound the education of nurses to hospitals rather than colleges, a tie that was not definitively broken until the latter half of the 20th century. The hospital-based training model also reinforced segregation in society and in the health care system. For instance, African American student nurses were barred from almost all American hospitals and training schools. They could seek training only in schools established by African American hospitals. Most of all, the hospital-based training model strengthened the cultural stereotyping of nursing as women’s work. Only a few hospitals provided training to maintain men’s traditional roles within nursing.

Still, nurses transformed hospitals. In addition to the skilled, compassionate care they gave to patients, they established an orderly, routine, and systemized environment within which patients healed. They administered increasingly complicated treatments and medication regimes. They maintained the aseptic and infection-control protocols that allowed more complex and invasive surgeries to proceed. In addition, they experimented with different models of nursing interventions that humanized increasingly technical and impersonal medical procedures.

American Red Cross poster promoting a Christmas seal campaign to raise money to fight tuberculosis, c. 1919.

Outside hospitals, trained nurses quickly became critical in the fight against infectious diseases. In the early 20th century, the newly discovered “germ theory” of disease (the knowledge that many illnesses were caused by bacteria) caused considerable alarm in countries around the world. Teaching methods of preventing the spread of diseases, such as tuberculosis, pneumonia, and influenza, became the domain of the visiting nurses in the United States and the district nurses in the United Kingdom and Europe. These nurses cared for infected patients in the patients’ homes and taught families and communities the measures necessary to prevent spreading the infection. They were particularly committed to working with poor and immigrant communities, which often had little access to other health care services. The work of these nurses contributed to a dramatic decline in the mortality and morbidity rates from infectious diseases for children and adults.

At the same time, independent contractors called private-duty nurses cared for sick individuals in their homes. These nurses performed important clinical work and supported families who had the financial resources to afford care, but the unregulated health care labour market left them vulnerable to competition from both untrained nurses and each year’s class of newly graduated trained nurses. Very soon, the supply of private-duty nurses was greater than the demand from families. At the turn of the 20th century, nurses in industrialized countries began to establish professional associations to set standards that differentiated the work of trained nurses from both assistive-nursing personnel and untrained nurses. More important, they successfully sought licensing protection for the practice of registered nursing. Later on, nurses in some countries turned to collective bargaining and labour organizations to assist them in asserting their and their patients’ rights to improve conditions and make quality nursing care possible.

By the mid-1930s the increasing technological and clinical demands of patient care, the escalating needs of patients for intensive nursing, and the resulting movement of such care out of homes and into hospitals demanded hospital staffs of trained rather than student nurses. By the mid-1950s hospitals were the largest single employer of registered nurses. This trend continues, although as changes in health care systems have reemphasized care at home, a proportionately greater number of nurses work in outpatient clinics, home care, public health, and other community-based health care organizations.

Other important changes in nursing occurred during the latter half of the 20th century. The profession grew more diverse. For example, in the United States, the National Association of Colored Graduate Nurses (NACGN) capitalized on the acute shortage of nurses during World War II and successfully pushed for the desegregation of both the military nursing corps and the nursing associations. The American Nurses Association (ANA) desegregated in 1949, one of the first national professional associations to do so. As a result, in 1951, feeling its goals fulfilled, the NACGN dissolved. But by the late 1960s some African American nurses felt that the ANA had neither the time nor the resources to adequately address all their concerns. The National Black Nurses Association (NBNA) formed in 1971 as a parallel organization to the ANA.

Nursing’s educational structure also changed. Dependence on hospital-based training schools declined, and those schools were replaced with collegiate programs either in community or technical colleges or in universities. In addition, more systematic and widespread programs of graduate education began to emerge. These programs prepare nurses not only for roles in management and education but also for roles as clinical specialists and nurse practitioners. Nurses no longer had to seek doctoral degrees in fields other than nursing. By the 1970s nurses were establishing their own doctoral programs, emphasizing the nursing knowledge and science and research needed to address pressing nursing care and care-delivery issues.

During the second half of the 20th century, nurses responded to rising numbers of sick patients with innovative reorganizations of their patterns of care. For example, critical care units in hospitals began when nurses started grouping their most critically ill patients together to provide more effective use of modern technology. In addition, experiments with models of progressive patient care and primary nursing reemphasized the responsibility of one nurse for one patient in spite of the often-overwhelming bureaucratic demands by hospitals on nurses’ time.

The nursing profession also has been strengthened by its increasing emphasis on national and international work in developing countries and by its advocacy of healthy and safe environments. The international scope of nursing is supported by the World Health Organization (WHO), which recognizes nursing as the backbone of most health care systems around the world.

The practice of nursing:

Scope of nursing practice

According to the International Council of Nurses (ICN), the scope of nursing practice “encompasses autonomous and collaborative care of individuals of all ages, families, groups, and communities, sick or well and in all settings.” National nursing associations further clarify the scope of nursing practice by establishing particular practice standards and codes of ethics. National and state agencies also regulate the scope of nursing practice. Together, these bodies set forth legal parameters and guidelines for the practice of nurses as clinicians, educators, administrators, or researchers.

Education for nursing practice

Nurses enter practice as generalists. They care for individuals and families of all ages in homes, hospitals, schools, long-term-care facilities, outpatient clinics, and medical offices. Many countries require three to four years of education at the university level for generalist practice, although variations exist. For example, in the United States, nurses can enter generalist practice through a two-year program in a community college or a four-year program in a college or university.

Preparation for specialization in nursing or advanced nursing practice usually occurs at the master’s level. A college or university degree in nursing is required for entrance to most master’s programs. These programs emphasize the assessment and management of illnesses, pharmacology, health education, and supervised practice in specialty fields, such as pediatrics, mental health, women’s health, community health, or geriatrics.

Research preparation in nursing takes place at the doctoral level. Coursework emphasizes nursing knowledge and science and research methods. An original and substantive research study is required for completion of the doctoral degree.

Forms of general nursing practice:

Hospital-based nursing practice

Hospital nursing is perhaps the most familiar of all forms of nursing practice. Within hospitals, however, there are many different types of practices. Some nurses care for patients with illnesses such as diabetes or heart failure, whereas others care for patients before, during, and after surgery or in pediatric, psychiatric, or childbirth units. Nurses work in technologically sophisticated critical care units, such as intensive care or cardiac care units. They work in emergency departments, operating rooms, and recovery rooms, as well as in outpatient clinics. The skilled care and comfort nurses provide patients and families are only a part of their work. They are also responsible for teaching individuals and families ways to manage illnesses or injuries during recovery at home. When necessary, they teach patients ways to cope with chronic conditions. Most hospital-based nurses are generalists. Those with advanced nursing degrees provide clinical oversight and consultation, work in management, and conduct patient-care research.

Community health nursing practice

Community health nursing incorporates varying titles to describe the work of nurses in community settings. Over the past centuries and in different parts of the world, community health nurses were called district nurses, visiting nurses, public health nurses, home-care nurses, and community health nurses. Today community health nursing and public health nursing are the most common titles used by nurses whose practices focus on promoting and protecting the health of populations. Knowledge from nursing, social, and public health sciences informs community health nursing practices. In many countries, ensuring that needed health services are provided to the most vulnerable and disadvantaged groups is central to community health nursing practice. In the United States, community health nurses work in a variety of settings, including state and local health departments, school health programs, migrant health clinics, neighbourhood health centres, senior centres, occupational health programs, nursing centres, and home care programs. Care at home is often seen as a preferred alternative for caring for the sick. Today home-care nurses provide very sophisticated, complex care in patients’ homes. Globally, home care is being examined as a solution to the needs of the growing numbers of elderly requiring care.

Mental health nursing practice

Mental health (or psychiatric) nursing practice concentrates on the care of those with emotional or stress-related concerns. Nurses practice in inpatient units of hospitals or in outpatient mental health clinics, and they work with individuals, groups, and families. Advanced-practice mental health nurses also provide psychotherapy to individuals, groups, and families in private practice, consult with community organizations to provide mental health support, and work with other nurses in both inpatient and outpatient settings to meet the emotional needs of patients and families struggling with physical illnesses or injuries.

The care of children

The care of children, often referred to as pediatric nursing, focuses on the care of infants, children, and adolescents. The care of families, the most important support in children’s lives, is also a critically important component of the care of children. Pediatric nurses work to ensure that the normal developmental needs of children and families are met even as they work to treat the symptoms of serious illnesses or injuries. These nurses also work to promote the health of children through immunization programs, child abuse interventions, nutritional and physical activity education, and health-screening initiatives. Both generalist and specialist pediatric nurses work in hospitals, outpatient clinics, schools, day-care centres, and almost anywhere else children are to be found.

The care of women

The care of women, especially of childbearing and childrearing women (often called maternal-child nursing), has long been a particular nursing concern. As early as the 1920s, nurses worked with national and local governments, private charities, and other concerned professionals to ensure that mothers and children received proper nutrition, social support, and medical care. Later, nurses began working with national and international agencies to guarantee rights to adequate health care, respect for human rights, protection against violence, access to quality reproductive health services, and nutritional and educational support. Generalist and specialist nurses caring for women work on obstetrical and gynecological units in hospitals and in a variety of outpatient clinics, medical offices, and policy boards. Many have particular expertise in such areas as osteoporosis, breast-feeding support, domestic violence, and mental health issues of women.

Geriatric nursing practice

Geriatric nursing is one of the fastest-growing areas of nursing practice. This growth matches demographic need. For example, projections in the United States suggest that longer life expectancies and the impact of the “baby boom” generation will result in a significant increase in the number of individuals over age 65. In 2005 individuals over 65 accounted for about 13 percent of the total population; however, they are expected to account for almost 20 percent of the total population by 2030. Moreover, those over 65 use more health care and nursing services than any other demographic group. Most schools of nursing incorporate specific content on geriatric nursing in their curricula. Increasingly, all generalist nurses are prepared to care for elderly patients in a variety of settings including hospitals, outpatient clinics, medical offices, nursing homes, rehabilitation facilities, assisted living facilities, and individuals’ own homes. Specialists concentrate on more specific aspects of elder care, including maintaining function and quality of life, delivering mental health services, providing environmental support, managing medications, reducing the risks for problems such as falling, confusion, skin breakdown, and infections, and attending to the ethical issues associated with frailty and vulnerability.

Advanced nursing practice:

Nurse practitioners

Nurse practitioners are prepared at the master’s level in universities to provide a broad range of diagnostic and treatment services to individuals and families. This form of advanced nursing practice began in the United States in the 1960s, following the passage of health care legislation (Medicare and Medicaid) that guaranteed citizens over age 65 and low-income citizens access to health care services. In response, some nurses, working in collaboration with physicians, obtained additional training and expanded their practice by assuming responsibility for the diagnosis and treatment of common acute and stable chronic illnesses of children and adults. Initially, nurse practitioners worked in primary care settings; there they treated essentially healthy children who experienced routine colds, infections, or developmental issues, performed physical exams on adults, and worked with both individuals and families to ensure symptom stability in such illnesses as diabetes, heart disease, and emphysema. Today nurse practitioners are an important component of primary health care services, and their practice has expanded into specialty areas as well. Specialized nurse practitioners often work in collaboration with physicians in emergency rooms, intensive care units of hospitals, nursing homes, and medical practices.

Clinical nursing specialists

Clinical nursing specialists are prepared in universities at the master’s level. Their clinically focused education is in particular specialties, such as neurology, cardiology, rehabilitation, or psychiatry. Clinical nursing specialists may provide direct care to patients with complex nursing needs, or they may provide consultation to generalist nurses. Clinical nursing specialists also direct continuing staff education programs. They usually work in hospitals and outpatient clinics, although some clinical nursing specialists establish independent practices.

Nurse midwives

Nurse midwives are rooted in the centuries-old tradition of childbirth at home. Midwives, rather than obstetricians, have historically been the primary provider of care to birthing women, and they remain so in many parts of the industrialized and developing world. In the United States in the 1930s, some nurses began combining their skills with those of midwives to offer birthing women alternatives to obstetrical care. The new specialty of nurse-midwifery grew slowly, serving mainly poor and geographically disadvantaged women and their families. The women’s movement beginning in the 1960s brought a surge in demand for nurse-midwives from women who wanted both the naturalness of a traditional delivery and the safety of available technology if any problems developed. Numbers of nurse-midwives in the United States grew from fewer than 300 in 1963 to over 7,000 in 2007. Today most nurse-midwives are prepared in universities at the master’s level. They deliver nearly 300,000 babies every year, and, in contrast to traditional midwives, who deliver in homes, nurse-midwives do so mainly in hospitals and formal birthing centres. Global demand for nurse-midwifery care is projected to grow significantly.

Nurse anesthetists

Nurse anesthetists began practicing in the late 19th century. Trained nurses, who at that time were becoming an increasingly important presence in operating rooms, assumed responsibility for both administering anesthesia and providing individualized patient monitoring for any reactions during surgical procedures. Nurse anesthetists proved their value during World War I, when they were the sole providers of anesthesia in all military hospitals. Today nurse anesthetists are established health care providers. In the United States alone they provide two-thirds of all anesthesia services and are the sole providers of anesthesia services in most rural American hospitals. Nurse anesthetists train at the postgraduate level, either in master’s programs in schools of nursing or in affiliated programs in departments of health sciences. They work everywhere anesthesia is delivered: in operating rooms, obstetrical delivery suites, ambulatory surgical centres, and medical offices.

Licensing

Given the critical importance of standardized and safe nursing care, all countries have established mechanisms for ensuring minimal qualifications for entry into practice and continuing nursing education. Those countries with centralized health systems, such as many European and South American countries, enact national systems for nurse licensing. Countries with decentralized and privatized systems such as the United States cede to states and provinces the authority to determine minimal nurse licensing requirements. In most instances licenses are time-limited and can be revoked if circumstances warrant such an action. Licensing renewal often depends on some method of certifying continued competence.

National organizations

In virtually every country of the world, there is a national nursing organization that promotes standards of practice, advocates for safe patient care, and articulates the profession’s position on pressing health care issues to policy boards, government agencies, and the general public. Many national nursing organizations also have associated journals that publicize research findings, disseminate timely clinical information, and discuss outcomes of policy initiatives. In addition, most nursing specialty and advanced practice groups have their own organizations and associated journals that reach both national and international audiences. There are a wide variety of nursing special-interest groups. Different unions also engage in collective bargaining and labour organizing on behalf of nurses.

International organizations

The International Council of Nurses (ICN), a federation of over 128 national nurses associations based in Geneva, speaks for nursing globally. The World Health Organization (WHO) has had a long-standing interest in promoting the role of nursing, particularly as independent community-based providers of primary health care in Third World and other underserved countries. The International Committee of the Red Cross (ICRC) and its national affiliates have long recognized the critical role of nursing in disaster relief and ongoing health education projects.

Additional Information

21st-century nursing is the glue that holds a patient’s health care journey together. Across the entire patient experience, and wherever there is someone in need of care, nurses work tirelessly to identify and protect the needs of the individual.

Beyond the time-honored reputation for compassion and dedication lies a highly specialized profession, which is constantly evolving to address the needs of society. From ensuring the most accurate diagnoses to the ongoing education of the public about critical health issues, nurses are indispensable in safeguarding public health.

Nursing can be described as both an art and a science; a heart and a mind. At its heart lies a fundamental respect for human dignity and an intuition for a patient’s needs. This is supported by the mind, in the form of rigorous core learning. Due to the vast range of specialisms and complex skills in the nursing profession, each nurse will have specific strengths, passions, and expertise.

However, nursing has a unifying ethos: in assessing a patient, nurses do not just consider test results. Through the critical thinking exemplified in the nursing process (see below), nurses use their judgment to integrate objective data with subjective experience of a patient’s biological, physical and behavioral needs. This ensures that every patient, from city hospital to community health center; state prison to summer camp, receives the best possible care regardless of who they are, or where they may be.

What exactly do nurses do?

In a field as varied as nursing, there is no typical answer. Responsibilities can range from making acute treatment decisions to providing inoculations in schools. The key unifying characteristic in every role is the skill and drive that it takes to be a nurse. Through long-term monitoring of patients’ behavior and knowledge-based expertise, nurses are best placed to take an all-encompassing view of a patient’s wellbeing.

What types of nurses are there?

All nurses complete a rigorous program of extensive education and study, and work directly with patients, families, and communities using the core values of the nursing process. In the United States today, nursing roles can be divided into three categories by the specific responsibilities they undertake.

Registered Nurses

Registered nurses (RN) form the backbone of health care provision in the United States. RNs provide critical health care to the public wherever it is needed.

Key Responsibilities

* Perform physical exams and health histories before making critical decisions
* Provide health promotion, counseling and education
* Administer medications and other personalized interventions
* Coordinate care, in collaboration with a wide array of health care professionals

Advanced Practice Registered Nurses

Advanced Practice Registered Nurses (APRN) hold at least a Master’s degree, in addition to the initial nursing education and licensing required for all RNs. The responsibilities of an APRN include, but are not limited to, providing invaluable primary and preventative health care to the public. APRNs treat and diagnose illnesses, advise the public on health issues, manage chronic disease and engage in continuous education to remain at the very forefront of any technological, methodological, or other developments in the field.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2008 2023-12-26 16:51:51

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2010) Bus Driver

Gist

A bus driver is someone whose job is to drive a bus.

Summary

A bus driver, bus operator, or bus captain is a person who drives buses for a living.

Description

Bus drivers must have a special license above and beyond a regular driver's license. Bus drivers typically drive their vehicles between bus stations or stops. Bus drivers often drop off and pick up passengers on a predetermined route schedule. In British English a different term, coach driver, is used for drivers on privately booked long-distance routes, tours and school trips.

There are various types of bus drivers, including transit drivers, school bus drivers and tour bus drivers. Bus drivers may work for a city, public (state and national/federal) governments, school boards, and private enterprises, such as charter companies which run tour buses. Coach captains in Australia are frequently freelance sub-contractors who work for various bus and coach companies.

When there is no conductor present, the driver is the sole operator of the service and handles ticketing and interaction with customers, in addition to driving.

Intercity bus driver

An intercity bus driver is a bus driver whose duties involve driving a bus between cities. It is one of four common positions available to those capable of driving buses (the others being school, transit, or tour bus driving). Intercity bus drivers may be employed by public or private companies; which is more common varies by country. Many countries regulate the training and certification requirements and the hours of intercity drivers.

In the United States, intercity bus driving is one of the fastest-growing jobs, with attractive wages and good benefits.

Duties

Besides the actual operation of the bus, the duties of the intercity bus driver include cleaning, inspecting, and maintaining the vehicle; doing simple repairs; checking passengers' tickets or, in some cases, collecting fares; loading passengers on and off the bus efficiently; handling the passengers' luggage; enforcing guidelines expected of passengers (such as prohibiting yelling); and dealing with certain types of emergencies.

Good communication skills in the native language of the country and other languages spoken by a large part of the population are also key. Drivers must be able to engage in basic communication with passengers and to give them directions and other information they may need.

Some countries require intercity bus drivers to fill out logs detailing the hours they have driven. This documents that they comply with the country's laws on the maximum number of hours they are permitted to drive.

Training

In the United States, intercity bus drivers are required to hold a Commercial Driver's License (CDL) with a P (passenger) endorsement. The requirements for this vary by country but involve more training than is needed to drive a passenger automobile. Safe driving skills and the willingness to obey traffic laws and to drive under a variety of weather and traffic conditions are essential: passengers expect a safe trip, and the safety of those in other vehicles on the road depends on it.

Those hired as intercity bus drivers are often expected to have prior experience in the operation of a commercial vehicle. This may include the operation of a municipal bus service, school buses, or trucks.

New hires by companies are often oriented to their jobs by first riding along for one or more runs on a route, then driving the route under supervision of an experienced driver, or driving the route unsupervised without any passengers. After passing the training, most new hires will only work as backups until a permanent position can be offered.

Scheduling

Intercity bus drivers have a great deal of independence, though they are expected to follow a particular route and schedule as determined by their employer.

On shorter routes, it is possible for a driver to make a round trip and return home on the same day, and sometimes to complete a round trip multiple times in a single day.

On longer routes that exceed or come close to the maximum number of hours an operator can legally drive, drivers will be changed over the course of the route. Either the driver will drive half the work day in one direction, and switch places before driving part of a trip in the other direction on a different vehicle, or the driver will drive the maximum amount of time permitted by law in a single direction, stay overnight, and complete a return trip on the following day. When the latter occurs, the employer will often pay lodging and dining expenses for the driver.

An issue for intercity bus drivers, especially those on longer routes, is taking short breaks for eating and restroom use. Stopping to meet these human needs is a necessity, but it delays the trip, which many passengers want to be as quick and efficient as possible. Often, the driver will extend these breaks to the passengers, allowing them to enjoy the benefits as well.

Safety

Intercity bus driving is generally safe but carries risks for drivers. Accidents occur and can harm the driver, passengers, and those in other vehicles alike. Dealing with unruly passengers is another challenge, one that operators are not generally equipped to handle; such passengers can endanger the driver and other passengers alike.

There have also been incidents in which intercity bus drivers were assaulted by passengers.

Details

A bus driver is responsible for operating buses and transporting passengers safely and efficiently. Their primary role is to drive various types of buses, such as city buses, school buses, or tour buses, along predetermined routes. Bus drivers ensure the adherence to traffic laws, maintain a schedule, and provide a comfortable and secure environment for passengers.

In addition to driving, bus drivers have several other responsibilities. They must perform pre-trip inspections to check the bus's mechanical condition, including brakes, tires, and lights. They assist passengers with boarding and exiting the bus, collect fares or tickets, and answer any questions or concerns. Bus drivers must enforce rules and regulations, such as ensuring passengers are seated while the bus is in motion and handling any conflicts or issues that may arise during the journey. They also need to be aware of the surrounding traffic, road conditions, and potential hazards to ensure the safety of passengers and other road users.

The professionalism, attentiveness, and excellent driving skills of bus drivers contribute to the smooth functioning of public transportation systems, enabling people to access education, employment, healthcare, and recreational activities.

Duties and Responsibilities

Bus drivers have a range of duties and responsibilities to ensure the efficient operation of buses and the well-being of passengers.

Safe Operation of Buses: Bus drivers are responsible for safely operating buses, adhering to traffic laws, and following designated routes. They must possess a valid driver's license with appropriate endorsements for the type of bus they are driving. Bus drivers must be skilled in maneuvering buses in various traffic and weather conditions, and they should maintain a high level of attentiveness to prevent accidents and ensure passenger safety.

Pre-Trip Inspections: Before starting their shifts, bus drivers conduct pre-trip inspections to check the mechanical condition of the bus. This includes inspecting tires, brakes, lights, doors, and other vital components to ensure they are in proper working order. Any defects or issues are reported to maintenance personnel for repair.

Passenger Assistance: Bus drivers assist passengers with boarding and disembarking the bus, particularly those with special needs or mobility challenges. They ensure a smooth and safe transition on and off the bus, providing any necessary guidance or support. Bus drivers may also assist passengers with securing mobility aids, such as wheelchairs or strollers, and ensure their proper restraint.

Ticketing and Fare Collection: Depending on the type of bus service, bus drivers may collect fares or tickets from passengers. They verify tickets or passes, provide change when necessary, and ensure proper fare collection. Bus drivers should be knowledgeable about the fare structure and payment methods accepted.

Passenger Safety and Enforcement: Bus drivers are responsible for ensuring passenger safety during the journey. They enforce rules and regulations, such as requiring passengers to be seated while the bus is in motion, fastening seat belts when available, and maintaining a calm and orderly environment. Bus drivers must address any disruptive behavior or conflicts among passengers and intervene if necessary to maintain a safe and peaceful atmosphere.

Route Knowledge and Timeliness: Bus drivers must have a thorough understanding of their designated routes, including stops, intersections, and landmarks. They should be able to navigate efficiently, minimize deviations from the schedule, and ensure timely arrival and departure at each stop. Bus drivers may need to make announcements regarding upcoming stops or route changes to keep passengers informed.

Communication and Reporting: Bus drivers need effective communication skills to interact with passengers, colleagues, and dispatchers. They should be able to provide clear instructions, respond to passenger inquiries, and report any incidents or emergencies promptly. Bus drivers may need to communicate with traffic control centers or supervisors to coordinate service adjustments or address operational issues.

Vehicle Maintenance and Reporting: Bus drivers report any mechanical issues or defects discovered during their shifts to maintenance personnel. They may document and report incidents, accidents, or any irregularities that occur during their duties. Bus drivers are responsible for maintaining cleanliness inside the bus, ensuring trash is properly disposed of, and reporting any damage or vandalism.

Professional Conduct: Bus drivers represent their transportation companies or organizations and should conduct themselves professionally at all times. They should dress appropriately, maintain a courteous and respectful demeanor towards passengers and colleagues, and adhere to company policies and guidelines.

Emergency Response: In the event of an emergency, such as a vehicle breakdown, traffic incident, or medical situation onboard, bus drivers must follow established protocols and procedures. They should be trained in emergency response techniques, including evacuation procedures, first aid, and communication with emergency services.

Types of Bus Drivers

There are various types of bus drivers, each serving different purposes and operating different types of buses. Here are some common types of bus drivers and their specific roles:

City Bus Drivers: City bus drivers operate public transportation buses within urban areas. They follow predetermined routes, pick up and drop off passengers at designated stops, and ensure the efficient and timely transportation of commuters. City bus drivers often interact with a diverse range of passengers and may provide information or assistance regarding fares, routes, and schedules.

School Bus Drivers: School bus drivers transport students to and from school and may also drive for field trips and other school-related activities. They are responsible for ensuring the safety of students on board, enforcing rules and regulations, and maintaining a calm and orderly environment. School bus drivers often develop relationships with students, parents, and school staff, fostering a sense of trust and familiarity.

Charter Bus Drivers: Charter bus drivers operate buses that are rented for specific trips or events, such as group outings, tours, or corporate transportation. They transport passengers to various destinations as per the charter agreement. Charter bus drivers may work for bus companies or be self-employed. They often handle a range of responsibilities, including route planning, coordinating with clients, and providing a comfortable and enjoyable travel experience.

Tour Bus Drivers: Tour bus drivers specialize in providing guided tours to tourists and travelers. They navigate scenic routes, tourist attractions, and notable landmarks, providing commentary and information about the destinations. Tour bus drivers must have a good knowledge of local geography, history, and points of interest to offer an engaging and informative tour experience.

Intercity or Long-Distance Bus Drivers: Intercity bus drivers operate buses that transport passengers between cities or across long distances. They follow specific routes and schedules, making stops at various locations along the way. Intercity bus drivers ensure passenger safety, manage ticketing and fare collection, and maintain a comfortable and smooth journey for travelers.

Shuttle Bus Drivers: Shuttle bus drivers provide transportation services within specific locations, such as airports, hotels, or business complexes. They transport passengers between designated pick-up and drop-off points, often on a fixed schedule. Shuttle bus drivers may assist passengers with luggage, provide basic information about the area, and ensure a prompt and reliable shuttle service.

Paratransit Drivers: Paratransit drivers operate specialized vehicles to transport individuals with disabilities or mobility challenges. They provide door-to-door service, helping passengers with boarding and disembarking, securing mobility aids, and ensuring their safety and comfort throughout the journey. Paratransit drivers often require additional training and sensitivity to meet the unique needs of their passengers.

Are you suited to be a bus driver?

Bus drivers have distinct personalities. They tend to be realistic individuals, which means they’re independent, stable, persistent, genuine, practical, and thrifty. They like tasks that are tactile, physical, athletic, or mechanical. Some of them are also social, meaning they’re kind, generous, cooperative, patient, caring, helpful, empathetic, tactful, and friendly.


What is the workplace of a Bus Driver like?

The workplace of a bus driver is dynamic and varied, encompassing both on-the-road and off-the-road settings. The primary workplace for bus drivers is, of course, the driver's seat inside the bus. It's a familiar and comfortable space where they operate the vehicle, manage various controls, and ensure the safety and comfort of passengers. Bus drivers have a clear view of the road and traffic through the windshield, and they rely on mirrors and other safety equipment to monitor their surroundings. They adhere to traffic laws, navigate through city streets or highways, and maintain a smooth and steady driving experience.

Beyond the bus itself, bus drivers often have a base of operations, such as a bus depot or terminal. This is where they report for duty, receive their assignments, and perform tasks related to their role. Bus depots are bustling hubs where drivers gather before their shifts, interact with colleagues and supervisors, and receive instructions or updates. They may have access to restrooms, break areas, and other amenities at the depot, providing them with a comfortable and supportive environment during their breaks.

Bus drivers also encounter a diverse range of workplaces throughout their routes. They make regular stops at bus stations, transit hubs, or designated pick-up and drop-off points. These locations serve as temporary workplaces where drivers interact with passengers, assist with boarding and disembarking, and ensure a smooth flow of passengers in and out of the bus. Bus drivers must maintain a professional and customer-focused approach, providing assistance, answering questions, and ensuring a positive passenger experience.

Additionally, bus drivers interact with various people during the day, such as transit supervisors, maintenance personnel, and dispatchers. They may communicate via radio systems or electronic devices to coordinate with their colleagues or receive important updates. Effective communication and teamwork are vital aspects of a bus driver's workplace, enabling smooth operations, problem-solving, and maintaining passenger safety.

png0503nbusdrivers-02.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2009 2023-12-27 17:15:12

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2011) Convex Lens

Gist

A convex lens, also known as a converging lens, is a type of lens that curves outward on both sides. It is thicker at the center and thinner at the edges. Convex lenses cause light rays to converge, or come together, at a point, forming a focused image. They are commonly used in various optical instruments, including eyeglasses, magnifying glasses, telescopes, and microscopes.

Summary

As every child is sure to find out at some point in their life, lenses can be an endless source of fun. They can be used for everything from examining small objects and type to focusing the sun’s rays. In the latter case, hopefully they choose to be humanitarian and burn things like paper and grass rather than ants! But the fact remains, a Convex Lens is the source of this scientific marvel. Typically made of glass or transparent plastic, a convex lens has at least one surface that curves outward like the exterior of a sphere. Of all lenses, it is the most common given its many uses.

A convex lens is also known as a converging lens: a lens that converges rays of light traveling parallel to its principal axis. Converging lenses can be identified by their shape, which is relatively thick across the middle and thin at the upper and lower edges, with the edges curved outward rather than inward. As parallel rays of light reach the glass surface, each refracts according to the effective angle of incidence at that point of the lens. Since the surface is curved, different rays refract to different degrees; the outermost rays refract the most. This runs contrary to what occurs when a divergent lens (otherwise known as concave, biconcave or plano-concave) is employed; in that case, light is refracted away from the axis, outward.

Lenses are classified by the curvature of the two optical surfaces. If the lens is biconvex or plano-convex, the lens is called positive or converging. Most convex lenses fall into this category. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. These types of lenses are used in the manufacture of magnifying glasses. If both surfaces have the same radius of curvature, the lens is known as equiconvex. If one of the surfaces is flat, the lens is plano-convex (or plano-concave, depending on the curvature of the other surface). A lens with one convex and one concave side is convex-concave or meniscus. These lenses are used in the manufacture of corrective lenses.

Details

A convex lens is designed so that rays of light passing through it are brought to a focus at a particular point. It is an optical device that converges transmitted light to form an image. Its surfaces are sections of a sphere that curve outward, as in a contact lens or the objective glass of a telescope; such a lens is called a biconvex lens, or simply a convex lens.

Features of Convex Lens

A convex lens is a converging lens that converges light rays that travel parallel to its principal axis.

Convex lenses are thicker in the middle area and are relatively thin at the sides. The edges of the lens are curved outward. Convex lenses are usually used in microscopes, magnifying glasses, eyeglasses, cameras to focus light for a clear picture, projectors, telescopes, multi-junction solar cells, peepholes on doors, binoculars, etc.

There are different types of convex lens, each preferred for different purposes. Some of these types are plano-convex lenses, concave-convex lenses, and double convex lenses. An object can be placed at different positions relative to a convex lens, producing images of different kinds.

Uses of Convex Lenses

Convex lenses are used for a lot of things which includes both day-to-day and professional usage. A few common factors for which a convex lens could be used are as follows:

Microscope and Magnifying Glass: Convex lenses are used in microscopes and magnifying glasses so that all the converging light focuses on a specific point.

Camera lens: Convex lenses are used in cameras to focus light and produce a clear picture, which helps in creating good photographic content.

Telescopes: Similar to microscopes and magnifying glasses, telescopes use a convex lens to gather and focus the light coming from a distant object.

Eyeglasses: Convex lenses are used in eyeglasses to bring light to a focus on the retina, correcting long-sightedness.

Projectors: Projectors use a convex lens to magnify images or videos onto a larger screen. The image formed is real, inverted, and magnified.

Peep-Holes: Convex lenses are used in peepholes. The holes are tiny, and the convex lens magnifies the image present on the other side of the door.

Binoculars: The purpose of binoculars is to see distant objects closer. So, the convex lenses help view a magnified version of the images.

The formula for Convex lens

The formula for a convex lens is 1/f = 1/v - 1/u, where f is the focal length, u is the distance of the object from the lens, and v is the distance of the image from the lens. The image formed may be real or virtual; under the Cartesian sign convention, real image distances are taken as positive and virtual image distances as negative, and all distances are measured from the optical centre of the lens. These relationships are best understood with the help of a ray diagram. For a convex lens, f is always positive.

For a convex lens, the magnification m = v/u is positive for a virtual (erect) image and negative for a real (inverted) image.

A few other formulas belong to related concepts; convex lens study material can be referred to for a fuller treatment.
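
To make the sign convention concrete, here is a minimal Python sketch (an illustration added for clarity, not part of the original study material) that applies the thin-lens formula 1/f = 1/v - 1/u together with the magnification m = v/u. Under the Cartesian convention, a real object in front of the lens is entered with a negative u.

def thin_lens_image(f_cm, u_cm):
    """Solve 1/f = 1/v - 1/u for v and return (v, magnification).

    Cartesian sign convention: a real object in front of the lens has
    negative u; a convex lens has f > 0, a concave lens f < 0."""
    v = 1.0 / (1.0 / f_cm + 1.0 / u_cm)
    return v, v / u_cm

# Convex lens, f = +10 cm, real object 30 cm in front of the lens:
v, m = thin_lens_image(10.0, -30.0)
print(v, m)   # v = +15.0 cm, m = -0.50: real, inverted, diminished image

# Same lens, object inside the focal length (the magnifying-glass case):
v, m = thin_lens_image(10.0, -5.0)
print(v, m)   # v = -10.0 cm, m = +2.00: virtual, erect, magnified image

This matches the sign rules above: a positive v marks a real image, and the negative magnification marks that real image as inverted.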

Difference between Convex and Concave Lens

There are a lot of differences between a convex and a concave lens. Some major differences are as follows:

* A convex lens is thicker in the middle and has relatively thin edges, whereas a concave lens is thinner in the middle and thicker in the edges.
* The curve of the convex lens is outward, whereas the curve of the concave lens is inward.
* Light rays diverge after passing through a concave lens, whereas they converge after passing through a convex lens.
* The focal length of a convex lens is positive, whereas that of a concave lens is negative.
* The image produced by a convex lens is real and inverted, except when the object is placed closer to the lens than the focal point, in which case the image is virtual, erect, and magnified. The image produced by a concave lens is always virtual, erect, and diminished.
* Convex lenses are used to treat hypermetropia, whereas concave lenses are used to treat myopia.

Conclusion

Convex lenses, and the nature of the images they form, matter for professional as well as day-to-day purposes, and the use of each type of lens for each purpose is well established. With the help of ray diagrams, one can understand their distinct features and uses in detail. Measuring the focal length and calculating the magnification of the image are concepts that require in-depth study, and the use of these lenses in the various instruments described above illustrates them.

convex-lens.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2010 2023-12-28 15:55:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2012) Auto Mechanic

Summary

An auto mechanic goes by several different names, including car mechanic, automotive technician, and service technician, and is primarily responsible for evaluating, fixing, and maintaining cars and trucks. Some mechanics specialize in a particular area or system of the vehicle, working as, for example, a brake technician, transmission rebuilder, or automotive air-conditioning technician. These professionals are usually trained to use a wide range of tools on various kinds of vehicle models. Other job duties may include:

* Evaluating vehicles to identify any issues
* Creating a work plan for each vehicle
* Following procedures and examination lists
* Conducting routine maintenance on vehicles
* Performing any necessary repairs
* Replacing old or broken parts
* Discussing repairs and options with customers.

Details

An auto mechanic is a mechanic who services and repairs automobiles, sometimes specializing in one or more automobile brands and sometimes working with any brand. In fixing cars, their main role is to diagnose and repair the problem accurately and quickly. Seasoned auto repair shops start with a (digital) inspection to determine the vehicle's condition, independent of the customer's concern. Based on the concern, the inspection results, and preventative maintenance needs, the mechanic/technician returns the findings to the service advisor, who then gets approval for any or all of the proposed work. The approved work is assigned to the mechanic on a work order. The work may involve the repair of a specific part or the replacement of one or more parts as assemblies. Basic vehicle maintenance is a fundamental part of a mechanic's work in modern industrialized countries, while in others mechanics are only consulted when a vehicle is already showing signs of malfunction.

Education

Automotive repair knowledge can be derived from on-the-job training, an apprenticeship program, vocational school or college.

Apprenticeship

Apprentice mechanics work under master mechanics for a specified number of years before they work on their own. Some areas have formal apprenticeship programs, however many automotive repair shops utilize an informal apprenticeship system within their facilities. A master mechanic is often incentivised to train an apprentice by earning additional wages from the work produced by the apprentice.

Secondary education

In the United States, many programs and schools offer training for those interested in pursuing competencies as automotive mechanics or technicians. Areas of training include automobile repair and maintenance, collision repair, painting and restoring, electronics, air-conditioning and heating systems, and truck and diesel mechanics. The National Automotive Technicians Education Foundation (NATEF) is responsible for evaluating training programs against standards developed by the automotive industry. NATEF accredits programs in four different categories: automotive, collision, trucks (diesel technology) and alternative fuels. Diesel mechanics have developed into a trade somewhat distinctive from gasoline-engine mechanics. NATEF lists secondary and post secondary schools with accredited programs on their website.

Skill level and certifications

It is common for automotive repair companies to assign skill levels to their employed professionals so that each repair can be appropriately matched to a qualified professional. Some use an alphabetical ranking system whereby an upper-level is referred to as an "A tech" and a lower-level as a "C tech." Diagnosis and driveability concerns tend to be upper-level jobs while maintenance and component replacement are lower-level jobs. A professional's skill level is usually determined by years of experience and certifications:

ASE

The National Institute for Automotive Service Excellence (ASE) is a non-profit organization that tests and certifies automotive professionals so that shop owners and service customers can better gauge a professional's level of expertise before contracting the professional's services. In addition to passing an ASE certification test, automotive professionals must have two years of on-the-job training, or one year of on-the-job training and a two-year degree in automotive repair, to qualify for certification. ASE Master professional status is earned when an individual achieves certification in all required testing areas for that series. Certification credentials are valid for only five years, and each certification in the series must be kept current in order to maintain ASE Master professional status. While mechanics are not required by law to be certified, some companies only hire or promote employees who have passed ASE tests.

OEM

A vehicle's Original Equipment Manufacturer (OEM) often provides and requires additional training as part of the dealership franchise agreement. In doing so, professionals become specialized and certified for that particular vehicle make. Some vocational schools or colleges offer manufacturer training programs with certain vehicle brands including BMW, Ford, GM, Mercedes-Benz, Mopar, Porsche, Toyota and Volvo which can provide a professional with OEM training before entering the dealership environment. These types of programs may be paid for by a student with no obligation, or by the manufacturer with a contract that requires a professional to work for the OEM for a designated amount of time upon graduating. An OEM usually has multiple professional skill levels that can be achieved, but the Master status is typically one of them.

EPA

The United States Environmental Protection Agency (EPA) requires any person who repairs or services a motor vehicle air conditioning system for payment or barter to be properly trained and certified under Section 609 of the Clean Air Act. To be certified, professionals must be trained by an EPA-approved program and pass a test demonstrating their knowledge in these areas. This certification does not expire.

Types and specialties:

Auto body

An auto body technician repairs the exterior of a vehicle, primarily bodywork and paintwork. This includes repairing minor damages such as scratches, scuffs and dents, as well as major damage caused by vehicle collisions. Some specialized auto body technicians may also offer paintless dent repair, glass replacement and chassis straightening.

Auto glass

An auto glass technician repairs chips, cracks and shattered glass in windshields, quarter glass, side windows and rear glass. Glass damage is often caused by hail, stones, wild animals, fallen trees, automobile theft and vandalism. Depending on the type and severity of the damage, an auto glass technician may either repair or replace the affected glass.

Diesel

A diesel mechanic repairs diesel engines, often found in trucks and heavy equipment.

Exhaust specialist

An exhaust system specialist performs repairs to the engine exhaust system. These mechanics utilize large tubing benders and welders to fabricate a new exhaust system out of otherwise straight lengths of pipe.

Fleet

A fleet mechanic maintains a particular group of vehicles called a fleet. Common examples of a fleet include taxi cabs, police cars, mail trucks and rental vehicles. Similar to a lubrication professional, a fleet mechanic focuses primarily on preventative maintenance and safety inspections, and will often outsource larger or more complex repairs to another repair facility.

General repair

A general repair professional diagnoses and repairs electrical and mechanical vehicle systems including (but not limited to) brakes, driveline, starting, charging, lighting, engine, HVAC, supplemental restraints, suspension and transmission systems. Some general repair professionals are only capable or certified for select systems, while master professionals (generally speaking) are capable or certified across all vehicle systems.

Heavy line

A heavy line mechanic performs major mechanical repairs such as engine or transmission replacement. Some heavy line mechanics also perform overhaul procedures for these components.

Lubrication

A lubrication professional, often shortened to lube tech, is an entry-level position that focuses on basic preventive maintenance services rather than repairs. The tasks that can be performed are typically limited to automotive fluid, filter, belt and hose replacement. Lube techs are employed by nearly every type of automotive repair shop, however, they are most prevalent in quick lube or express service shops because they lower business overhead resulting in a less expensive service as compared to traditional automotive workshops.

Mobile

A mobile professional performs most of the same repairs as a general repair professional, but does so at the customer's location rather than inside a brick-and-mortar facility.

Pit crew

A pit crew mechanic performs an assigned maintenance or repair task to a racecar during a pit stop along a racetrack. Pit crew jobs include raising and lowering the vehicle with a jack, filling the car with gasoline, changing the tires, and cleaning the windshield. Although these are basic tasks, they must be performed in an extremely quick and accurate fashion.

Challenges:

Physical

The auto mechanic has a physically demanding job, often exposed to extreme temperatures, lifting heavy objects and staying in uncomfortable positions for extended periods. They also may deal with exposure to toxic chemicals.

Technological

With the rapid advancement in technology, the mechanic's job has evolved from purely mechanical, to include electronic technology. Because vehicles today possess complex computer and electronic systems, mechanics need to have a broader base of knowledge than in the past and must be prepared to learn these new technologies and systems.

Financial

Automotive professionals utilize many tools, equipment and reference material to perform their duties. While equipment and reference materials are typically provided by the employer, all other tools are purchased, owned, and provided by the professional.

Resources:

Scan tool

Due to the increasingly labyrinthine nature of the technology that is now incorporated into automobiles, most automobile dealerships and independent workshops now provide sophisticated diagnostic computers to each professional, without which they would be unable to diagnose or repair a vehicle.

Reference material

The internet is increasingly being applied to the field, with mechanics providing advice online, and mechanics themselves now regularly use the internet for information to help them diagnose and repair vehicles. Paper-based service manuals for vehicles have become significantly less prevalent, with internet-connected computers taking their place and giving quick access to a wealth of technical manuals and information.

Online scheduling

Online appointment platforms have surged in popularity, allowing customers to schedule vehicle repairs by making appointments. A newer model of mobile mechanic service has emerged in which an online appointment made by a person seeking repairs turns into a dispatch call, and the mechanic travels to the customer's location to perform the services.

Related careers

A mechanic usually works from a workshop in which the (well-equipped) mechanic has access to a vehicle lift to reach areas that are difficult to access when the car is on the ground. Besides the workshop-bound mechanic, there are mobile mechanics, like those of the UK Automobile Association (the AA), which allow the car owner to receive assistance without the car necessarily having to be brought to a garage.

A mechanic may opt to engage in other careers related to his or her field. Teaching of automotive trade courses, for example, is almost entirely carried out by qualified mechanics in many countries.

There are several other trade qualifications for working on motor vehicles, including panel beater, spray painter, body builder and motorcycle mechanic. In most developed countries, these are separate trade courses, but a qualified tradesperson from one sphere can change to working as another. This usually requires that they work under another tradesperson in much the same way as an apprentice.

Auto body repair involves less work with oily and greasy parts of vehicles, but involves exposure to particulate dust from sanding bodywork and potentially toxic chemical fumes from paint and related products. Salespeople and dealers often also need to acquire an in-depth knowledge of cars, and some mechanics are successful in these roles because of their knowledge. Auto mechanics also need to stay up to date with the leading car companies and newly launched cars, and must continuously study new engine technologies and the systems around them.

0612_Auto_Large.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2011 2023-12-29 18:33:43

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2013) Concave Lens

Gist

Concave lenses have at least one surface that curves inward. A concave lens is also known as a diverging lens because it is thinner at the centre and thicker at the edges, making light rays diverge. Concave lenses are used to treat myopia, as they make faraway objects look smaller than they are.

Summary

A concave lens is a lens that possesses at least one surface that curves inwards. It is a diverging lens, meaning that it spreads out light rays that have been refracted through it. A concave lens is thinner at its centre than at its edges, and is used to correct short-sightedness (myopia). The writings of Pliny the Elder (23–79) make mention of what is arguably the earliest use of a corrective lens: according to Pliny, Emperor Nero was said to watch gladiatorial games using an emerald, presumably concave-shaped to correct for myopia.

After light rays have passed through the lens, they appear to diverge from a point called the principal focus: the point from which collimated light traveling parallel to the axis of the lens appears to spread after refraction. The image formed by a concave lens is virtual and smaller than the object itself, and therefore appears to be farther away than it actually is. Convex mirrors have a similar effect, which is why many (especially on cars) come with a warning: objects in mirror are closer than they appear. The image is also upright, that is, not inverted, unlike the images formed by some other curved reflective surfaces and lenses.

The lens formula used to work out the position and nature of an image formed by a lens can be expressed as 1/f = 1/v - 1/u, where u and v are the distances of the object and the image from the lens, respectively, and f is the focal length. Under the Cartesian sign convention used earlier, f is negative for a concave lens.
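
As a quick numerical illustration (a sketch added here, not part of the original summary), the same Cartesian-convention formula in Python shows why a concave lens always gives a virtual, diminished image of a real object:

# Thin-lens formula 1/f = 1/v - 1/u, Cartesian sign convention.
f = -10.0   # concave (diverging) lens: negative focal length, in cm
u = -20.0   # real object 20 cm in front of the lens (negative by convention)
v = 1.0 / (1.0 / f + 1.0 / u)   # image distance
m = v / u                       # transverse magnification
print(v, m)  # v = -6.67 cm, m = +0.33: virtual, erect, one-third the size

Because f and u are both negative here, v comes out negative for any object position, so the image is always virtual, and |m| < 1, so it is always smaller than the object.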

Details

Concave lenses are one of the many lenses used in optics. They help create some of the most important equipment you use in your everyday life. These lenses come in various types and have plenty of applications. This article contains all you need to know about concave lenses, such as the different types and how they are applied.

What Is a Concave Lens?

A concave lens spreads light rays outward, so that the resulting image is smaller than the object and upright.

Furthermore, the image it forms of a real object is virtual. Concave lenses contain at least one face that is curved inward. Another name for these lenses is diverging lenses, because they are thinner at their centers and thicker at their borders, causing light to spread out rather than focus.

Types of Concave Lenses:

Bi-Concave Lenses

These lenses are also called double-concave lenses. Both sides of a bi-concave lens have equal radii of curvature and, like plano-concave lenses, they diverge incident light.

Plano-Concave Lenses

A plano-concave lens works like a bi-concave lens, but has one flat face and one concave face. Like other diverging lenses, plano-concave lenses have a negative focal length.

Convexo-Concave Lenses

A convexo-concave (meniscus) lens has one convex surface and one concave surface. In a diverging meniscus, the concave surface has the higher curvature, which makes the lens thinner at the center than at the edges.

Applications of Concave Lenses:

Corrective Lenses

Correction of myopia (short-sightedness) typically involves the use of concave lenses. Myopic eyes have longer-than-average eyeballs, which causes images of a distant object to be focused in front of the retina instead of on it.

Glasses with concave lenses can fix this by spreading the incoming light out before it reaches the eye. In doing so, they let the patient perceive faraway objects with more clarity.

Binoculars and Telescopes

Binoculars allow users to see distant objects, making them appear closer. They are constructed from convex and concave lenses. The convex lens zooms in on the object, while the concave lens is used to focus the image properly.

Telescopes function similarly in that they have convex and concave lenses. They are used to observe extremely distant objects, such as planets. The convex lens serves as the magnification lens, while the concave lens serves as the eyepiece.

Lasers

Laser beams are used in a variety of devices, including scanners, DVD players, and medical instruments. Even though lasers are incredibly concentrated sources of light, they must be spread out for usage in practical applications. As a result, the laser beam is widened by a series of tiny concave lenses, allowing for pinpoint targeting of a specific location.

Flashlights

Flashlights also make use of concave lenses to increase the output of the light they use. Light enters on the lens's hollowed side and spreads out on the other, broadening the beam by expanding the diameter of the light source.

Cameras

Camera manufacturers frequently utilize lenses that possess concave and convex surfaces to enhance image quality. Convex lenses are the most used lenses in cameras, and chromatic aberrations can occur when they are used. Fortunately, this issue can be solved by combining concave and convex lenses.

Peepholes

Peepholes, often called door viewers, are safety features that allow a full view of what's on the other side of a wall or door. The concave lens makes the object or area being viewed appear smaller, providing a wider perspective.

Conclusion

Well, there you have it. All there is to know about concave lenses. They are lenses with at least one surface curved inward. These lenses are able to bend light inward and make images appear upright and smaller. You can find them used in several contraptions, such as cameras, flashlights, telescopes, and others.

concave-lens.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2012 2023-12-30 16:03:28

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2014) Traffic Congestion

Gist

A traffic jam is a long line of vehicles that cannot move, or that can only move very slowly.

Details

Traffic congestion is a condition in transport that is characterized by slower speeds, longer trip times, and increased vehicular queueing. Traffic congestion on urban road networks has increased substantially since the 1950s. When traffic demand is great enough that the interaction between vehicles slows the traffic stream, this results in congestion. While congestion is a possibility for any mode of transportation, this article will focus on automobile congestion on public roads.

As demand approaches the capacity of a road (or of the intersections along the road), extreme traffic congestion sets in. When vehicles are fully stopped for periods of time, this is known as a traffic jam or (informally) a traffic snarl-up or a tailback.

Drivers can become frustrated and engage in road rage. Drivers and driver-focused road planning departments commonly propose to alleviate congestion by adding another lane to the road. This is ineffective: increasing road capacity induces more demand for driving.

Mathematically, traffic is modeled as a flow through a fixed point on the route, analogously to fluid dynamics.
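
One standard way to make the flow analogy concrete (general traffic-flow theory, not something stated explicitly in this post) is the fundamental relation flow = density × speed, shown here in Python with the classic Greenshields model, which assumes speed falls linearly as density rises:

def flow_veh_per_hour(k, v_free=100.0, k_jam=150.0):
    """Traffic flow q = k * v for density k (veh/km).

    Greenshields model (an assumed illustration): v = v_free * (1 - k/k_jam),
    with free-flow speed v_free (km/h) and jam density k_jam (veh/km)."""
    v = v_free * (1.0 - k / k_jam)   # speed drops as the road fills up
    return k * v

for k in (10, 50, 75, 100, 140):
    print(k, round(flow_veh_per_hour(k)))
# Flow peaks at k = k_jam / 2 (here 75 veh/km, giving 3750 veh/h) and
# collapses toward zero as density nears the jam density: the road is
# full, but almost nothing moves.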

Causes

Traffic congestion occurs when a volume of traffic generates demand for space greater than the available street capacity; this point is commonly termed saturation. Several specific circumstances can cause or aggravate congestion; most of them reduce the capacity of a road at a given point or over a certain length, or increase the number of vehicles required for a given volume of people or goods. About half of U.S. traffic congestion is recurring, and is attributed to sheer weight of traffic; most of the rest is attributed to traffic incidents, road work and weather events. In terms of traffic operation, rainfall reduces traffic capacity and operating speeds, thereby resulting in greater congestion and road network productivity loss.

Traffic research still cannot fully predict under which conditions a "traffic jam" (as opposed to heavy, but smoothly flowing traffic) may suddenly occur. It has been found that individual incidents (such as crashes or even a single car braking heavily in a previously smooth flow) may cause ripple effects (a cascading failure) which then spread out and create a sustained traffic jam when, otherwise, the normal flow might have continued for some time longer.

Separation of work and residential areas

People often work and live in different parts of the city. Many workplaces are located in a central business district away from residential areas, resulting in workers commuting. According to a 2011 report published by the United States Census Bureau, a total of 132.3 million people in the United States commute between their work and residential areas daily.

Movement to obtain or provide goods and services

People may need to move about within the city to obtain goods and services, for instance to purchase goods or attend classes in a different part of the city. Brussels, a Belgian city with a strong service economy, has some of the worst traffic congestion in the world; commuters there wasted 74 hours in traffic in 2014.

Mathematical theories

Some traffic engineers have attempted to apply the rules of fluid dynamics to traffic flow, likening it to the flow of a fluid in a pipe. Congestion simulations and real-time observations have shown that in heavy but free flowing traffic, jams can arise spontaneously, triggered by minor events ("butterfly effects"), such as an abrupt steering maneuver by a single motorist. Traffic scientists liken such a situation to the sudden freezing of supercooled fluid.
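
The spontaneous jams described above can be reproduced with a very small simulation. The sketch below uses the Nagel-Schreckenberg cellular-automaton model (a standard model from the traffic literature; the post itself does not name it): on a circular road with no bottleneck at all, the random-slowdown step plays the role of the abrupt steering maneuver and seeds stop-and-go waves once the density is high enough.

import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg model on a ring.

    pos holds the cars' cell indices in cyclic order; vel their speeds."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % road_len  # empty cells ahead
        v = min(vel[i] + 1, v_max)   # 1. accelerate toward the speed limit
        v = min(v, gap)              # 2. brake to avoid the car in front
        if v > 0 and random.random() < p_slow:
            v -= 1                   # 3. random slowdown: the jam trigger
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % road_len for i in range(n)]
    return new_pos, new_vel

# A 100-cell ring with 35 cars is dense enough for phantom jams to form.
road_len, n_cars = 100, 35
pos = sorted(random.sample(range(road_len), n_cars))
vel = [0] * n_cars
for t in range(200):
    pos, vel = nasch_step(pos, vel, road_len)
print(sum(vel) / n_cars)   # mean speed stays far below v_max: congestion

Run the same code with only 10 cars and the mean speed climbs to nearly v_max: the jam behavior emerges from density plus small random perturbations, not from any defect in the road.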

However, unlike a fluid, traffic flow is often affected by signals or other events at junctions that periodically affect the smooth flow of traffic. Alternative mathematical theories exist, such as Boris Kerner's three-phase traffic theory.

Because of the poor correlation of theoretical models to actual observed traffic flows, transportation planners and highway engineers attempt to forecast traffic flow using empirical models. Their working traffic models typically use a combination of macro-, micro- and mesoscopic features, and may add matrix entropy effects, by "platooning" groups of vehicles and by randomizing the flow patterns within individual segments of the network. These models are then typically calibrated by measuring actual traffic flows on the links in the network, and the baseline flows are adjusted accordingly.

A team of MIT mathematicians has developed a model that describes the formation of "phantom jams", in which small disturbances (a driver hitting the brake too hard, or getting too close to another car) in heavy traffic can become amplified into a full-blown, self-sustaining traffic jam. Key to the study is the realization that the mathematics of such jams, which the researchers call "jamitons", are strikingly similar to the equations that describe detonation waves produced by explosions, says Aslan Kasimov, lecturer in MIT's Department of Mathematics. That discovery enabled the team to solve traffic-jam equations that were first theorized in the 1950s.

Economic theories

Congested roads can be seen as an example of the tragedy of the commons. Because roads in most places are free at the point of usage, there is little financial incentive for drivers not to over-use them, up to the point where traffic collapses into a jam, when demand becomes limited by opportunity cost. Privatization of highways and road pricing have both been proposed as measures that may reduce congestion through economic incentives and disincentives. Congestion can also happen due to non-recurring highway incidents, such as a crash or roadworks, which may reduce the road's capacity below normal levels.

Economist Anthony Downs argues that rush hour traffic congestion is inevitable because of the benefits of having a relatively standard work day. In a capitalist economy, goods can be allocated either by pricing (ability to pay) or by queueing (first-come first-served); congestion is an example of the latter. Instead of the traditional solution of making the "pipe" large enough to accommodate the total demand for peak-hour vehicle travel (a supply-side solution), either by widening roadways or increasing "flow pressure" via automated highway systems, Downs advocates greater use of road pricing to reduce congestion (a demand-side solution, effectively rationing demand), in turn plowing the revenues generated therefrom into public transportation projects.

A 2011 study in The American Economic Review indicates that there may be a "fundamental law of road congestion." The researchers, from the University of Toronto and the London School of Economics, analyzed data from the U.S. Highway Performance and Monitoring System for 1983, 1993 and 2003, as well as information on population, employment, geography, transit, and political factors. They determined that the number of vehicle-kilometers traveled (VKT) increases in direct proportion to the available lane-kilometers of roadways. The implication is that building new roads and widening existing ones only results in additional traffic that continues to rise until peak congestion returns to the previous level.

Classification

Qualitative classification of traffic is often done in the form of a six-level A-F level of service (LOS) scale defined in the Highway Capacity Manual, a US document used (or used as a basis for national guidelines) worldwide. These levels are used by transportation engineers as a shorthand and to describe traffic levels to the lay public. While this system generally uses delay as the basis for its measurements, the particular measurements and statistical methods vary depending on the facility being described. For instance, while the percentage of time spent following a slower-moving vehicle figures into the LOS for a rural two-lane road, the LOS at an urban intersection incorporates such measurements as the number of drivers forced to wait through more than one signal cycle.

Traffic congestion occurs in time and space, i.e., it is a spatiotemporal process. Therefore, another classification schema of traffic congestion is associated with common spatiotemporal features of congestion found in measured traffic data: features that are qualitatively the same for different highways in different countries over years of traffic observations. These common features are independent of weather, road conditions and infrastructure, vehicular technology, driver characteristics, time of day, etc. Examples are the features J and S of, respectively, the wide moving jam and synchronized flow traffic phases found in Kerner's three-phase traffic theory. The common features of traffic congestion can be reconstructed in space and time with the use of the ASDA and FOTO models.

traffic-jam-getty.jpg


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2013 2023-12-31 16:48:25

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2015) Plastics engineering

Summary

Plastics engineering encompasses the processing, design, development, and manufacture of plastics products. A plastic is a polymeric material that is in a semi-liquid state, having the property of plasticity and exhibiting flow. Plastics engineering encompasses plastics material and plastic machinery. Plastic machinery is the general term for all types of machinery and devices used in the plastics processing industry. The nature of plastic materials poses unique challenges to an engineer. Mechanical properties of plastics are often difficult to quantify, and the plastics engineer has to design a product that meets certain specifications while keeping costs to a minimum. Other properties that the plastics engineer has to address include: outdoor weatherability, thermal properties such as upper use temperature, electrical properties, barrier properties, and resistance to chemical attack.

In plastics engineering, as in most engineering disciplines, the economics of a product plays an important role. The cost of plastic materials ranges from the cheapest commodity plastics used in mass-produced consumer products to very expensive, specialty plastics. The cost of a plastic product is measured in different ways, and the absolute cost of a plastic material is difficult to ascertain. Cost is often measured in price per pound of material, or price per unit volume of material. In many cases, however, it is important for a product to meet certain specifications, and cost could then be measured in price per unit of a property. Price with respect to processibility is often important, as some materials need to be processed at very high temperatures, increasing the amount of cooling time a part needs. In a large production run, cooling time is very expensive.
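
As a toy illustration of "price per unit of a property" (the numbers below are made up for the example, not taken from the text): a costlier material can still win once the price is normalized by the property the product actually needs.

# Hypothetical materials: price in $/lb and tensile strength in MPa.
materials = {"A": (2.00, 50.0), "B": (3.00, 90.0)}
for name, (price, strength) in materials.items():
    # Cost per unit of the property that matters for the specification.
    print(name, round(price / strength, 4))   # A: 0.04, B: 0.0333 $/lb per MPa

Although material B costs 50 percent more per pound, it is the cheaper choice per unit of tensile strength, which is the kind of comparison described above.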

Some plastics are manufactured from recycled materials, but their use in engineering tends to be limited because their formulation and physical properties tend to be less consistent. Electrical, electronic equipment, and motor vehicle markets together accounted for 58 percent of engineered plastics demand in 2003. Engineered plastics demand in the US was estimated at $9,702 million in 2007.

A big challenge for plastics engineers is the reduction of the ecological footprints of their products. First attempts like the Vinyloop process can guarantee that a product's primary energy demand is 46 percent lower than conventionally produced PVC. The global warming potential is 39 percent lower.

Details

Engineering plastics are specially designed and formulated to have improved properties compared to commodity plastics. These properties may include better mechanical, electrical, and thermal properties; improved chemical and ultraviolet light resistance; and biocompatibility for food packaging applications. While not every engineering plastic exhibits all of these traits, they are distinct from commodity plastics due to their ability to be used in a broader range of technically demanding applications, from gears in power transmission applications to medical implants and prosthetics. Examples of engineering plastics include: nylon, polyethylene terephthalate (PET), acrylonitrile butadiene styrene (ABS), polycarbonate (PC), and many more. The low density of engineering plastics compared to metals, coupled with their impressive material properties and economical price, makes them a lighter-weight alternative to metals for a variety of applications. This article will review the subject of engineering plastics: their definition, their uses, their properties, and the many different types.

What Are Engineering Plastics?

Engineering plastics are a class of polymers that have been specially formulated to exhibit enhanced mechanical, electrical, thermal, and chemical properties compared to commodity plastics. These enhanced properties allow engineering plastics to be used in demanding industrial and engineering applications such as gears, bearings, bushings, and medical implants.

The main difference between engineering plastics and commodity plastics is that engineering plastics are used in more technically demanding environments. While commodity plastics are also popular, they are not suitable for the same applications as engineering plastics as they do not exhibit the same degree of material properties. Engineering plastic parts are typically manufactured by plastic injection molding or plastic extrusion, although 3D printing is also possible.

What Is the Importance of Engineering Plastics?

Engineering plastics play a pivotal role in many industries, such as: automotive, aerospace, electronics, medical, and consumer goods manufacturing. Engineering plastics are known for their ability to be used in applications where metals may be too heavy, expensive, or susceptible to corrosion, and where commodity plastics lack the strength and durability required. They are durable and versatile owing to their mechanical strength and impact resistance, coupled with their desirable thermal, electrical, and chemical properties. Engineering plastics offer manufacturers an ideal balance between strength, cost, weight, and durability.

What Are the Properties of Engineering Plastics?

The desirable properties of engineering plastics are listed and described below:

1. Abrasion Resistance

Abrasion resistance is a material’s ability to resist damage due to scratches, wearing, rubbing, marring, or any other friction-related phenomenon. Engineering plastics exhibit a high degree of abrasion resistance compared to commodity plastics due to their improved mechanical properties, specifically their hardness. Good abrasion resistance makes engineering plastics great for parts that are prone to wear, such as guide rollers, gears, and bushings.

2. Chemical Resistance

Chemical resistance is a material’s ability to resist reacting with various compounds, oils, solvents, gasses, and other elements in its operating environment. Engineering plastics generally exhibit good chemical resistance. This property makes them good for use in environments where exposure to chemicals is common, e.g., machinery of all kinds, medical applications, and consumer goods like kitchenware, cookware, and bathroom products.

3. Dimensional Stability

Dimensional stability refers to a material’s ability to retain its manufactured dimensions in the presence of high or low temperatures. Engineering plastics have varying levels of dimensional stability depending on the particular type of engineering plastic. The good dimensional stability of engineering plastics makes them useful for high-heat environments where thermal expansion is likely such as in the automotive, aerospace, and industrial industries.

4. Electrical Properties

Electrical properties are related to a material’s ability to conduct or insulate electrical currents. Electrical conductivity and resistivity are the two critical electrical properties of engineering plastics. Most engineering plastics are poor electrical conductors which makes them ideal for applications where electrical insulation is desired, such as in various electronic and wiring applications.

5. Flammability

Flammability is a material’s tendency to catch fire. Engineering plastics have varying levels of flammability depending on the particular type of engineering plastic. Some engineering plastics, such as PEEK or PPS, are specifically formulated for flame resistance and ignition prevention while other engineering plastics, like ABS, are flammable. Engineering plastics that are non-flammable are exceptional for use in applications where ignition is a concern such as in chemical processing.

6. Food Compatibility

Food compatibility refers to a material’s safety for use with food, specifically in packaging. Some engineering plastics, such as PET, POM, PMMA, and ABS, are safe for use with food due to their chemical structure and lack of additives, dyes, and other harmful products. Others, for example, polycarbonate, are not safe.

7. Impact Strength

Impact strength is a material’s ability to resist deformation due to a sudden or intense application of load, also known as a shock load. Impact strength is a critical property that characterizes how well a material can withstand sudden forces. Compared to commodity plastics, many engineering plastics exhibit high impact strength, which makes them good for load-bearing applications that are subjected to varying impact forces, such as machine guards and electronic housings.

8. Min/Max Operating Temperature (Thermal Resistance)

Thermal resistance characterizes a material's ability to resist changes in its properties as temperature varies: specifically, the low temperature at which a material becomes brittle and the high temperature at which it softens or even melts. Engineering plastics have varying degrees of thermal resistance depending on the type of plastic. For instance, PEEK is specially formulated for use in high-temperature environments (up to 250 °C). High-temperature environments where plastics are used include various machinery and industrial applications.

9. Sliding Properties

The sliding, or anti-friction, properties of engineering plastics make the materials more wear-resistant and abrasion-resistant compared to commodity plastics. Engineering plastics notably have a low coefficient of friction, which makes these properties possible. The sliding properties of engineering plastics make them ideal for applications where repeated sliding, rubbing, or scraping are common, such as machinery components like wear pads, bearings, gears, pulleys, and rollers, or medical applications like hip implants.

10. Ultraviolet (UV) Light Resistance

UV resistance refers to a material’s ability to resist degradation and discoloration from exposure to ultraviolet rays from sunlight and other sources. UV rays can break down polymer chains and cause chemical changes within the material. UV resistance matters for engineering plastics because it allows them to be used outdoors and in other UV-intense environments. Many engineering plastics, particularly polyamides, PMMA, and Ultem®, exhibit outstanding UV resistance and can therefore serve in environments with constant UV exposure.

11. Water Absorption

Water absorption refers to a material’s tendency to absorb water. Engineering plastics generally resist water absorption and do not easily allow water to penetrate the material. This makes them well suited to applications where water contact is common.
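
Taken together, these eleven properties amount to a screening checklist for material selection. As a minimal sketch of that idea in Python (the type, field names, and example values below are hypothetical illustrations distilled from the descriptions above, not any standard materials database):

from dataclasses import dataclass

@dataclass
class PropertyProfile:
    # One engineering plastic scored against the checklist above.
    name: str
    chemical_resistance: bool     # property 2
    dimensionally_stable: bool    # property 3
    electrical_insulator: bool    # property 4
    flame_resistant: bool         # property 5
    food_safe: bool               # property 6
    uv_resistant: bool            # property 10

def meets(profile: PropertyProfile, requirements: dict) -> bool:
    # True if the profile satisfies every attribute the application requires.
    return all(getattr(profile, attr) for attr, needed in requirements.items() if needed)

# Example: screening PEEK for a flame-resistant electrical insulator
# (values follow the descriptions above; treat them as illustrative).
peek = PropertyProfile("PEEK", True, True, True, True, False, False)
print(meets(peek, {"flame_resistant": True, "electrical_insulator": True}))  # True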

What Are the Different Types of Engineering Plastics?

There are numerous types of engineering plastics, which are listed and described below:

1. Nylon 6

Nylon 6 is a specific grade of nylon, or polyamide, material. It has good stiffness, toughness, electrical and thermal insulation properties, and mechanical strength. Its properties can be further enhanced with glass fiber and other additives. Nylon 6 is used for forming dies, insulators, washers, rollers, pulleys, valve seats, and more.

2. Polyethylene Terephthalate (PET)

PET is a popular type of engineering plastic known for its high strength, chemical resistance, and recyclability. A type of polyester, PET is used in applications like textiles, food and beverage containers, pill bottles, and more.

3. Acrylonitrile Butadiene Styrene (ABS)

ABS is another preferred type of engineering plastic, valued for its chemical and thermal stability, strength, toughness, and glossy finish. Its host of desirable properties makes it a versatile material, used in everything from consumer products like toys and bicycle helmets to automotive applications such as interior trim pieces and housings for electronics.

4. Nylon 6-6

Nylon 6-6 is another type of nylon (or polyamide) material. It has properties similar to those of nylon 6 but offers a wider operating temperature range (up to 103 °C before deformation begins) and better chemical resistance. Nylon 6-6 is used in applications such as conveyor belts, clothing and other textiles, bearings, bushings, sprockets, electronic housings, and more.

5. Polyimides

Polyimides (PI) are formed by the combination of aromatic dianhydrides and aromatic diamines. This particular combination of chemicals yields a polyimide material with excellent thermal stability, high tensile strength, and good chemical resistance. These properties enable polyimides to be used in environments with temperature extremes up to 370 °C, in applications such as fuel cells, insulating coatings, structural adhesives, and more.

6. Polyether Ether Ketone (PEEK)

PEEK stands out from other engineering plastics due to its wide range of operating temperatures (up to 250 °C), outstanding chemical resistance, high strength, and stiffness. PEEK is used for pump and valve components; bearings, bushings, and seals in machinery; electrical connectors; and medical device components.

7. Polyphenylene Sulfide (PPS)

PPS is a high-performance, tough engineering plastic with great dimensional and thermal stability, as well as a wide operating temperature range of up to 260 °C and good chemical resistance. Moreover, PPS, like most other thermoplastics, is an electrical insulator. Its ability to be used at high temperatures coupled with its thermal stability makes PPS great for applications such as semiconductor components in machinery, bearings, and valve seats.

8. Poly(methyl methacrylate) (PMMA)

PMMA, also known as acrylic, is a common type of engineering plastic that is valued for its optical clarity, high strength, impact resistance, and light weight. For these reasons, PMMA is commonly used as a glass substitute. Applications of PMMA include windows, signage, display cases and fixtures, machine guards, and more.

9. Polybutylene Terephthalate (PBT)

PBT has properties similar to those of PET (polyethylene terephthalate). It is known for its dimensional stability, low moisture absorption, high strength, stiffness, and chemical, UV, and thermal resistance. It is commonly used in automotive fenders and interior trim pieces, power tool housings, and electrical enclosures.

10. Polyphenylene Oxide (PPO)

PPO exhibits low moisture absorption, excellent electrical and thermal insulating properties, and great dimensional stability, and it can be used at temperatures up to 155 °C. PPO is used for automotive fenders, electronics housings, and medical instruments.

11. Polyamides (PA)

Polyamides are a class of engineering plastics whose polymer chains are linked by amide bonds. Nylons and aramids are both types of polyamides. Polyamides are known for their high strength, impact resistance, and dimensional stability, which makes them useful across a wide range of industries: from clothing and textiles to automotive trim pieces to bushings and bearings in industrial machines, polyamides are a common type of engineering plastic.

12. Polycarbonates (PC)

Polycarbonates are known for their optical clarity, chemical resistance, tensile and impact strength, fire resistance, and recyclability. Polycarbonates are used in applications such as: machine guards, CDs, motorcycle face shields and windscreens, electronics housings, signage, streetlamps, safety glasses, and more.

13. Polyetherimide (PEI) ULTEM®

PEI, also known by the popular brand name ULTEM®, is an engineering plastic valued for its mechanical strength, rigidity, creep resistance over a wide temperature range, and electrical insulation properties. PEI boasts the highest dielectric strength of any commercially available thermoplastic and is therefore commonly used in the electronics industry for fuses and coils. It is also used in the aerospace industry for interior trim pieces.

14. Polyetherketoneketone (PEKK)

PEKK is an engineering plastic with characteristics similar to those of PEEK. This is because PEEK and PEKK are both part of a group of thermoplastics known as polyaryletherketones (PAEK). PEKK is characterized by good mechanical strength, thermal stability, and chemical resistance. Because of its mechanical strength and chemical resistance, PEKK is used in high-pressure pumps, gear wheels, propellers, orthopedic and dental implants, and more.

15. Polyetherketone (PEK)

PEK is a type of PAEK engineering plastic known for its low coefficient of thermal expansion, creep resistance, chemical resistance, low flammability, and retention of mechanical properties over a broad temperature range up to 260 °C. These properties make PEK useful where loads are applied at high temperatures for extended periods, such as in gears, bushings, bearings, and shafts. PEK is used in the aerospace, machinery, and automotive industries.

16. Polyketone (PK)

Polyketone (PK) is a ketone-based engineering plastic. PK has high impact strength (about two to three times that of nylon or PBT) and excellent wear and chemical resistance, which make it a frequent choice in the automotive, oil and gas, and chemical industries.

17. Polyoxymethylene Plastic (POM / Acetal)

POM, also known as acetal or Delrin®, is one of the most popular engineering plastics due to its high strength, toughness, elasticity, dimensional stability, excellent machinability, impact strength, low friction coefficient, and chemical resistance. Acetal is used in everything from gears, pulleys, and rollers to snap fasteners like zippers and buckles due to its abrasion resistance and high strength.

18. Polysulfone (PSU)

PSU is a translucent engineering plastic known for its food compatibility, biocompatibility, wide service temperature range of up to 160 °C, and good mechanical properties. Its biocompatibility makes PSU popular in the medical and food-preparation industries, for applications like manifolds in food processing and medical trays.

19. Polytetrafluoroethylene (PTFE / Teflon®)

PTFE is a popular engineering plastic known for its versatile properties: a broad service temperature range from -240 °C to 260 °C, low moisture absorption, a low coefficient of friction, flexibility, and chemical resistance. PTFE is used in many different applications, such as wiper blades, cookware, pipes for high-purity chemical processing, nail polish, and more.
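
The maximum service temperatures quoted above can be gathered into a small lookup table and used to shortlist candidates for a given operating environment. Here is a minimal sketch in Python, using only figures mentioned in this post (the table and function name are illustrative, not a standard reference):

# Upper service temperatures (°C) as quoted in the descriptions above.
MAX_SERVICE_TEMP_C = {
    "Nylon 6-6": 103,   # deformation begins beyond this point
    "Polyimide": 370,
    "PEEK": 250,
    "PPS": 260,
    "PPO": 155,
    "PSU": 160,
    "PEK": 260,
    "PTFE": 260,
}

def candidates_for(operating_temp_c: float) -> list[str]:
    # Return the plastics whose quoted limit covers the given temperature.
    return sorted(name for name, limit in MAX_SERVICE_TEMP_C.items()
                  if limit >= operating_temp_c)

print(candidates_for(200))  # ['PEEK', 'PEK', 'PPS', 'PTFE', 'Polyimide']

In practice a material choice would also weigh cost, chemical exposure, and the other properties discussed earlier; temperature is only a first filter.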

What Are the Most Common Types of Engineering Plastics?

Different sources will list different engineering plastics as being the most common types. The list below was chosen based on which engineering plastics are most widely used across the largest variety of industries.

PET: PET is a type of polyester with a broad range of uses such as: clothing, textiles, food and beverage containers, and pill bottles. PET is widely available, cheap to manufacture, easy to recycle, and has excellent mechanical properties. However, its low heat resistance, tendency to oxidize, and low biodegradability limit the applications where PET can be used.

ABS: ABS is used in many different applications — from consumer components and automotive components to electronic devices of all kinds. ABS is inexpensive and plentiful, strong, lightweight, ductile, and thermally stable. However, ABS has poor resistance to fatigue, is flammable, and has a lower melting point compared to other engineering plastics.

PC: Polycarbonate is used in applications such as machine guards, automotive headlamps, windows, safety glasses, and more. It is a fire-resistant material that is also lightweight, thermally insulating, strong, and fully recyclable. However, PC is sensitive to scratches and has a high coefficient of thermal expansion.

How Does Plastic Engineering Work?

Plastic engineering is a branch of engineering that focuses on the design and development of plastic materials and products. It ranges from research and development in chemical laboratory settings to the plant floor, where the process parameters of plastics machinery, such as injection molding machines and extruders, are studied and optimized for a particular product. The purpose of plastic engineering is to create and optimize new plastic materials from organic polymers that are either petrochemical-based or naturally occurring. These materials are synthesized via chemical reactions and created to satisfy specific needs and applications.

What Are the Advantages of Engineering Plastics?

Engineering plastics have several advantages over commodity plastics. They:

* Possess enhanced mechanical properties, such as improved tensile strength, impact strength, and toughness, compared to commodity plastics
* Are lightweight yet strong
* Exhibit a high degree of chemical resistance, making them great for applications where exposure to various oils, solvents, and chemicals is common
* Resist UV light damage (especially PMMA, ULTEM, and PTFE), making them reliable performers in outdoor environments

What Are the Applications of Engineering Plastics?

Listed below are some of the many applications of engineering plastics:

* Automotive: engineering plastics such as polycarbonate, ABS, nylon 6, and nylon 6-6 are widely used for components like dashboards, interior trim pieces, electronic component housings, fuel caps, door handles, and more.
* Aerospace: PEEK, polyimides, PAI, and PTFE are commonly used in this industry, for example in pump gears, valve seats, and electrical insulators and housings.
* Food and beverage: containers and disposable trays and plates are commonly made from engineering plastics. PET is the engineering plastic most frequently used in food and beverage applications due to its strength, chemical resistance, UV resistance, food compatibility, and recyclability.
* Electronics: components such as housings, switches, end terminals, insulators, displays, and more can be made from engineering plastics. Nylon 6, nylon 6-6, PC, PBT, ABS, and PEI are often used in the electronics industry due to their tensile strength and impact resistance, electrical insulation properties, wide operating temperature ranges, and chemical resistance.
* Consumer products: these range from toys and kitchenware to sporting goods and clothing. Engineering plastics such as PET, nylon, PMMA, and PTFE are used in a broad spectrum of consumer products due to their light weight, abrasion resistance, dimensional stability, chemical resistance, and UV resistance.

What Is the Hardest Engineering Plastic?

Which engineering plastic is judged the hardest depends on the measurement method and the exact condition of the material. By both the Shore D scale (99) and the Rockwell D scale (105), PMMA (acrylic) is the hardest engineering plastic.

Are Engineering Plastics Strong?

Yes, engineering plastics are strong. They are specifically formulated to have high tensile strength and impact strength compared to standard commodity plastics such as polypropylene or polyethylene.

What Is the Difference Between Engineering Plastics and Commodity Plastics?

Engineering plastics are polymeric materials designed and formulated to have specific properties and to satisfy the requirements of various high-performance applications. They typically have enhanced mechanical, thermal, and chemical properties compared to commodity plastics.

Commodity plastics are general-purpose plastics that do not need to satisfy stringent technical requirements. Commodity plastics are used for everyday items from toys and packaging to disposable plates and cutlery. While both engineering plastics and commodity plastics are useful in their own right, they are intended to satisfy vastly different purposes.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2014 2024-01-01 16:52:42

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2016) Road Trip

A road trip, sometimes spelled roadtrip, is a long-distance journey traveled by automobile.

History:

First road trips by automobile

The world's first recorded long-distance road trip by automobile took place in Germany in August 1888. Bertha Benz, the wife of Karl Benz (inventor of the first patented motor car, the Benz Patent-Motorwagen), drove the third experimental Benz motor car, which had a top speed of 10 kilometres per hour (6.2 mph), from Mannheim to Pforzheim, a distance of 106 km (66 mi), and back. She traveled with her two teenage sons, Richard and Eugen, but without the consent or knowledge of her husband.

Her official reason was that she wanted to visit her mother, but unofficially she intended to generate publicity for her husband's invention, which until then had been used only on short test drives. The plan succeeded: the automobile took off, and the Benz family business eventually evolved into the present-day Mercedes-Benz company.

Today a dedicated signposted scenic route in Baden-Württemberg, the Bertha Benz Memorial Route, commemorates her historic first road trip.

Early road trips in North America

The first successful North American transcontinental trip by automobile took place in 1903 and was piloted by H. Nelson Jackson and Sewall K. Crocker, accompanied by a dog named Bud. The trip was completed using a 1903 Winton Touring Car, dubbed "Vermont" by Jackson. The trip took 63 days between San Francisco and New York, costing US$8,000. The total cost included items such as food, gasoline, lodging, tires, parts, other supplies, and the cost of the Winton.

The Ocean to Ocean Automobile Endurance Contest was a road trip from New York City to Seattle in June 1909. The winning car took 23 days to complete the trip.

The first woman to cross the American landscape by car was Alice Huyler Ramsey, who made the trip with three female passengers in 1909. Ramsey left from Hell's Gate in Manhattan, New York, and traveled for 59 days to San Francisco, California. She was followed in 1910 by Blanche Stuart Scott, who is often mistakenly cited as the first woman to make the East-to-West cross-country journey by automobile (though she was a true pioneer in aviation).

The 1919 Motor Transport Corps convoy was a road trip by approximately 300 United States Army personnel from Washington, DC to San Francisco; Dwight Eisenhower was a participant. Eighty-one vehicles began the trip, which took 62 days to complete, overcoming numerous mechanical and road-condition problems. Eisenhower's report on the trip led to an understanding that improving cross-country highways was important to national security and economic development.

Expansion of highways in the United States

New highways in the early 20th century helped propel automobile travel in the United States, primarily cross-country. Commissioned in 1926 and completely paved near the end of the 1930s, U.S. Route 66 is a living icon of early modern road-tripping.

Motorists ventured cross-country for holidays as well as migrating to California and other locations. The modern American road trip began to take shape in the late 1930s and into the 1940s, ushering in an era of a nation on the move.

The 1950s saw the rapid growth of ownership of automobiles by American families. The automobile, now a trusted mode of transportation, was being widely used for not only commuting but leisure trips as well.

As a result of this new vacation-by-road style, many businesses began to cater to road-weary travelers. More reliable vehicles and services made long-distance road trips easier for families, as the time required to cross the continent fell from months to days; an average family could now reach destinations across North America within a week.

The biggest change to the American road trip was the start and subsequent expansion of the Interstate Highway System. The higher speeds and controlled access nature of the Interstate allowed for greater distances to be traveled in less time and with improved safety as highways became divided.

Travelers from European countries, Australia, and elsewhere soon came to the US to take part in the American ideal of a road trip. Canadians also embraced road trips, taking advantage of the large size of their nation and its proximity to destinations in the United States.

Possible motivations

Many people go on road trips for recreation, such as sightseeing, or to reach a desired location, typically during a vacation period (e.g., driving from Oregon to Disneyland in the US). Other motivations for long-distance travel by automobile include visiting friends and relatives who live far away, or relocating one's permanent home.

Distance and popularity

Generally, while road trips can occur on any landmass, large landmasses are the most common settings. The most popular locations for road trips include Australia, Canada, the mainland U.S., and Central Europe. Because these landmasses are large and contiguous, travel within them is more seamless, accessible, and efficient than within smaller or non-contiguous, remote countries such as Fiji, and they tend to offer more points of interest. The distances involved also matter: residents of smaller bodies of land may simply be unable to cover a distance long enough to qualify as a road trip.

There is no consensus on what distance or time must be traveled for a journey to qualify as a road trip, though it is commonly held that commuting should not qualify, regardless of the distance. Some argue that no set distance is required at all.

In the United States

In the United States, a road trip typically implies leaving the state or, in extreme cases, leaving the country for places such as Canada or Mexico. However, in larger states, travel within the state may also be considered a road trip.

Road trips have become so loved and revered that a National Road Trip Day has been established, observed annually on the Friday before Memorial Day.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2015 2024-01-02 19:13:22

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2017) Printing

Gist

Printing is a process for reproducing text and image, typically with ink on paper using a printing press. It is often carried out as a large-scale industrial process, and is an essential part of publishing and transaction printing.

Summary

Printing, traditionally, is a technique for applying under pressure a certain quantity of colouring agent onto a specified surface to form a body of text or an illustration. Certain modern processes for reproducing texts and illustrations, however, are no longer dependent on the mechanical concept of pressure or even on the material concept of colouring agent. Because these processes represent an important development that may ultimately replace the other processes, printing should probably now be defined as any of several techniques for reproducing texts and illustrations, in black and in colour, on a durable surface and in a desired number of identical copies. There is no reason why this broad definition should not be retained, for the whole history of printing is a progression away from those things that originally characterized it: lead, ink, and the press.

It is also true that, after five centuries during which printing has maintained a quasi-monopoly of the transmission or storage of information, this role is being seriously challenged by new audiovisual and information media. Printing, by the very magnitude of its contribution to the multiplication of knowledge, has helped engender radio, television, film, microfilm, tape recording, and other rival techniques. Nevertheless, its own field remains immense. Printing is used not merely for books and newspapers but also for textiles, plates, wallpaper, packaging, and billboards. It has even been used to manufacture miniature electronic circuits.

The invention of printing at the dawn of the age of the great discoveries was in part a response and in part a stimulus to the movement that, by transforming the economic, social, and ideological relations of civilization, would usher in the modern world. The economic world was marked by the high level of production and exchange attained by the Italian republics, as well as by the commercial upsurge of the Hanseatic League and the Flemish cities; social relations were marked by the decline of the landed aristocracy and the rise of the urban mercantile bourgeoisie; and the world of ideas reflected the aspirations of this bourgeoisie for a political role that would allow it to fulfill its economic ambitions. Ideas were affected by the religious crisis that would lead to the Protestant Reformation.

The first major role of the printed book was to spread literacy and then general knowledge among the new economic powers of society. In the beginning it was scorned by the princes. It is significant that the contents of the first books were often devoted to literary and scientific works as well as to religious texts, though printing was used to ensure the broad dissemination of religious material, first Catholic and, shortly, Protestant.

There is a material explanation for the fact that printing developed in Europe in the 15th century rather than in the Far East, even though the principle on which it is based had been known in the Orient long before. European writing was based on an alphabet composed of a limited number of abstract symbols, which simplifies the problems involved in developing techniques for the use of movable type manufactured in series. Chinese writing, with its vast number of ideograms requiring some 80,000 symbols, lends itself only poorly to the requirements of typography. Partly for this reason, the unquestionably advanced Oriental civilization, of which the richness of its writing was evidence, underwent a slowing of its evolution in comparison with the formerly more backward Western civilizations.

Printing participated in and gave impetus to the growth and accumulation of knowledge. In each succeeding era there were more people who were able to assimilate the knowledge handed to them and to augment it with their own contribution. From Diderot’s encyclopaedia to the present profusion of publications printed throughout the world, there has been a constant acceleration of change, a process highlighted by the Industrial Revolution at the beginning of the 19th century and the scientific and technical revolution of the 20th.

At the same time, printing has facilitated the spread of ideas that have helped to shape alterations in social relations made possible by industrial development and economic transformations. By means of books, pamphlets, and the press, information of all kinds has reached all levels of society in most countries.

In view of the contemporary competition over some of its traditional functions, it has been suggested by some observers that printing is destined to disappear. On the other hand, this point of view has been condemned as unrealistic by those who argue that information in printed form offers particular advantages different from those of other audio or visual media. Radio scripts and television pictures report facts immediately but only fleetingly, while printed texts and documents, though they require a longer time to be produced, are permanently available and so permit reflection. Though films, microfilms, punch cards, punch tapes, tape recordings, holograms, and other devices preserve a large volume of information in small space, the information on them is available to human senses only through apparatus such as enlargers, readers, and amplifiers. Print, on the other hand, is directly accessible, a fact that may explain why the most common accessory to electronic calculators is a mechanism to print out the results of their operations in plain language. Far from being fated to disappear, printing seems more likely to experience an evolution marked by its increasingly close association with these various other means by which information is placed at the disposal of humankind.

Details

Printing is a process for mass reproducing text and images using a master form or template. The earliest non-paper products involving printing include cylinder seals and objects such as the Cyrus Cylinder and the Cylinders of Nabonidus. The earliest known form of printing evolved from ink rubbings made on paper or cloth from texts on stone tablets, used during the sixth century. Printing by pressing an inked image onto paper (using woodblock printing) appeared later that century. Later developments in printing technology include the movable type invented by Bi Sheng around 1040 AD and the printing press invented by Johannes Gutenberg in the 15th century. The technology of printing played a key role in the development of the Renaissance and the Scientific Revolution and laid the material basis for the modern knowledge-based economy and the spread of learning to the masses.

History:

Woodblock printing

Woodblock printing is a technique for printing text, images or patterns that was used widely throughout East Asia. It originated in China in antiquity as a method of printing on textiles and later on paper.

In East Asia

The earliest examples of ink-squeeze rubbings and potential stone printing blocks appear in the mid-sixth century in China. Woodblock printing appeared by the end of the 7th century. In Korea, an example of woodblock printing from the eighth century was discovered in 1966: a copy of the Buddhist Pure Light Dharani Sutra, found in the Shakyamuni Pagoda of Bulguk Temple in Gyeongju, a Silla-dynasty pagoda repaired in AD 751. The copy is undated but must have been created before the pagoda's reconstruction in AD 751, and it is estimated to date from no later than AD 704.

By the ninth century, printing on paper had taken off, and the first extant complete printed book bearing a date is the Diamond Sutra (British Library) of 868. By the tenth century, 400,000 copies of some sutras and pictures had been printed, and the Confucian classics were in print. A skilled printer could print up to 2,000 double-page sheets per day.

Printing spread early to Korea and Japan, which also used Chinese logograms, and the technique was also used in Turpan and Vietnam with a number of other scripts. From there it spread to Persia and Russia, and it was transmitted to Europe by around 1400, where it was used on paper for old master prints and playing cards.

In the Middle East

Block printing, called tarsh in Arabic, developed in Arabic Egypt during the ninth and tenth centuries, mostly for prayers and amulets. There is some evidence to suggest that these print blocks were made from non-wood materials, possibly tin, lead, or clay; the techniques employed are uncertain. Block printing later went out of use during the Timurid Renaissance. In Egypt the technique was embraced for reproducing texts on paper strips, supplied in multiple copies to meet demand.

In Europe

Block printing first came to Europe as a method for printing on cloth, where it was common by 1300. Images printed on cloth for religious purposes could be quite large and elaborate. When paper became relatively easily available, around 1400, the technique transferred very quickly to small woodcut religious images and playing cards printed on paper. These prints were produced in very large numbers from about 1425 onward.

Around the mid-fifteenth century, block-books (woodcut books with both text and images, usually carved in the same block) emerged as a cheaper alternative to manuscripts and books printed with movable type. These were all short, heavily illustrated works, the bestsellers of the day, repeated in many different block-book versions; the Ars moriendi and the Biblia pauperum were the most common. There is still some controversy among scholars as to whether their introduction preceded or (the majority view) followed the introduction of movable type, with estimated dates ranging between about 1440 and 1460.

Movable-type printing

Movable type is the system of printing and typography using movable pieces of metal type, made by casting from matrices struck by letterpunches. Movable type allowed for much more flexible processes than hand copying or block printing.

Around 1040, the first known movable type system was created in China by Bi Sheng out of porcelain. Bi Sheng's clay type broke easily, but by 1298 Wang Zhen had carved a more durable type from wood. Wang also developed a complex system of revolving tables and number-association with written Chinese characters that made typesetting and printing more efficient. Still, the main method in use there remained woodblock printing (xylography), which "proved to be cheaper and more efficient for printing Chinese, with its thousands of characters".

Copper movable type printing originated in China at the beginning of the 12th century. It was used in large-scale printing of paper money issued by the Northern Song dynasty. Movable type spread to Korea during the Goryeo dynasty.

Around 1230, Koreans invented a movable metal type printing system using bronze. The Jikji, published in 1377, is the earliest known book printed with metal movable type. Type-casting was adapted from the method of casting coins: the character was cut in beech wood, pressed into soft clay to form a mould, bronze was poured into the mould, and finally the type was polished. Eastern metal movable type spread to Europe between the late 14th and early 15th centuries. The Korean form of metal movable type was described by the French scholar Henri-Jean Martin as "extremely similar to Gutenberg's". Historians Frances Gies and Joseph Gies claimed that "the Asian priority of invention of movable type is now firmly established, and that Chinese-Korean technique, or a report of it, traveled westward is almost certain."

The printing press

Around 1450, Johannes Gutenberg introduced the first movable type printing system in Europe. He advanced innovations in casting type based on a matrix and hand mould, adaptations to the screw-press, the use of an oil-based ink, and the creation of a softer and more absorbent paper. Gutenberg was the first to create his type pieces from an alloy of lead, tin, antimony, copper and bismuth – the same components still used today. Johannes Gutenberg started work on his printing press around 1436, in partnership with Andreas Dritzehen – whom he had previously instructed in gem-cutting – and Andreas Heilmann, the owner of a paper mill.

Compared to woodblock printing, movable type page setting and printing using a press was faster and more durable. Also, the metal type pieces were sturdier and the lettering more uniform, leading to typography and fonts. The high quality and relatively low price of the Gutenberg Bible (1455) established the superiority of movable type for Western languages. The printing press rapidly spread across Europe, leading up to the Renaissance, and later all around the world.

Time Life magazine called Gutenberg's innovations in movable type printing the most important invention of the second millennium.

The steam-powered rotary printing press, invented in 1843 in the United States by Richard M. Hoe, ultimately allowed millions of copies of a page to be printed in a single day. Mass production of printed works flourished after the transition to rolled paper, as continuous feed allowed the presses to run at a much faster pace. Hoe's original design operated at up to 2,000 revolutions per hour, each revolution depositing 4 page images, for a throughput of 8,000 pages per hour. By 1891, The New York World and Philadelphia Item were operating presses producing either 90,000 4-page sheets per hour or 48,000 8-page sheets.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2016 2024-01-03 17:29:10

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2018) Scalp

Gist

Scalp is the skin on the top of your head that is under your hair.

Details

The scalp is the anatomical area bordered by the face at the front, and by the neck at the sides and back.

Structure

The scalp is usually described as having five layers, which can conveniently be remembered with the mnemonic "SCALP":

S: The skin on the head from which head hair grows. It contains numerous sebaceous glands and hair follicles.
C: Connective tissue. A dense subcutaneous layer of fat and fibrous tissue that lies beneath the skin, containing the nerves and vessels of the scalp.
A: The aponeurosis called epicranial aponeurosis (or galea aponeurotica) is the next layer. It is a tough layer of dense fibrous tissue which runs from the frontalis muscle anteriorly to the occipitalis posteriorly.
L: The loose areolar connective tissue layer provides an easy plane of separation between the upper three layers and the pericranium. In scalping, the scalp is torn off through this layer. It also provides a plane of access in craniofacial surgery and neurosurgery. This layer is sometimes referred to as the "danger zone" because of the ease with which infectious agents can spread through it to emissary veins, which then drain into the cranium. The loose areolar tissue in this layer is made up of randomly arranged collagen I bundles and collagen III; it is also rich in glycosaminoglycans (GAGs) and consists of more matrix than fibers. This layer allows the more superficial layers of the scalp to shift about in relation to the pericranium.
P: The pericranium is the periosteum of the skull bones and provides nutrition to the bone and the capacity for repair. It may be lifted from the bone to allow removal of bone windows (craniotomy).

The clinically important layer is the aponeurosis. Scalp lacerations through this layer mean that the "anchoring" of the superficial layers is lost, and gaping of the wound occurs, which requires suturing. This can be achieved with simple or vertical mattress sutures using a non-absorbable material, which are subsequently removed at around days 7–10.

Blood supply

The blood supply of the scalp is via five pairs of arteries, three from the external carotid and two from the internal carotid:

* internal carotid
** the supratrochlear artery to the midline forehead. The supratrochlear artery is a branch of the ophthalmic branch of the internal carotid artery.
** the supraorbital artery to the lateral forehead and scalp as far up as the vertex. The supraorbital artery is a branch of the ophthalmic branch of the internal carotid artery.
* external carotid
** the superficial temporal artery gives off frontal and parietal branches to supply much of the scalp
** the occipital artery which runs posteriorly to supply much of the posterior aspect of the scalp
** the posterior auricular artery, a branch of the external carotid artery, ascends behind the auricle to supply the scalp above and behind the auricle.

Because the walls of the blood vessels are firmly attached to the fibrous tissue of the superficial fascial layer, cut ends of vessels here do not readily retract; even a small scalp wound may bleed profusely.

Venous drainage

The veins of the scalp accompany the arteries and thus have similar names, e.g., the supratrochlear and supraorbital veins, which unite at the medial angle of the eye to form the angular vein, which continues as the facial vein.

The superficial temporal vein descends in front of the tragus, enters the parotid gland, and then joins the maxillary vein to form the retromandibular vein. Its anterior part unites with the facial vein to form the common facial vein, which drains into the jugular vein and ultimately the subclavian vein. The occipital vein terminates in the suboccipital plexus.

There are other veins, like the emissary vein and frontal diploic vein, which also contribute to the venous drainage.

Nerve supply

Innervation refers to the sensory and motor nerve supply of the scalp. The scalp is innervated by the following:

* Supratrochlear nerve and the supraorbital nerve from the ophthalmic division of the trigeminal nerve
* Greater occipital nerve (C2) posteriorly up to the vertex
* Lesser occipital nerve (C2) behind the ear
* Zygomaticotemporal nerve from the maxillary division of the trigeminal nerve supplying the hairless temple
* Auriculotemporal nerve from the mandibular division of the trigeminal nerve

The innervation of scalp can be remembered using the mnemonic "Z-GLASS" for Zygomaticotemporal nerve, Greater occipital nerve, Lesser occipital nerve, Auriculotemporal nerve, Supratrochlear nerve, and Supraorbital nerve.

The motor innervation of the scalp comes from the occipitofrontalis muscle, which is split into two parts: the frontal belly (frontalis muscle), supplied by the temporal branch of the facial nerve, and the occipital belly (occipitalis muscle), supplied by the posterior auricular branch of the facial nerve.

Lymphatic drainage

Lymphatic channels from the posterior half of the scalp drain to occipital and posterior auricular nodes. Lymphatic channels from the anterior half drain to the parotid nodes. The lymph eventually reaches the submandibular and deep cervical nodes.

Clinical significance:

Infection

The 'danger area of the scalp' is the area of loose connective tissue. This is because pus and blood spread easily within it, and can pass into the cranial cavity along the emissary veins. Therefore, infection can spread from the scalp to the meninges, which could lead to meningitis.

Hair transplantation

All the current hair transplantation techniques utilize the patient's existing hair. The aim of the surgical procedure is to use such hair as efficiently as possible. The right candidates for this type of surgery are individuals who still have healthy hair on the sides and the back of the head in order that hair for the transplant may be harvested from those areas. Different techniques are utilized in order to obtain the desired cosmetic results; factors considered may include hair color, texture, curliness, etc.

The most utilized technique is the one known as micro grafting because it produces naturalistic results. It is akin to follicular unit extraction, although less advanced. A knife with multiple blades is used to remove tissue from donor areas. The removed tissue is then fragmented into smaller chunks under direct vision inspection (i.e., without a microscope).

Disease

The scalp is a common site for the development of tumours including:

* Actinic keratosis
* Basal-cell carcinoma
* Epidermoid cyst
* Merkel-cell carcinoma
* Pilar cyst
* Squamous cell carcinoma

Scalp conditions

* Dandruff – A common problem caused by excessive shedding of dead skin cells from the scalp
* Head lice
* Seborrhoeic dermatitis – a skin disorder causing scaly, flaky, itchy, red skin
* Cradle cap – a form of seborrhoeic dermatitis which occurs in newborns
* Tinea capitis – ringworm
* Favus
* Cutis verticis gyrata – a descriptive term for a rare deformity of the scalp

Society and culture

The scalp plays an important role in the aesthetics of the face. Androgenic alopecia, or male pattern hair loss, is a common cause of concern to men. It may be treated with varying rates of success by medication (e.g. finasteride, minoxidil) or hair transplantation. If the scalp is heavy and loose, a common change with ageing, the forehead may become low, heavy, and deeply lined. The brow lift procedure aims to address these concerns.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.


#2017 2024-01-04 17:27:46

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2019) Hypnotherapy

Gist

Hypnosis, also called hypnotherapy, is a state of deep relaxation and focused concentration. It's a type of mind-body medicine. A trained and certified hypnotist or hypnotherapist guides you into this deep state of focus and relaxation with verbal cues, repetition and imagery.

Summary

Hypnotherapy (sometimes called hypnotic suggestion) is a therapeutic practice that uses guided hypnosis to help a client reach a trance-like state of focus, concentration, diminished peripheral awareness, and heightened suggestibility. This state is similar to being completely absorbed in a book, movie, music, or even one's own thoughts or meditations. In it, a person is unusually responsive to an idea or image, but they are not under anyone’s “control.” Instead, a trained clinical hypnotherapist can help clients in this state relax and turn their attention inward to discover and utilize resources within themselves that can help them achieve desired behavioral changes or better manage pain or other physical concerns. Eventually, a client learns how to address their states of awareness on their own and in doing so, gain greater control of their physical and psychological responses.

The American Psychological Association and American Medical Association have recognized hypnotherapy as a valid procedure since 1958, and the National Institutes of Health (NIH) has recommended it as a treatment for chronic pain since 1995.

When It's Used

Hypnotherapy is an adjunct form of therapy, meaning it is typically used alongside other forms of psychological or medical treatment, such as traditional modes of talk therapy. But hypnotherapy can have many applications as a part of treatment for anxiety, stress, panic attacks, post-traumatic stress disorder, phobias, substance abuse including tobacco, sexual dysfunction, undesirable compulsive behaviors, mood disorders, and bad habits. It can be used to help improve sleep or to address learning disorders, communication issues, and relationship challenges.

Hypnotherapy can aid in pain management and help to resolve medical concerns such as digestive disorders, skin conditions, symptoms of autoimmune disorders, and the gastrointestinal side effects of pregnancy or chemotherapy. Research has found that surgical patients and burn victims can achieve reduced recovery time, anxiety, and pain through hypnotherapy. It can also be used by dentists to help patients control their fears before procedures or to treat teeth grinding and other oral conditions.

What to Expect

Although there are different techniques, clinical hypnotherapy is typically performed in a calm, therapeutic environment. The therapist will guide you into a relaxed, focused state, typically through the use of mental imagery and soothing verbal repetition. In this state, you are highly responsive to constructive, transformative messages, and the therapist may guide you through recognizing a problem, releasing problematic thoughts or responses, and considering and ideally accepting suggested alternate responses before returning to normal awareness and reflecting on the suggestions together.

Is someone undergoing hypnotherapy unconscious?

No. Unlike dramatic portrayals of hypnosis in movies, on TV, or on stage, you will not be unconscious, asleep, or in any way lose control of yourself or your thoughts. You will hear the therapist’s suggestions, but it is up to you to decide whether or not to act on them.

Can hypnotherapy be dangerous?

In the real world, the practice of hypnotherapy is not as scary, simple, or powerful as its pop culture depictions. Hypnosis by a trained therapist is a safe alternative or supplement to medication. It is not a form of mind control—which is impossible to achieve. Clients remain completely awake throughout hypnotherapy sessions and should be able to fully recall their experiences. They also fully retain free will. If a therapist’s “post-hypnotic suggestion” is effective, it’s because they are suggesting something the client wants to achieve and, in their relaxed state, that individual is better able to envision and commit to a suggested positive path to change.

Negative side effects are rare but can include headaches, dizziness, drowsiness, and feelings of anxiety or distress. In rare cases, hypnotherapy could lead to the unconscious construction of “false memories,” also known as confabulations.

How It Works

Hypnosis itself is not a form of psychotherapy, but a tool that helps clinicians facilitate various types of therapies and medical or psychological treatments. Trained health care providers certified in clinical hypnosis can decide, with their patient, if hypnosis should be used along with other treatments. As with psychotherapy, the length of hypnosis treatment varies, depending on the complexity of the problem.

Is being hypnotizable a sign of mental weakness?

No; in fact, in the context of hypnotherapy, greater “hypnotizability” is a distinct advantage. While this trait varies widely among individuals, it is not the only factor that contributes to the success of the practice. As with any other therapeutic tool or approach, hypnotism always works best when the client is a willing participant. Such openness is important, because even people with high levels of hypnotizability may require multiple sessions to begin to see progress. Children, however, are generally more readily hypnotizable than adults and may respond to hypnotherapy within just a few visits.

Are changes achieved through hypnotism real and sustainable?

Yes, they are as real as any other change achieved through talk therapy techniques in that they are fundamentally cases of mind over matter. Comparable to someone experiencing the benefits of the placebo effect, the successful hypnotherapy client is self-healing: The physiological and neurological changes achieved may not have come from a medication but they are just as real.

Details

Hypnotherapy, also known as hypnotic medicine, is the use of hypnosis in psychotherapy. Proponents view hypnotherapy as a helpful adjunct therapy, with additive effects when treating psychological disorders such as depression, anxiety, eating disorders, sleep disorders, compulsive gambling, phobias, and post-traumatic stress, alongside cognitive therapies. The effectiveness of hypnotherapy is not adequately supported by scientific evidence, and owing to this lack of evidence of efficacy, it is regarded as a type of alternative medicine by reputable medical organisations such as the National Health Service.

Definition

The United States Department of Labor's Dictionary of Occupational Titles (DOT) describes the job of the hypnotherapist:

"Induces hypnotic state in client to increase motivation or alter behavior patterns: Consults with client to determine nature of problem. Prepares client to enter hypnotic state by explaining how hypnosis works and what client will experience. Tests subject to determine degree of physical and emotional suggestibility. Induces hypnotic state in client, using individualized methods and techniques of hypnosis based on interpretation of test results and analysis of client's problem. May train client in self-hypnosis conditioning."

Traditional

The form of hypnotherapy practiced by most Victorian hypnotists, including James Braid and Hippolyte Bernheim, mainly employed direct suggestion of symptom removal, with some use of therapeutic relaxation and occasionally aversion to alcohol, drugs, etc.

Ericksonian

In the 1950s, Milton H. Erickson developed a radically different approach to hypnotism, which has subsequently become known as "Ericksonian hypnotherapy" or "Neo-Ericksonian hypnotherapy." Based on his belief that dysfunctional behaviors were defined by social tension, Erickson co-opted the subject's behavior to establish rapport, a strategy he termed "utilization." Once rapport was established, he used an informal conversational approach to direct awareness. His methods included complex language patterns and client-specific therapeutic strategies (reflecting the nature of utilization). He claimed to have developed ways to suggest behavior changes during apparently ordinary conversation.

This divergence from tradition led some, including Andre Weitzenhoffer, to dispute whether Erickson was right to label his approach "hypnosis" at all. Erickson's foundational paper, however, considers hypnosis as a mental state in which specific types of "work" may be done, rather than a technique of induction.

The founders of neuro-linguistic programming (NLP), a method somewhat similar in some regards to some versions of hypnotherapy, claimed that they had modelled the work of Erickson extensively and assimilated it into their approach. Weitzenhoffer disputed whether NLP bears any genuine resemblance to Erickson's work.

Solution-focused

In the 2000s, hypnotherapists began to combine aspects of solution-focused brief therapy (SFBT) with Ericksonian hypnotherapy to produce therapy that was goal-focused (what the client wanted to achieve) rather than the more traditional problem-focused approach (spending time discussing the issues that brought the client to seek help). A solution-focused hypnotherapy session may include techniques from NLP.

Cognitive/behavioral

Cognitive behavioral hypnotherapy (CBH) is an integrated psychological therapy employing clinical hypnosis and cognitive behavioral therapy (CBT). The use of CBT in conjunction with hypnotherapy may result in greater treatment effectiveness. A meta-analysis of eight studies revealed "a 70% greater improvement" for patients undergoing integrated treatment compared to those using CBT alone.

In 1974, Theodore X. Barber and his colleagues published a review of the research which argued, following the earlier social psychology of Theodore R. Sarbin, that hypnotism was better understood not as a "special state" but as the result of normal psychological variables, such as active imagination, expectation, appropriate attitudes, and motivation. Barber introduced the term "cognitive-behavioral" to describe the nonstate theory of hypnotism, and discussed its application to behavior therapy.

The growing application of cognitive and behavioral psychological theories and concepts to the explanation of hypnosis paved the way for a closer integration of hypnotherapy with various cognitive and behavioral therapies.

Many cognitive and behavioral therapies were themselves originally influenced by older hypnotherapy techniques, e.g., the systematic desensitisation of Joseph Wolpe, the cardinal technique of early behavior therapy, was originally called "hypnotic desensitisation" and derived from the Medical Hypnosis (1948) of Lewis Wolberg.

Curative

Dr. Peter Marshall, author of A Handbook of Hypnotherapy, devised the Trance Theory of Mental Illness, which holds that people suffering from depression, or certain other kinds of neuroses, are already living in a trance. On this view, the hypnotherapist does not need to induce trance but rather to make clients understand this and lead them out of it.

Mindful

Mindful hypnotherapy is therapy that incorporates mindfulness and hypnotherapy. A pilot study was made at Baylor University, Texas, and published in the International Journal of Clinical and Experimental Hypnosis. Dr. Gary Elkins, director of the Mind-Body Medicine Research Laboratory at Baylor University called it "a valuable option for treating anxiety and stress reduction” and "an innovative mind-body therapy". The study showed a decrease in stress and an increase in mindfulness.

Relationship to scientific medicine

Hypnotherapy practitioners occasionally attract the attention of mainstream medicine. Attempts to instill academic rigor have been frustrated by the complexity of client suggestibility, which has social and cultural aspects, including the reputation of the practitioner. Results achieved at one time and in one center of study have not been reliably reproduced in later generations.

In the 1700s Anton Mesmer offered pseudoscientific justification for his practices, but his rationalizations were debunked by a commission that included Benjamin Franklin.

Additional Information

Hypnosis is a special psychological state with certain physiological attributes, resembling sleep only superficially and marked by a functioning of the individual at a level of awareness other than the ordinary conscious state. This state is characterized by a degree of increased receptiveness and responsiveness in which inner experiential perceptions are given as much significance as is generally given only to external reality.

The hypnotic state

The hypnotized individual appears to heed only the communications of the hypnotist and typically responds in an uncritical, automatic fashion while ignoring all aspects of the environment other than those pointed out by the hypnotist. In a hypnotic state an individual tends to see, feel, smell, and otherwise perceive in accordance with the hypnotist’s suggestions, even though these suggestions may be in apparent contradiction to the actual stimuli present in the environment. The effects of hypnosis are not limited to sensory change; even the subject’s memory and awareness of self may be altered by suggestion, and the effects of the suggestions may be extended (posthypnotically) into the subject’s subsequent waking activity.

History and early research

The history of hypnosis is as ancient as that of sorcery, magic, and medicine; indeed, hypnosis has been used as a method in all three. Its scientific history began in the latter part of the 18th century with Franz Mesmer, a German physician who used hypnosis in the treatment of patients in Vienna and Paris. Because of his mistaken belief that hypnotism made use of an occult force (which he termed “animal magnetism”) that flowed through the hypnotist into the subject, Mesmer was soon discredited; but Mesmer’s method—named mesmerism after its creator—continued to interest medical practitioners. A number of clinicians made use of it without fully understanding its nature until the middle of the 19th century, when the English physician James Braid studied the phenomenon and coined the terms hypnotism and hypnosis, after the Greek god of sleep, Hypnos.

Hypnosis attracted widespread scientific interest in the 1880s. Ambroise-Auguste Liébeault, an obscure French country physician who used mesmeric techniques, drew the support of Hippolyte Bernheim, a professor of medicine at Strasbourg. Independently they had written that hypnosis involved no physical forces and no physiological processes but was a combination of psychologically mediated responses to suggestions. During a visit to France at about the same time, Austrian physician Sigmund Freud was impressed by the therapeutic potential of hypnosis for neurotic disorders. On his return to Vienna, he used hypnosis to help neurotics recall disturbing events that they had apparently forgotten. As he began to develop his system of psychoanalysis, however, theoretical considerations—as well as the difficulty he encountered in hypnotizing some patients—led Freud to discard hypnosis in favour of free association. (Generally psychoanalysts have come to view hypnosis as merely an adjunct to the free-associative techniques used in psychoanalytic practice.)

Despite Freud’s influential adoption and then rejection of hypnosis, some use was made of the technique in the psychoanalytic treatment of soldiers who had experienced combat neuroses during World Wars I and II. Hypnosis subsequently acquired other limited uses in medicine. Various researchers have put forth differing theories of what hypnosis is and how it might be understood, but there is still no generally accepted explanatory theory for the phenomenon.

Applications of hypnosis

The techniques used to induce hypnosis share common features. The most important consideration is that the person to be hypnotized (the subject) be willing and cooperative and trust the hypnotist. Subjects are invited to relax in comfort and to fix their gaze on some object. The hypnotist continues to suggest, usually in a low, quiet voice, that the subject’s relaxation will increase and that his or her eyes will grow tired. Soon the subject’s eyes do show signs of fatigue, and the hypnotist suggests that they will close. Once the subject allows the eyes to close, he or she begins to show signs of profound relaxation, such as limpness and deep breathing, and has entered the state of hypnotic trance. A person will be more responsive to hypnosis when he or she believes that hypnosis is possible, that the hypnotist is competent and trustworthy, and that the undertaking is safe, appropriate, and congruent with the subject’s wishes. Therefore, induction is generally preceded by the establishment of suitable rapport between subject and hypnotist.

Ordinary inductions of hypnosis begin with simple, noncontroversial suggestions made by the hypnotist that will almost inevitably be accepted by all subjects. At this stage neither subject nor hypnotist can readily tell whether the subject’s behaviour constitutes a hypnotic response or mere cooperation. Then, gradually, suggestions are given that demand increasing distortion of the individual’s perception or memory—e.g., that it is difficult or impossible for the subject to open his or her eyes. Other methods of induction may also be used. The process may take considerable time or only a few seconds.

The resulting hypnotic phenomena differ markedly from one subject to another and from one trance to another, depending upon the purposes to be served and the depth of the trance. Hypnosis is a phenomenon of degrees, ranging from light to profound trance states but with no fixed constancy. Ordinarily, however, all trance behaviour is characterized by a simplicity, a directness, and a literalness of understanding, action, and emotional response that are suggestive of childhood. The surprising abilities displayed by some hypnotized persons seem to derive partly from the restriction of their attention to the task or situation at hand and their consequent freedom from the ordinary conscious tendency to orient constantly to distracting, even irrelevant, events.

The central phenomenon of hypnosis is suggestibility, a state of greatly enhanced receptiveness and responsiveness to suggestions and stimuli presented by the hypnotist. Appropriate suggestions by the hypnotist can induce a remarkably wide range of psychological, sensory, and motor responses from persons who are deeply hypnotized. By acceptance of and response to suggestions, the subject can be induced to behave as if deaf, blind, paralyzed, hallucinated, delusional, amnesic, or impervious to pain or to uncomfortable body postures; in addition, the subject can display various behavioral responses that he or she regards as a reasonable or desirable response to the situation that has been suggested by the hypnotist.

One fascinating manifestation that can be elicited from a subject who has been in a hypnotic trance is that of posthypnotic suggestion and behaviour; that is, the subject’s execution, at some later time, of instructions and suggestions that were given to him while he was in a trance. With adequate amnesia induced during the trance state, the individual will not be aware of the source of his impulse to perform the instructed act. Posthypnotic suggestion, however, is not a particularly powerful means for controlling behaviour when compared with a person’s conscious willingness to perform actions.

Many subjects seem unable to recall events that occurred while they were in deep hypnosis. This “posthypnotic amnesia” can result either spontaneously from deep hypnosis or from a suggestion by the hypnotist while the subject is in a trance state. The amnesia may include all the events of the trance state or only selected items, or it may be manifested in connection with matters unrelated to the trance. Posthypnotic amnesia may be successfully removed by appropriate hypnotic suggestions.

Hypnosis has been officially endorsed as a therapeutic method by medical, psychiatric, dental, and psychological associations throughout the world. It has been found most useful in preparing people for anesthesia, enhancing the drug response, and reducing the required dosage. In childbirth it is particularly helpful, because it can help to alleviate the mother’s discomfort while avoiding anesthetics that could impair the child’s physiological function. Hypnosis has often been used in attempts to stop smoking, and it is highly regarded in the management of otherwise intractable pain, including that of terminal cancer. It is valuable in reducing the common fear of dental procedures; in fact, the very people whom dentists find most difficult to treat frequently respond best to hypnotic suggestion. In the area of psychosomatic medicine, hypnosis has been used in a variety of ways. Patients have been trained to relax and to carry out, in the absence of the hypnotist, exercises that have had salutary effects on some forms of high blood pressure, headaches, and functional disorders.

Although inducing hypnosis requires little training and no particular skill, hypnosis used in the context of medical treatment can be damaging in the hands of individuals who lack the competence to treat such problems without it. On the other hand, hypnosis has been repeatedly condemned by various medical associations when it is used purely for purposes of public entertainment, owing to the danger of adverse posthypnotic reactions to the procedure. Indeed, in this regard several nations have banned or limited commercial or other public displays of hypnosis. In addition, many courts of law refuse to accept testimony from people who have been hypnotized for purposes of “recovering” memories, because such techniques can lead to confusion between imagination and memory.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2018 2024-01-05 15:52:37

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2020) Gross Domestic Product

Gist

GDP stands for "Gross Domestic Product" and represents the total monetary value of all final goods and services produced (and sold on the market) within a country during a period of time (typically 1 year).

Purpose: GDP is the most commonly used measure of economic activity.

History: The first basic concept of GDP was devised at the end of the 17th century. The modern concept was developed by the American economist Simon Kuznets in 1934 and adopted as the main measure of a country's economy at the Bretton Woods conference in 1944.

Summary

Gross domestic product (GDP) is the total market value of the goods and services produced by a country’s economy during a specified period of time. It includes all final goods and services—that is, those that are produced by the economic agents located in that country regardless of their ownership and that are not resold in any form. It is used throughout the world as the main measure of output and economic activity.

In economics, the final users of goods and services are divided into three main groups: households, businesses, and the government. One way gross domestic product (GDP) is calculated—known as the expenditure approach—is by adding the expenditures made by those three groups of users. Accordingly, GDP is defined by the following formula:
GDP = Consumption + Investment + Government Spending + Net Exports
or more succinctly as
GDP = C + I + G + NX
where consumption (C) represents private-consumption expenditures by households and nonprofit organizations, investment (I) refers to expenditures on capital goods by businesses and home purchases by households, government spending (G) denotes expenditures on goods and services by the government, and net exports (NX) represents a nation’s exports minus its imports.

The expenditure approach is so called because each variable on the right-hand side of the equation denotes expenditure by a different group in the economy. The idea behind the expenditure approach is that the output that is produced in an economy has to be consumed by final users, which are either households, businesses, or the government. Therefore, the sum of all the expenditures by these different groups should equal total output—i.e., GDP.
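As a quick illustration of the expenditure approach, the short Python sketch below simply adds the four components; the figures are invented for the example and do not describe any real economy.

def gdp_expenditure(c, i, g, exports, imports):
    """Expenditure-approach GDP: C + I + G + NX, where NX = exports - imports."""
    return c + i + g + (exports - imports)

# Invented figures in billions, for illustration only:
print(gdp_expenditure(c=14_000, i=3_500, g=3_800, exports=2_500, imports=3_000))
# -> 20800 (i.e., 14,000 + 3,500 + 3,800 - 500)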

Each country prepares and publishes its own GDP data regularly. In addition, international organizations such as the World Bank and the International Monetary Fund (IMF) periodically publish and maintain historical GDP data for many countries. In the United States, GDP data are published quarterly by the Bureau of Economic Analysis (BEA) of the U.S. Department of Commerce. GDP and its components are part of the National Income and Product Accounts data set that the BEA updates on a regular basis.

When an economy experiences several consecutive quarters of positive GDP growth, it is considered to be in an expansion (also called an economic boom). Conversely, when it experiences two or more consecutive quarters of negative GDP growth, the economy is generally considered to be in a recession (also called an economic bust). In the United States, the Business Cycle Dating Committee of the National Bureau of Economic Research is the authority that announces and keeps track of official expansions and recessions, also known as the business cycle. A separate field within economics, the economics of growth (see economics: Growth and development), specializes in the study of the characteristics and causes of business cycles and long-term growth patterns. Researchers in that field try to develop models that explain the fluctuations in economic activity, as measured primarily by changes in GDP.
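The informal "two or more consecutive quarters of negative growth" rule mentioned above can be stated precisely in code. The Python sketch below is a toy version of that rule of thumb only; as noted, official U.S. dating is done by the NBER using broader criteria.

def in_recession(quarterly_gdp):
    """True if the last two quarter-on-quarter growth rates are both negative."""
    growth = [(b - a) / a for a, b in zip(quarterly_gdp, quarterly_gdp[1:])]
    return len(growth) >= 2 and growth[-1] < 0 and growth[-2] < 0

# Invented GDP levels: one quarter of growth, then two quarters of contraction:
print(in_recession([100.0, 101.2, 100.6, 99.9]))   # -> True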

GDP per capita (also called GDP per person) is used as a measure of a country’s standard of living. A country with a higher level of GDP per capita is considered to be better off in economic terms than a country with a lower level.

GDP differs from gross national product (GNP), which includes all final goods and services produced by resources owned by that country’s residents, whether located in the country or elsewhere. In 1991 the United States substituted GDP for GNP as its main measure of economic output.

Before the creation of the Human Development Index (HDI), a country’s level of development was typically measured using economic statistics, such as GDP, GNP, and GNI (Gross National Income). The United Nations, however, believed that economic measures alone were inadequate for assessing development because they did not always reflect the quality of life of a country’s average citizens. It thereby introduced the HDI in 1990 to take other factors into account and provide a more well-rounded evaluation of human development.

Details

Gross domestic product (GDP) is a monetary measure of the market value of all the final goods and services produced in a specific time period by a country or countries. GDP is most often used by the government of a single country to measure its economic health. Due to its complex and subjective nature, this measure is often revised before being considered a reliable indicator.

GDP definitions are maintained by several national and international economic organizations. The Organisation for Economic Co-operation and Development (OECD) defines GDP as "an aggregate measure of production equal to the sum of the gross values added of all resident and institutional units engaged in production and services (plus any taxes, and minus any subsidies, on products not included in the value of their outputs)". An IMF publication states that, "GDP measures the monetary value of final goods and services—that are bought by the final user—produced in a country in a given period (say a quarter or a year)."

GDP (nominal) per capita does not, however, reflect differences in the cost of living and the inflation rates of the countries; therefore, using a basis of GDP per capita at purchasing power parity (PPP) may be more useful when comparing living standards between nations, while nominal GDP is more useful for comparing national economies on the international market. Total GDP can also be broken down into the contribution of each industry or sector of the economy. The ratio of GDP to the total population of the region is the per capita GDP (also called the Mean Standard of Living).

GDP is often used as a metric for international comparisons as well as a broad measure of economic progress. It is often considered to be the world's most powerful statistical indicator of national development and progress. However, critics of the growth imperative often argue that GDP measures were never intended to measure progress and leave out key externalities, such as resource extraction, environmental impact and unpaid domestic work. Critics frequently propose alternative economic models, such as doughnut economics, which use other measures of success, or alternative indicators, such as the OECD's Better Life Index, as better approaches to measuring the effect of the economy on human development and well-being.

For example, the GDP of Germany in 2022 was 3.9 trillion euros, which included 390 billion euros of taxes like the value-added tax.

History

William Petty devised an early concept of GDP in order to calculate the tax burden and to argue that landlords were unfairly taxed during the Anglo-Dutch wars of 1652–74. Charles Davenant developed the method further in 1695. The modern concept of GDP was first developed by Simon Kuznets for a 1934 U.S. Congress report, in which he warned against its use as a measure of welfare. After the Bretton Woods Conference in 1944, GDP became the main tool for measuring a country's economy. At that time gross national product (GNP) was the preferred estimate, which differed from GDP in that it measured production by a country's citizens at home and abroad rather than by its 'resident institutional units'. The switch from GNP to GDP in the United States occurred in 1991. The role that measurements of GDP played in World War II was crucial to the subsequent political acceptance of GDP values as indicators of national development and progress. A crucial role was played here by the U.S. Department of Commerce under Milton Gilbert, where ideas from Kuznets were embedded into institutions.

The history of the concept of GDP should be distinguished from the history of changes in many ways of estimating it. The value added by firms is relatively easy to calculate from their accounts, but the value added by the public sector, by financial industries, and by intangible asset creation is more complex. These activities are increasingly important in developed economies, and the international conventions governing their estimation and their inclusion or exclusion in GDP regularly change in an attempt to keep up with industrial advances. In the words of one academic economist, "The actual number for GDP is, therefore, the product of a vast patchwork of statistics and a complicated set of processes carried out on the raw data to fit them to the conceptual framework."

China officially adopted GDP in 1993 as its indicator of economic performance. Previously, China had relied on a Marxist-inspired national accounting system.

Determining gross domestic product (GDP)

GDP can be determined in three ways, all of which should, theoretically, give the same result: the production (or output or value-added) approach, the income approach, and the expenditure approach. It is representative of the total output and income within an economy.

The most direct of the three is the production approach, which sums up the outputs of every class of enterprise to arrive at the total. The expenditure approach works on the principle that all of the products must be bought by somebody, therefore the value of the total product must be equal to people's total expenditures in buying things. The income approach works on the principle that the incomes of the productive factors ("producers", colloquially) must be equal to the value of their product, and determines GDP by finding the sum of all producers' incomes.

Additional Information

GDP - or Gross Domestic Product - is an important tool for judging how well, or badly, an economy is doing.

It lets the government work out how much it can afford to tax and spend, and helps businesses decide whether to expand and hire more people.

What is GDP?

GDP is a measure of all the economic activity of companies, governments and individuals in a country.

In the UK, new GDP figures are published by the Office for National Statistics (ONS) every month, but the quarterly figures - covering three months at a time - are considered more important.

When an economy is growing, each quarterly GDP figure is slightly bigger than that of the previous three-month period.

Most economists, politicians and businesses like to see a steadily rising GDP because it usually means people are spending more, extra jobs are created, more tax is paid and workers get better pay rises.

When GDP is falling, it means the economy is shrinking - which is bad news for businesses and workers.

If GDP falls for two quarters in a row, that is known as a recession, which can lead to pay freezes and job losses.

The Covid pandemic caused the most severe recession seen in more than 300 years, which forced the government to borrow hundreds of billions of pounds to support the economy.

Although higher prices have squeezed consumers' budgets throughout 2023, the UK has avoided going into recession so far, although growth has been flat.

In 2022, the UK's GDP was worth £2.2tn, but we tend to concentrate on how much it has grown rather than the total figure.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2019 2024-01-06 16:22:15

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2021) Gross National Product

Gist

Gross National Product (GNP) is a measure of a country's economic output that considers the value of goods and services produced by its citizens, regardless of their location. In simple terms, GNP calculates the total value of all products and services created by a country's residents, whether they're inside or outside the country's borders.

GNP is the sum of the market values of all final goods and services produced by a country's residents within a specific time period, typically a year, including income earned by citizens working abroad but excluding income earned by non-residents within the country.

Summary

Gross national product (GNP) refers to the total value of all the goods and services produced by the residents and businesses of a country, irrespective of the location of production. GNP takes into account the investments made by the businesses and residents of the country, whether they are located inside or outside the country.

Gross national product (GNP) is the total value of all the final goods and services made by a nation’s economy in a specific time (usually a year). GNP is different from net national product, which considers depreciation and the consumption of capital.

GNP is similar to gross domestic product (GDP), except that GDP doesn’t include the income made by a nation’s residents from investments abroad. In 1991, the United States started using GDP instead of GNP as its main way to measure economic output. Because GDP omits international income from the total, it allows the fiscal and monetary policy makers to more accurately track total output of goods and services from year to year, as well as compare output among nations.

Details

Gross National Product (GNP) is a measure of the value of all goods and services produced by a country’s residents and businesses. It estimates the value of the final products and services manufactured by a country’s residents, regardless of the production location.


GNP is calculated by adding personal consumption expenditures, government expenditures, private domestic investments, net exports, and all income earned by residents in foreign countries, minus the income earned by foreign residents within the domestic economy. The net exports are calculated by subtracting the value of imports from the value of the country’s exports.

Unlike Gross Domestic Product (GDP), which takes the value of goods and services based on the geographical location of production, Gross National Product estimates the value of goods and services based on the location of ownership. It is equal to the value of a country’s GDP plus any income earned by the residents in foreign investments, minus the income earned inside the country by foreign residents. GNP excludes the value of any intermediary goods to eliminate the chances of double counting since these entries are included in the value of the final products and services.

How to Calculate the Gross National Product?

The official formula for calculating GNP is as follows:

Y = C + I + G + X + Z

Where:

C – Consumption Expenditure
I – Investment
G – Government Expenditure
X – Net Exports (Value of exports minus value of imports)
Z – Net Income (Net income inflow from abroad minus net income outflow to foreign countries)

Alternatively, the Gross National Product can also be calculated as follows:

GNP = GDP + Net Income Inflow from Overseas – Net Income Outflow to Foreign Countries

Where:

GDP = Consumption + Investment + Government Expenditure + Exports – Imports
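Both routes should produce the same figure. The Python sketch below checks this with invented numbers (all values are illustrative, not real data):

# Invented figures in billions, for illustration only:
c, i, g = 12_000, 3_000, 3_500        # consumption, investment, government
exports, imports = 2_000, 2_400
inflow, outflow = 900, 600            # cross-border factor income

gdp = c + i + g + exports - imports                                # 18100
gnp_direct = c + i + g + (exports - imports) + (inflow - outflow)  # Y = C+I+G+X+Z
gnp_from_gdp = gdp + inflow - outflow
assert gnp_direct == gnp_from_gdp == 18_400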

Gross National Product takes into account the manufacturing of tangible goods such as vehicles, agricultural products, machinery, etc., as well as the provision of services like healthcare, business consultancy, and education. GNP also includes taxes and depreciation. The cost of services used in producing goods is not computed independently since it is included in the cost of finished products.

For year-to-year comparisons, Gross National Product needs to be adjusted for inflation to produce real GNP. Also, for country-to-country comparisons, GNP is stated on a per capita basis. Computing GNP involves complications, such as how to account for dual citizenship: if a producer or manufacturer holds citizenship in two countries, both countries will count his or her productive output, resulting in double counting.

Importance of GNP

Policymakers rely on Gross National Product as one of the important economic indicators. GNP produces crucial information on manufacturing, savings, investments, employment, production outputs of major companies, and other economic variables. Policymakers use this information in preparing policy papers that legislators use to make laws. Economists rely on the GNP data to solve national problems such as inflation and poverty.

When calculating the amount of income earned by a country’s residents regardless of their location, GNP becomes a more reliable indicator than GDP. In the globalized economy, individuals enjoy many opportunities to earn an income, both from domestic and foreign sources. When measuring such broad data, GNP provides information that other productivity measures do not include. If residents of a country were limited to domestic sources of income, GNP would be equal to GDP, and it would be less valuable to the government and policymakers.

The information provided by GNP also helps in analyzing the balance of payments. The balance of payments is determined by the difference between a country’s exports to foreign countries and the value of the products and services imported. A balance of payments deficit means that the country imports more goods and services than the value of exports. A balance of payments surplus means that the value of the country’s exports is higher than the imports.

GNP vs. GDP

Both the Gross National Product (GNP) and Gross Domestic Product (GDP) measure the market value of products and services produced in the economy. The terms differ in what constitutes an economy since GDP measures the domestic levels of production while GNP measures the level of the output of a country’s residents regardless of their location. The difference comes from the fact that there may be many domestic companies that produce goods for the rest of the world, and there may be foreign-owned companies that produce products within the country.

If the income earned by domestic firms in overseas countries exceeds the income earned by foreign firms within the country, GNP is higher than the GDP. For example, the GNP of the United States is $250 billion higher than its GDP due to the high number of production activities by U.S. citizens in overseas countries.

Most countries around the world use GDP to measure economic activity in their country. The U.S. used Gross National Product as the primary measure of economic activity until 1991 when it adopted GDP. When making the changes, the Bureau of Economic Analysis (BEA) observed that GDP was a more convenient economic indicator of the total economic activity in the United States.

The GNP is a useful economic indicator, especially when measuring a country’s income from international trade. Both economic indicators should be considered when valuing a country’s economic net worth to get an accurate position of the economy.

Gross National Income (GNI)

Instead of Gross National Product, Gross National Income (GNI) is used by large institutions such as the European Union (EU) and the World Bank, and in the United Nations' Human Development Index (HDI). It is defined as GDP plus net income from abroad, plus net taxes and subsidies receivable from abroad.

GNI measures the income received by a country’s residents from domestic and foreign trade. Although both GNI and GNP are similar in purpose, GNI is considered a better measure of income than production.

Additional Information

Gross national product is another metric used to measure a country's economic output. Where GDP looks at the value of goods and services produced within a country's borders, GNP is the market value of goods and services produced by all citizens of a country—both domestically and abroad.

While GDP is an indicator of the local/national economy, GNP represents how its nationals are contributing to the country's economy. It factors in citizenship but overlooks location. For that reason, it's important to note that GNP does not include the output of foreign residents.

The 1993 System of National Accounts replaced the term GNP with GNI, or Gross National Income. Both metrics measure the same thing: domestic productivity plus net income earned by a country's citizens from foreign sources.

For example, a U.S.-based Canadian NFL player who sends their income home to Canada, or a German investor who transfers their dividend income to Germany, will both be excluded from the U.S. GNP, but they will be included in the country's GDP.

GNP can be calculated by adding consumption, government spending, capital spending by businesses, net exports (exports minus imports), and net income earned by domestic residents and businesses from overseas investments. The net income earned by foreign residents and businesses from domestic investment is then subtracted from this figure.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2020 2024-01-07 18:06:30

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2022) Globe

Gist

A globe is a spherical model of Earth. It serves purposes that are similar to some maps. A model globe of Earth is known as a terrestrial globe. A model globe of the celestial sphere is called a celestial globe.

Summary

Globe is the most common general-use model of spherical Earth. It is a sphere or ball that bears a map of the Earth on its surface and is mounted on an axle that permits rotation. The ancient Greeks, who knew the Earth to be a sphere, were the first to use globes to represent the surface of the Earth. Crates of Mallus is said to have made one in about 150 BCE. The earliest surviving terrestrial globe was made in Nürnberg in 1492 by Martin Behaim, who almost undoubtedly influenced Christopher Columbus to attempt to sail west to the Orient. In ancient times, “celestial globes” were used to represent the constellations; the earliest surviving one is the marble Farnese globe, a celestial globe dating from about 25 CE.

Today’s globe, typically hollow, may be made of almost any light, strong material, such as cardboard, plastic, or metal. Some are translucent. They may also be inflatable. Terrestrial globes are usually mounted with the axis tilted 23.5° from the vertical, to help simulate the inclination of the Earth relative to the plane in which it orbits the Sun. Terrestrial globes may be physical, showing natural features such as deserts and mountain ranges (sometimes molded in relief), or political, showing countries, cities, etc. While most globes emphasize the surface of the land, a globe may also show the bottom of the sea. Globes also can be made to depict the surfaces of spherical bodies other than the Earth, for example, the Moon. Celestial globes are also still in use.

Details

A globe is a spherical model of Earth, of some other celestial body, or of the celestial sphere. Globes serve purposes similar to maps, but unlike maps, they do not distort the surface that they portray except to scale it down. A model globe of Earth is called a terrestrial globe. A model globe of the celestial sphere is called a celestial globe.

A globe shows details of its subject. A terrestrial globe shows landmasses and water bodies. It might show nations and major cities and the network of latitude and longitude lines. Some have raised relief to show mountains and other large landforms. A celestial globe shows notable stars, and may also show positions of other prominent astronomical objects. Typically, it will also divide the celestial sphere into constellations.

The word globe comes from the Latin word globus, meaning "sphere". Globes have a long history. The first known mention of a globe is from Strabo, describing the Globe of Crates from about 150 BC. The oldest surviving terrestrial globe is the Erdapfel, made by Martin Behaim in 1492. The oldest surviving celestial globe sits atop the Farnese Atlas, carved in the 2nd century Roman Empire.

Terrestrial and planetary

Flat maps are created using a map projection that inevitably introduces an increasing amount of distortion the larger the area that the map shows. A globe is the only representation of the Earth that does not distort either the shape or the size of large features – land masses, bodies of water, etc.

The Earth's circumference is quite close to 40 million metres. Many globes are made with a circumference of one metre, so they are models of the Earth at a scale of 1:40 million. In imperial units, many globes are made with a diameter of one foot (about 30 cm), yielding a circumference of 3.14 feet (about 96 cm) and a scale of 1:42 million. Globes are also made in many other sizes.

Some globes have surface texture showing topography or bathymetry. In these, elevations and depressions are purposely exaggerated, as they otherwise would be hardly visible. For example, one manufacturer produces a three dimensional raised relief globe with a 64 cm (25 in) diameter (equivalent to a 200 cm circumference, or approximately a scale of 1:20 million) showing the highest mountains as over 2.5 cm (1 in) tall, which is about 57 times higher than the correct scale of Mount Everest.
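The arithmetic behind those figures can be checked directly. The short Python sketch below reproduces the approximate 1:20 million scale and the roughly 57-fold exaggeration (all values approximate):

import math

earth_circumference_m = 40_000_000        # ~40 million metres, as noted above
globe_circumference_m = math.pi * 0.64    # 64 cm diameter -> ~2.01 m
scale = earth_circumference_m / globe_circumference_m   # ~1:19.9 million

everest_on_globe_cm = 8_849 * 100 / scale   # Everest at true scale: ~0.044 cm
print(round(2.54 / everest_on_globe_cm))    # 1 in (2.54 cm) of relief -> 57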

Most modern globes are also imprinted with parallels and meridians, so that one can tell the approximate coordinates of a specific location. Globes may also show the boundaries of countries and their names.

Many terrestrial globes have one celestial feature marked on them: a diagram called the analemma, which shows the apparent motion of the Sun in the sky during a year.

Globes generally show north at the top, but many globes allow the axis to be swiveled so that southern portions can be viewed conveniently. This capability also permits exploring the Earth from different orientations to help counter the north-up bias caused by conventional map presentation.

Celestial

Celestial globes show the apparent positions of the stars in the sky. They omit the Sun, Moon and planets because the positions of these bodies vary relative to those of the stars, but the ecliptic, along which the Sun moves, is indicated. In their most basic form celestial globes represent the stars as if the viewer were looking down upon the sky as a globe that surrounds the earth.

History

The sphericity of the Earth was established by Greek astronomy in the 3rd century BC, and the earliest terrestrial globes date from that period. The earliest known example is the one constructed by Crates of Mallus in Cilicia (now Çukurova in modern-day Turkey) in the mid-2nd century BC.

No terrestrial globes from Antiquity have survived. An example of a surviving celestial globe is part of a Hellenistic sculpture, called the Farnese Atlas, surviving in a 2nd-century AD Roman copy in the Naples Archaeological Museum, Italy.

Early terrestrial globes depicting the entirety of the Old World were constructed in the Islamic world. During the Middle Ages in Christian Europe, while there are writings alluding to the idea that the earth was spherical, no known attempts at making a globe took place before the fifteenth century. The earliest extant terrestrial globe was made in 1492 by Martin Behaim (1459–1537) with help from the painter Georg Glockendon. Behaim was a German mapmaker, navigator, and merchant. Working in Nuremberg, Germany, he called his globe the "Nürnberg Terrestrial Globe." It is now known as the Erdapfel. Before constructing the globe, Behaim had traveled extensively. He sojourned in Lisbon from 1480, developing commercial interests and mingling with explorers and scientists. He began to construct his globe after his return to Nürnberg in 1490.

China made many mapping advancements such as sophisticated land surveys and the invention of the magnetic compass. However, no record of terrestrial globes in China exists until a globe was introduced by the Persian astronomer, Jamal ad-Din, in 1276.

Another early globe, the Hunt–Lenox Globe, ca. 1510, is thought to be the source of the phrase Hic Sunt Dracones, or "Here be dragons". A similar grapefruit-sized globe made from two halves of an ostrich egg was found in 2012 and is believed to date from 1504. It may be the oldest globe to show the New World. Stefaan Missine, who analyzed the globe for the Washington Map Society journal Portolan, said it was "part of an important European collection for decades." After a year of research in which he consulted many experts, Missine concluded the Hunt–Lenox Globe was a copper cast of the egg globe.

A facsimile globe showing America was made by Martin Waldseemüller in 1507. Another "remarkably modern-looking" terrestrial globe of the Earth was constructed by Taqi al-Din at the Constantinople observatory of Taqi ad-Din during the 1570s.

The world's first seamless celestial globe was built by Mughal scientists under the patronage of Jahangir.

Globus IMP instruments, electro-mechanical devices incorporating five-inch globes, were used in Soviet and Russian spacecraft from 1961 to 2002 as navigation instruments. In 2001, the TMA version of the Soyuz spacecraft replaced this instrument with a digital map.

Manufacture

Traditionally, globes were manufactured by gluing a printed paper map onto a sphere, often made from wood.

The most common type has long, thin gores (strips) of paper that narrow to a point at the poles; small disks cover the inevitable irregularities at these points. The more gores there are, the less stretching and crumpling is required to make the paper map fit the sphere. This method of globe making was illustrated in 1802 in an engraving in The English Encyclopedia by George Kearsley.

Modern globes are often made from thermoplastic. Flat, plastic disks are printed with a distorted map of one of the Earth's hemispheres. This is placed in a machine which molds the disk into a hemispherical shape. The hemisphere is united with its opposite counterpart to form a complete globe.

Usually a globe is mounted so that its rotation axis is 23.5° (0.41 rad) from vertical, which is the angle the Earth's rotation axis deviates from perpendicular to the plane of its orbit. This mounting makes it easy to visualize how seasons change.

In the 1800s small pocket globes (less than 3 inches in diameter) were status symbols for gentlemen and educational toys for rich children.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2021 2024-01-08 00:57:40

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2023) Breakwater

Gist

Part of a coastal management system, breakwaters are installed parallel to the shore to minimize erosion. On beaches where longshore drift threatens the erosion of beach material, smaller structures on the beach may be installed, usually perpendicular to the water's edge.

Summary

Breakwater is an artificial offshore structure protecting a harbour, anchorage, or marina basin from water waves. Breakwaters intercept longshore currents and tend to prevent beach erosion. Over the long term, however, the processes of erosion and sedimentation cannot be effectively overcome by interfering with currents and the supply of sediment. Deposition of sediment at one site will be compensated for by erosion elsewhere; this phenomenon occurs whether one breakwater or a series of such structures is erected.

Details

A breakwater is a permanent structure constructed in a coastal area to protect against tides, currents, waves, and storm surges. Breakwaters have been built since antiquity to protect anchorages, helping isolate vessels from marine hazards such as wind-driven waves. A breakwater, also known in some contexts as a jetty or a mole, may be connected to land or freestanding, and may contain a walkway or road for vehicle access.

Part of a coastal management system, breakwaters are installed parallel to the shore to minimize erosion. On beaches where longshore drift threatens the erosion of beach material, smaller structures on the beach may be installed, usually perpendicular to the water's edge. Their action on waves and current is intended to slow the longshore drift and discourage mobilisation of beach material. In this usage they are more usually referred to as groynes.

Purposes

Breakwaters reduce the intensity of wave action in inshore waters and thereby provide safe harbourage. Breakwaters may also be small structures designed to protect a gently sloping beach to reduce coastal erosion; they are placed 100–300 feet (30–90 m) offshore in relatively shallow water.

An anchorage is only safe if ships anchored there are protected from the force of powerful waves by some large structure which they can shelter behind. Natural harbours are formed by such barriers as headlands or reefs. Artificial harbours can be created with the help of breakwaters. Mobile harbours, such as the D-Day Mulberry harbours, were floated into position and acted as breakwaters. Some natural harbours, such as those in Plymouth Sound, Portland Harbour, and Cherbourg, have been enhanced or extended by breakwaters made of rock.

Types

Types of breakwaters include the vertical wall breakwater, the mound breakwater, and the mound with superstructure, or composite, breakwater.

A breakwater structure is designed to absorb the energy of the waves that hit it, either by using mass (e.g. with caissons), or by using a revetment slope (e.g. with rock or concrete armour units).

In coastal engineering, a revetment is a land-backed structure whilst a breakwater is a sea-backed structure (i.e. water on both sides).

Rubble

Rubble mound breakwaters use structural voids to dissipate the wave energy. Rubble mound breakwaters consist of piles of stones more or less sorted according to their unit weight: smaller stones for the core and larger stones as an armour layer protecting the core from wave attack. Rock or concrete armour units on the outside of the structure absorb most of the energy, while gravels or sands prevent the wave energy's continuing through the breakwater core. The slopes of the revetment are typically between 1:1 and 1:2, depending upon the materials used. In shallow water, revetment breakwaters are usually relatively inexpensive. As water depth increases, the material requirements—and hence costs—increase significantly.

Caisson

Caisson breakwaters typically have vertical sides and are usually erected where it is desirable to berth one or more vessels on the inner face of the breakwater. They use the mass of the caisson and the fill within it to resist the overturning forces applied by waves hitting them. They are relatively expensive to construct in shallow water, but in deeper sites they can offer a significant saving over revetment breakwaters.

An additional rubble mound is sometimes placed in front of the vertical structure in order to absorb wave energy and thus reduce wave reflection and horizontal wave pressure on the vertical wall. Such a design provides additional protection on the sea side and a quay wall on the inner side of the breakwater, but it can enhance wave overtopping.

Wave absorbing caisson

A similar but more sophisticated concept is a wave-absorbing caisson, including various types of perforation in the front wall.

Such structures have been used successfully in the offshore oil industry, and also on coastal projects requiring rather low-crested structures (e.g. on an urban promenade where the sea view is an important aspect, as seen in Beirut and Monaco). In Monaco, a project is currently under way at the Anse du Portier that includes 18 wave-absorbing caissons 27 m (89 ft) high.

Wave attenuator

Wave attenuators consist of concrete elements placed horizontally one foot under the free surface, positioned along a line parallel to the coast. A wave attenuator has four slabs facing the sea, one vertical slab, and two slabs facing the land, with each slab separated from the next by a space of 200 millimetres (7.9 in). The volume of water beneath the slabs, set oscillating by the incident wave, generates waves in phase opposition to the incident wave downstream from the slabs, so the row of slabs reflects part of the offshore wave energy.

Membrane breakwaters

A submerged flexible mound breakwater can be employed for wave control in shallow water as an advanced alternative to conventional rigid submerged designs. Besides costing less to construct than conventional submerged breakwaters, flexible mound breakwaters can be passed over by ships and marine organisms if they are submerged deeply enough. These marine structures dissipate the energy of incident waves and prevent the generation of standing waves.

Breakwater armour units

As design wave heights get larger, rubble mound breakwaters require larger armour units to resist the wave forces. These armour units can be formed of concrete or natural rock. The largest standard grading for rock armour units given in CIRIA 683 "The Rock Manual" is 10–15 tonnes. Larger gradings may be available, but the ultimate size is limited in practice by the natural fracture properties of locally available rock.

Shaped concrete armour units (such as Dolos, Xbloc, Tetrapod, etc.) can be provided in up to approximately 40 tonnes (e.g. Jorf Lasfar, Morocco), before they become vulnerable to damage under self weight, wave impact and thermal cracking of the complex shapes during casting/curing. Where the very largest armour units are required for the most exposed locations in very deep water, armour units are most often formed of concrete cubes, which have been used up to ~195 tonnes for the tip of the breakwater at Punta Langosteira near La Coruña, Spain.

Preliminary design of armour unit size is often undertaken using the Hudson equation, the Van der Meer formula and, more recently, that of Van Gent et al.; these methods are all described in CIRIA 683 "The Rock Manual" and the United States Army Corps of Engineers Coastal Engineering Manual (available for free online), among other sources. For detailed design, the use of scaled physical hydraulic models remains the most reliable method for predicting real-life behavior of these complex structures.
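For reference, one common form of the Hudson equation is W = γr·H³ / (KD·(Sr − 1)³·cot α), where W is the required median armour unit weight, γr the unit weight of the armour material, H the design wave height, KD a stability coefficient depending on the armour type, Sr the ratio of armour density to water density, and α the slope angle of the structure. The Python sketch below is a minimal illustration of this formula only; the wave height, slope, KD and densities are invented example values, not design guidance.

def hudson_armour_mass_t(H, cot_alpha, Kd=3.5, rho_rock=2650.0, rho_water=1025.0):
    """Median armour mass in tonnes from the Hudson equation.

    W = gamma_r * H**3 / (Kd * (Sr - 1)**3 * cot_alpha), with
    gamma_r = rho_rock * g and Sr = rho_rock / rho_water; mass = W / g.
    """
    Sr = rho_rock / rho_water
    weight_n = rho_rock * 9.81 * H**3 / (Kd * (Sr - 1)**3 * cot_alpha)
    return weight_n / 9.81 / 1000.0

# Illustrative only: a 4 m design wave on a 1:2 rock slope (cot(alpha) = 2)
# gives roughly 6 tonne armour stones, within the standard gradings above.
print(round(hudson_armour_mass_t(H=4.0, cot_alpha=2.0), 1), "tonnes")  # ~6.1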

Unintended consequences

Breakwaters are subject to damage and overtopping in severe storms. Some may also have the effect of creating unique types of waves that attract surfers, such as The Wedge at the Newport breakwater.

Sediment effects

The dissipation of energy and relative calm water created in the lee of the breakwaters often encourage accretion of sediment (as per the design of the breakwater scheme). However, this can lead to excessive salient build up, resulting in tombolo formation, which reduces longshore drift shoreward of the breakwaters. This trapping of sediment can cause adverse effects down-drift of the breakwaters, leading to beach sediment starvation and increased coastal erosion. This may then lead to further engineering protection being needed down-drift of the breakwater development. Sediment accumulation in the areas surrounding breakwaters can cause flat areas with reduced depths, which changes the topographic landscape of the seabed.

Salient formations as a result of breakwaters are a function of the distance the breakwaters are built from the coast, the direction at which the wave hits the breakwater, and the angle at which the breakwater is built (relative to the coast). Of these three, the angle at which the breakwater is built is most important in the engineered formation of salients. The angle at which the breakwater is built determines the new direction of the waves (after they've hit the breakwaters), and in turn the direction that sediment will flow and accumulate over time.

Environmental effects

The reduced heterogeneity in sea floor landscape introduced by breakwaters can lead to reduced species abundance and diversity in the surrounding ecosystems. As a result of the reduced heterogeneity and decreased depths that breakwaters produce due to sediment build up, the UV exposure and temperature in surrounding waters increase, which may disrupt surrounding ecosystems.

Construction of detached breakwaters

There are two main types of offshore breakwater (also called detached breakwater): single and multiple. Single, as the name suggests, means the breakwater consists of one unbroken barrier, while multiple breakwaters (in numbers anywhere from two to twenty) are positioned with gaps in between (160–980 feet or 50–300 metres). The length of the gap is largely governed by the interacting wavelengths. Breakwaters may be either fixed or floating, and impermeable or permeable to allow sediment transfer shoreward of the structures, the choice depending on tidal range and water depth. They usually consist of large pieces of rock (granite) weighing up to 10–15 tonnes each, or rubble-mound. Their design is influenced by the angle of wave approach and other environmental parameters. Breakwater construction can be either parallel or perpendicular to the coast, depending on the shoreline requirements.

Additional Information

A breakwater is a structure built along a shore or offshore, approximately parallel to a shoreline. Some breakwaters float at the water’s surface, while bottom-resting models may emerge from the surface or lie entirely underwater. Breakwaters are different from dikes because they allow some water flow and do not seal off one portion of a water body from another.

Function

In most cases, a breakwater serves two main functions. First, it may be built to absorb the force of incoming waves and currents in order to create a zone of calmer waters. This also protects roads, ships and structures from those waters. Second, it may be built to control the rate of erosion or sediment deposit alongshore. This allows the shape of the shoreline to be controlled over time. Both of these functions are important in the face of climate change. Breakwaters can help to hold back rising waters, thus protecting homes and other infrastructure. Similarly, beaches that are being eroded due to rising sea levels may be maintained through the use of breakwaters.

In the 21st century, more and more breakwaters also help make calm waters available to the growing fish farm industry.

Breakwaters in Canada

Most Canadian population centres and economically important areas benefit from breakwaters if they are near an ocean or significant body of water. For example, in British Columbia, over 30 harbours or marinas use floating breakwater systems. The breakwater at Ogden Point, near Victoria, BC, is a notable bottom-resting model, completed in 1917. In addition, Vancouver Island itself acts as a natural breakwater for the mainland coast by shielding it from the Pacific Ocean.

Breakwaters are also important to the provinces exposed to the Atlantic Ocean. Countless breakwaters are scattered along their coasts, including within the Gulf of St. Lawrence. Communities in the Great Lakes region that benefit from breakwaters include Toronto, along the shores of Lake Ontario, and Port Elgin, along the shores of Lake Huron.



It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2022 2024-01-08 23:59:03

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2024) Doctorate

Gist

A doctorate is usually the most advanced degree someone can get in an academic discipline, higher education experts say.

Summary

Types of doctorate degrees

There are two types of doctorate degrees available to earn: academic and professional. Each type is a terminal degree, meaning it’s the highest degree you can earn and shows true mastery over a subject. The type of doctorate you earn will largely depend on what you want to study.

Academic doctorate

An academic doctorate, often called a PhD (short for Doctor of Philosophy), is a research degree that typically requires completing a dissertation. Students enrolled in a PhD program may be interested in working in academia as a professor or conducting research in their field. However, a growing number of PhD students go on to apply their specialized knowledge and skill set to various careers outside of academia as well.

Examples of academic doctorates include:

* Doctor of Philosophy (PhD)
* Doctor of Education (EdD)
* Doctor of Design (DDes)
* Doctor of Fine Arts (DFA)
* Doctor of Nursing Science (DNS)
* Doctor of Theology (ThD)

Professional doctorate

A professional doctorate is also referred to as an applied doctorate, and has more to do with a specific profession, such as medicine, law, or business. Students in professional doctorate programs enroll to learn specific knowledge and skills they will need to pursue their chosen career path. 

Examples of professional doctorates include:

* Doctor of Business Administration (DBA)
* Doctor of Dental Surgery (DDS)
* Doctor of Dental Medicine (DMD)
* Doctor of Podiatric Medicine (DPM)
* Doctor of Chiropractic (DC)
* Doctor of Veterinary Medicine (DVM)
* Doctor of Naturopathic Medicine (ND)
* Juris Doctor (JD)

Details

A doctorate (from Latin doctor, meaning "teacher") or doctoral degree is a postgraduate academic degree awarded by universities and some other educational institutions, derived from the ancient formalism licentia docendi ("licence to teach").

In most countries, a research degree qualifies the holder to teach at university level in the degree's field or work in a specific profession. There are a number of doctoral degrees; the most common is the Doctor of Philosophy (PhD), awarded in many different fields, ranging from the humanities to scientific disciplines.

Many universities also award honorary doctorates to individuals deemed worthy of special recognition, either for scholarly work or other contributions to the university or society.

History:

Middle Ages

The term doctor derives from Latin, meaning "teacher" or "instructor". The doctorate (Latin: doctoratus) appeared in medieval Europe as a license to teach Latin (licentia docendi) at a university. Its roots can be traced to the early church in which the term doctor referred to the Apostles, church fathers, and other Christian authorities who taught and interpreted the Bible.

The right to grant a licentia docendi (i.e. the doctorate) was originally reserved to the Catholic Church, which required the applicant to pass a test, take an oath of allegiance, and pay a fee. The Third Council of the Lateran of 1179 guaranteed access—at that time essentially free of charge—to all able applicants. Applicants were tested for aptitude. This right remained a bone of contention between the church authorities and the universities, which were slowly distancing themselves from the Church. In 1213 the right was granted by the pope to the University of Paris, where it became a universal license to teach (licentia ubique docendi). However, while the licentia continued to hold a higher prestige than the bachelor's degree (baccalaureus), the latter was ultimately reduced to an intermediate step to the master's degree (magister) and doctorate, both of which now became the accepted teaching qualifications. According to Keith Allan Noble (1994), the first doctoral degree was awarded in medieval Paris around 1150 by the University of Paris.

George Makdisi theorizes that the ijazah issued in early Islamic madrasahs was the origin of the doctorate later issued in medieval European universities. Alfred Guillaume and Syed Farid al-Attas agree that there is a resemblance between the ijazah and the licentia docendi. However, Toby Huff and others reject Makdisi's theory. Devin J. Stewart notes a difference in the granting authority (an individual professor for the ijazah and a corporate entity in the case of the university doctorate).

17th and 18th centuries

The doctorate of philosophy developed in Germany in the 17th century (likely c. 1652). The term "philosophy" does not refer here to the field or academic discipline of philosophy; it is used in a broader sense under its original Greek meaning of "love of wisdom". In most of Europe, all fields (history, philosophy, social sciences, mathematics, and natural philosophy/natural sciences) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy". The Doctorate of Philosophy adheres to this historic convention, even though most degrees are not for the study of philosophy. Chris Park explains that it was not until formal education and degree programs were standardized in the early 19th century that the Doctorate of Philosophy was reintroduced in Germany as a research degree, abbreviated as Dr. phil. (similar to Ph.D. in Anglo-American countries). Germany, however, went on to differentiate in more detail between doctorates in philosophy (Dr. phil.), in the natural sciences (Dr. rer. nat.), and in the social/political sciences (Dr. rer. pol.), parallel to the traditional doctorates in medicine (Dr. med.) and law (Dr. jur.).

University doctoral training was a form of apprenticeship to a guild. The traditional term of study before new teachers were admitted to the guild of "Masters of Arts" was seven years, matching the apprenticeship term for other occupations. Originally the terms "master" and "doctor" were synonymous, but over time the doctorate came to be regarded as a higher qualification than the master's degree.

University degrees, including doctorates, were originally restricted to men. The first women to be granted doctorates were Juliana Morell in 1608 at Lyons or maybe Avignon (she "defended theses" in 1606 or 1607, although claims that she received a doctorate in canon law in 1608 have been discredited), Elena Cornaro Piscopia in 1678 at the University of Padua, Laura Bassi in 1732 at Bologna University, Dorothea Erxleben in 1754 at Halle University and María Isidra de Guzmán y de la Cerda in 1785 at Complutense University, Madrid.

Modern times

The use and meaning of the doctorate have changed over time and are subject to regional variations. For instance, until the early 20th century, few academic staff or professors in English-speaking universities held doctorates, except for very senior scholars and those in holy orders. After that time, the German practice of requiring lecturers to have completed a research doctorate spread. Universities' shift to research-oriented education (based upon the scientific method, inquiry, and observation) increased the doctorate's importance. Today, a research doctorate (PhD) or its equivalent (as defined in the US by the NSF) is generally a prerequisite for an academic career. However, many recipients do not work in academia.

Professional doctorates developed in the United States from the 19th century onward. The first professional doctorate offered in the United States was the M.D. at King's College (now Columbia University) after the medical school's founding in 1767. However, this was not a professional doctorate in the modern American sense: it was awarded for further study after the qualifying Bachelor of Medicine (M.B.) rather than being itself the qualifying degree. The MD became the standard first degree in medicine in the US during the 19th century, but as a three-year undergraduate degree; it did not become established as a graduate degree until 1930. As the standard qualifying degree in medicine, the MD gave that profession the ability (through the American Medical Association, established in 1847 for this purpose) to set and raise standards for entry into professional practice.

In the shape of the German-style PhD, the modern research degree was first awarded in the US in 1861, at Yale University. This differed from the MD in that the latter was a vocational "professional degree" that trained students to apply or practice knowledge rather than generate it, similar to other students in vocational schools or institutes. In the UK, research degrees initially took the form of higher doctorates in Science and Letters, first introduced at Durham University in 1882. The PhD spread to the UK from the US via Canada and was instituted at all British universities from 1917. The first of these (titled DPhil) was awarded by the University of Oxford.

Following the MD, the next professional doctorate in the US, the Juris Doctor (JD), was established by the University of Chicago in 1902. However, it took a long time to be accepted, not replacing the Bachelor of Laws (LLB) until the 1960s, by which time the LLB was generally taken as a graduate degree. Notably, the JD and LLB curricula were identical, the degree simply being renamed as a doctorate, and it (like the MD) was not equivalent to the PhD, raising criticism that it was "not a 'true Doctorate'". When professional doctorates were established in the UK in the late 1980s and early 1990s, they did not follow the US model but were instead set up as research degrees at the same level as PhDs, with some taught components and a professional focus for the research work.

Now usually called higher doctorates in the United Kingdom, the older-style doctorates take much longer to complete since candidates must show themselves to be leading experts in their subjects. These doctorates are less common than the PhD in some countries and are often awarded honoris causa. The habilitation is still used for academic recruitment purposes in many countries within the EU. It involves either a long new thesis (a second book) or a portfolio of research publications. The habilitation (highest available degree) demonstrates independent and thorough research, experience in teaching and lecturing, and, more recently, the ability to generate supportive funding. The habilitation follows the research doctorate, and in Germany, it can be a requirement for appointment as a Privatdozent or professor.

Types

Since the Middle Ages, the number and types of doctorates awarded by universities have proliferated throughout the world. Practice varies from one country to another. While a doctorate usually entitles a person to be addressed as "doctor", the use of the title varies widely depending on the type and the associated occupation.

Research doctorate

Research doctorates are awarded in recognition of academic research that is publishable, at least in principle, in a peer-reviewed academic journal. The best-known research degree title in the English-speaking world is the Doctor of Philosophy (abbreviated Ph.D., PhD or, at some British universities, DPhil), awarded in many countries throughout the world. In the U.S., for instance, although the most typical research doctorate is the PhD, accounting for about 98% of the research doctorates awarded, there are more than 15 other names for research doctorates. Other research-oriented doctorates (some having a professional practice focus) include the Doctor of Education (Ed.D. or EdD), the Doctor of Science (D.Sc. or Sc.D.), Doctor of Arts (D.A.), Doctor of Juridical Science (J.S.D. or S.J.D.), Doctor of Musical Arts (D.M.A.), Doctor of Professional Studies/Professional Doctorate (ProfDoc or DProf), Doctor of Public Health (Dr.P.H.), Doctor of Social Science (D.S.Sc. or DSocSci), Doctor of Management (D.M. or D.Mgt.), Doctor of Business Administration (D.B.A. or DBA), the UK Doctor of Management (DMan), various doctorates in engineering, such as the US Doctor of Engineering (D.Eng., D.E.Sc. or D.E.S., also awarded in Japan and South Korea), the UK Engineering Doctorate (EngD), the German engineering doctorate Doktoringenieur (Dr.-Ing.), the German natural science doctorate Doctor rerum naturalium (Dr. rer. nat.) and the economics and social science doctorate Doctor rerum politicarum (Dr. rer. pol.). The UK Doctor of Medicine (MD or MD (Res)) and Doctor of Dental Surgery (DDS) are research doctorates. The Doctor of Theology (Th.D., D.Th. or ThD), Doctor of Practical Theology (DPT) and the Doctor of Sacred Theology (S.T.D., or D.S.Th.) are research doctorates in theology.

Criteria for research doctorates vary, but they typically require completion of a substantial body of original research, which may be presented as a single thesis or dissertation or as a portfolio of shorter project reports (thesis by publication). The submitted dissertation is assessed by a committee of, typically, internal and external examiners. It is then usually defended by the candidate during an oral examination (called a viva voce in the UK and India) by the committee, which then awards the degree unconditionally, awards the degree conditionally (with requirements ranging from corrections of grammar to additional research), or denies the degree. Candidates may also be required to complete graduate-level courses in their field and to study research methodology.

Criteria for admission to doctoral programs vary. Students may be admitted with a bachelor's degree in the U.S. and the U.K. However, elsewhere, e.g. in Finland and many other European countries, a master's degree is required. The time required to complete a research doctorate varies from three years, excluding undergraduate study, to six years or more.

Licentiate

Licentiate degrees vary widely in their meaning, and in a few countries they are doctoral-level qualifications. Sweden awards the licentiate degree as a two-year qualification at the doctoral level and the doctoral degree (PhD) as a four-year qualification. Sweden originally abolished the licentiate in 1969 but reintroduced it in response to demands from business. Finland also has a two-year doctoral-level licentiate degree, similar to Sweden's. Outside Scandinavia, the licentiate is usually a lower-level qualification. In Belgium, the licentiate was the basic university degree prior to the Bologna Process and was equivalent to a bachelor's degree; in France and other countries, it is the bachelor's-level qualification in the Bologna Process. In the Pontifical system, the Licentiate in Sacred Theology (STL) is equivalent to an advanced master's degree, or to the post-master's coursework required in preparation for a doctorate (i.e. similar in level to the Swedish/Finnish licentiate degree), while other licences (such as the Licence in Canon Law) are at the level of master's degrees.

Higher doctorate and post-doctoral degrees

A higher tier of research doctorates may be awarded based on a formally submitted portfolio of published research of an exceptionally high standard. Examples include the Doctor of Science (DSc or ScD), Doctor of Divinity (DD), Doctor of Letters (DLitt or LittD), Doctor of Law or Laws (LLD), and Doctor of Civil Law (DCL) degrees found in the UK, Ireland and some Commonwealth countries, and the traditional doctorates in Scandinavia.

The habilitation teaching qualification (facultas docendi, or "faculty to teach") under a university procedure with a thesis and an exam is commonly regarded as belonging to this category in Germany, Austria, France, Liechtenstein, Switzerland, Poland, etc. The degree developed in Germany in the 19th century "when holding a doctorate seemed no longer sufficient to guarantee a proficient transfer of knowledge to the next generation". In many federal states of Germany, the habilitation results in the award of a formal "Dr. habil." degree, or the holder may add "habil." to their research doctorate, such as "Dr. phil. habil." or "Dr. rer. nat. habil." In some European universities, especially in German-speaking countries, the degree alone is insufficient to qualify the holder to teach, or to teach and supervise PhD students independently, without professor supervision; an additional teaching title such as Privatdozent is required. In Austria, the habilitation bestows on the graduate the facultas docendi (venia legendi) and, since 2004, the honorary title of "Privatdozent" (before this, completing the habilitation resulted in appointment as a civil servant). In many Central and Eastern European countries, the degree gives venia legendi, Latin for "the permission to lecture", or ius docendi, "the right to teach", a specific academic subject at universities for a lifetime. The French academic system used to have a higher doctorate, called the "state doctorate" (doctorat d'État), but in 1984 it was superseded by the habilitation (Habilitation à diriger des recherches, "habilitation to supervise (doctoral and post-doctoral) research", abbreviated HDR), which is the prerequisite for supervising PhDs and for applying for full professorships.

While this section has focused on earned qualifications conferred in virtue of published work or the equivalent, a higher doctorate may also be presented on an honorary basis by a university, at its own initiative or after a nomination, in recognition of public prestige, institutional service, philanthropy, or professional achievement. In a formal listing of qualifications, and often in other contexts, an honorary higher doctorate will be identified using language like "DCL, honoris causa", "HonLLD", or "LittD h.c.".

Professional doctorate

Depending on the country, professional doctorates may also be research degrees at the same level as PhDs. The relationship between research and practice is considered important, and professional degrees with little or no research content are typically aimed at professional performance. Many professional doctorates are named "Doctor of [subject name]" and abbreviated using the form "D[subject abbreviation]" or "[subject abbreviation]D", or may use the more generic titles "Professional Doctorate" (abbreviated "ProfDoc" or "DProf"), "Doctor of Professional Studies" (DPS) or "Doctor of Professional Practice" (DPP).

In the US, professional doctorates (formally "doctor's degree – professional practice" in government classifications) are defined by the US Department of Education's National Center for Education Statistics as degrees that require a minimum of six years of university-level study (including any pre-professional bachelor's or associate degree) and meet the academic requirements for professional licensure in the discipline. The definition of a professional doctorate does not include a requirement for either a dissertation or study beyond master's level, in contrast to the definition of research doctorates ("doctor's degree – research/scholarship"). However, individual programs may have different requirements. There is also a category of "doctor's degree – other" for doctorates that fall into neither the "professional practice" nor the "research/scholarship" category. All of these are considered doctoral degrees.

In contrast to the US, many countries reserve the term "doctorate" for research degrees. Where, as in Canada and Australia, professional degrees bear the name "Doctor of ...", it is made clear that these are not doctorates. Examples of this include Doctor of Pharmacy (PharmD), Doctor of Medicine (MD), Doctor of Dental Surgery (DDS), Doctor of Nursing Practice (DNP), and Juris Doctor (JD). Conversely, research doctorates such as the Doctor of Business Administration (DBA), Doctor of Education (EdD) and Doctor of Social Science (DSS) qualify as full academic doctorates in Canada, though they normally incorporate aspects of professional practice in addition to a full dissertation. In the Philippines, the University of the Philippines Open University offers a Doctor of Communication (DComm) professional doctorate.

All doctorates in the UK and Ireland are third cycle qualifications in the Bologna Process, comparable to US research doctorates. Although all doctorates are research degrees, professional doctorates normally include taught components, while the name PhD/DPhil is normally used for doctorates purely by thesis. Professional, practitioner, or practice-based doctorates such as the DClinPsy, MD, DHSc, EdD, DBA, EngD and DAg are full academic doctorates. They are at the same level as the PhD in the national qualifications frameworks; they are not first professional degrees but are "often post-experience qualifications" in which practice is considered important in the research context. In 2009 there were 308 professional doctorate programs in the UK, up from 109 in 1998, with the most popular being the EdD (38 institutions), DBA (33), EngD/DEng (22), MD/DM (21), and DClinPsy/DClinPsych/ClinPsyD (17). Similarly, in Australia, the term "professional doctorate" is sometimes applied to the Scientiae Juridicae Doctor (SJD), which, like the UK professional doctorates, is a research degree.

Honorary doctorate

When a university wishes to formally recognize an individual's contributions to a particular field or philanthropic efforts, it may choose to grant a doctoral degree honoris causa ('for the sake of the honor'), waiving the usual requirements for granting the degree. Some universities do not award honorary degrees, for example, Cornell University, the University of Virginia, and Massachusetts Institute of Technology.

Additional Information

Degree, in education, is any of several titles conferred by colleges and universities to indicate the completion of a course of study or the extent of academic achievement.

The hierarchy of degrees dates back to the universities of 13th-century Europe, which had faculties organized into guilds. Members of the faculties were licensed to teach, and degrees were in effect the professional certifications that they had attained the guild status of a “master.” There was originally only one degree in European higher education, that of master or doctor. The baccalaureate, or bachelor’s degree, was originally simply a stage toward mastership and was awarded to a candidate who had studied the prescribed texts in the trivium (grammar, rhetoric, and logic) for three or four years and had successfully passed examinations held by his masters. The holder of the bachelor’s degree had thus completed the first stage of academic life and was enabled to proceed with a course of study for the degree of master or doctor. After completing those studies, he was examined by the chancellor’s board and by the faculty and, if successful, received a master’s or doctor’s degree, which admitted him into the teachers’ guild and was a certificate of fitness to teach at any university.

The terms master, doctor, and professor were all equivalent. The degree of doctor of civil law was first awarded at the University of Bologna in the second half of the 12th century, and similar degrees came to be awarded in canon law, medicine, grammar, logic, and philosophy. At the University of Paris, however, the term “master” was more commonly used, and the English universities of Oxford and Cambridge adopted the Parisian system. In many universities, the certified scholar in the faculties of arts or grammar was called a master, whereas in the faculties of philosophy, theology, medicine, and law he was called a doctor. Perhaps because it was necessary to become a master of arts before proceeding to the other studies, the doctorate came to be esteemed as the higher title. (The common Anglo-American degrees “master of arts” and “doctor of philosophy” stem from this usage.) In German universities, the titles master and doctor were also at first interchangeable, but the term doctor soon came to be applied to advanced degrees in all faculties, and the German usage was eventually adopted throughout the world.

In the United States and Great Britain, the modern gradation of academic degrees is usually bachelor (or baccalaureate), master, and doctor. The bachelor’s degree marks the completion of undergraduate study, usually amounting to four years. The master’s degree involves one to two years’ additional study, while the doctorate usually involves a lengthier period of work. British and American universities customarily grant the bachelor’s as the first degree in arts or sciences. After one or two more years of coursework, the second degree, M.A. or M.S., may be obtained by examination or the completion of a piece of research. At the universities of Oxford and Cambridge, holders of a B.A. can receive an M.A. six or seven years after entering the university simply by paying certain fees. The degree of doctor of philosophy (Ph.D.) is usually offered by all universities that admit advanced students and is granted after prolonged study and either examination or original research. A relatively new degree in the United States is that of associate, which is awarded by junior or community colleges after a two-year course of study; it has a relatively low status.

The rapid expansion of specialization produced a growing variety of specific academic degrees in American, British, and other English-speaking higher education systems in the 20th century. More than 1,500 different degrees are now awarded in the United States, for example, with the largest number in science, technology, engineering, medicine, and education. The commonest degrees, however, are still the B.A. and the B.S., to which the signature of a special field may be added (e.g., B.S.Pharm., or Bachelor of Science in Pharmacy). These special fields have their corresponding designations at the graduate levels.

With some exceptions, intermediate degrees, such as those of bachelor and master, have been abandoned in the universities of continental Europe. In the second half of the 20th century, the French degree system was undergoing change as part of a major university reform. The baccalauréat is conferred upon French students who have successfully completed secondary studies and admits the student to the university. Students who obtain the licence, which is awarded after three or four years of university study, are qualified to teach in secondary schools or to go on to higher-level studies. Currently, maîtrise (master’s) degrees are also being awarded. Maîtrise holders who pass a competitive examination receive a certification known as the agrégation and are permitted to teach university undergraduates. Doctorats are awarded in both arts and sciences.

In Germany the doctorate is the only degree granted, but there is a tendency to add signatures such as Dr.rer.nat. (Doktor rerum naturalium) in natural sciences and Dr.Ing. (Doktor-Ingenieur) in engineering. For students who do not wish to meet the doctoral requirements, diploma examinations are offered.

In Russia diplomas are awarded on completion of a four- or five-year university course. The candidate of science (kandidat nauk) degree is awarded after several years of practical and academic work and completion of a thesis and is comparable to the American Ph.D. Doctor of science (doktor nauk) degrees are awarded only by a special national commission, in recognition of original and important research.

In Japan the usual degrees are the gakushi (bachelor), granted after four years of study, and hakushi (doctor), requiring from two to five years of additional study. A master’s degree (shushi) may also be granted.

In addition to earned degrees, universities and colleges award honorary degrees, such as L.H.D. (Doctor of Humanities), Litt.D. (Doctor of Literature), and D.C.L. (Doctor of Civil Law), as a recognition of distinction without regard to academic attainment.


It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2023 2024-01-09 20:28:18

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2025) Automobile Body

Summary

The basic form of the modern automobile body derives from the older horse-drawn carriage. Early bodies were of single-seat construction and offered the passenger little protection from the weather. Larger and more stylish bodies were developed and manufactured over time to provide increased space, safety and protection for the passengers.

The body is the superstructure of the vehicle. It may either be constructed separately and bolted to the chassis or manufactured integrally with the chassis (i.e., frameless construction). The chassis and the body together make the complete vehicle.

A body consists of windows and doors, engine cover, roof, luggage cover, etc. The electrical system in the body is connected to the chassis electrical units so that the battery and the generator/alternator can furnish the required electrical energy to the system.

Requirements of Vehicle Body

The vehicle body should fulfill the following requirements.

1. It must be strong enough to withstand all the forces acting on the vehicle. These include the weight of the car and its luggage together with inertia, braking and cornering forces.
2. Stresses induced in the body should be distributed evenly over all portions.
3. The weight of the body should be as low as possible.
4. It should be able to cope with impact loads of reasonable magnitude.
5. It should have a reasonable fatigue life.
6. It must provide adequate space for both passengers and luggage.
7. It should have a minimum number of components.
8. It must have sufficient torsional stiffness, i.e., the ability to resist the twisting stresses produced by irregular road surfaces.
9. It should give good access to the engine and suspension elements.
10. It must ensure a quiet ride and easy entry and exit.
11. It should create minimum vibration during running.
12. The shape of the body should produce minimum drag.
13. It should be easy to manufacture and cheap to produce.
14. It should be designed in such a way that passengers and luggage are protected from bad weather.
15. It should have an appealing finish in shape and colour.

Types of Vehicle Body

Passenger space and overall dimensions vary with the type of vehicle. The various body types are listed below.

1. Car
2. Straight truck
3. Truck-half body
4. Truck-platform type
5. Tractor
6. Tractor with articulated trailer
7. Tanker
8. Dumper truck
9. Delivery van
10. Station wagon
11. Pick-up
12. Jeep
13. Buses
14. Mini-buses
15. Three wheeler (i.e., Auto rickshaw)

Car bodies meet considerable wind resistance, so for high-speed vehicles special attention is given to streamlining, the process of shaping the body to reduce air resistance. It is applied mainly to racing cars.

Straight truck bodies are constructed in two parts: one is the driver cabin and the other is the goods carriage. The goods carriage is of a closed type with a particular standard height. These vehicles are used to carry goods that are affected by weather conditions, for example vegetables, sugar, rice and seafood.

The half-body truck has the usual driver cabin, but its goods carriage is open at the top. It is used to carry various goods that are not affected by weather. The platform-type truck also has a separate driver cabin, and its goods carriage is a flat platform. It usually carries goods such as iron billets, barrels and concrete slabs.

The tractor consists of a short body in addition to the driver cabin. Usually, an articulated trailer is attached to its rear end. The trailer may be an open or a closed type, depending on the purpose of use, and may have several cabins; it is used to carry passenger cars, mopeds, motorcycles, etc. Most of these vehicles have six wheels.

The tanker is a vehicle fitted with a tank to carry fluids of various natures. The tank may be welded or bolted to the chassis frame behind the driver cabin. It has an opening at the top through which fluid is poured and a drain outlet at the bottom through which the fluid is emptied.

The dumper truck has a heavy goods-carrying panel, open at the top, on the rear side. This panel can be tilted up and down by hydraulic cylinders. It is used to carry bricks, stones, marble, etc.

Details

An automobile is a usually four-wheeled vehicle designed primarily for passenger transportation and commonly propelled by an internal-combustion engine using a volatile fuel.

Automotive design

The major functional components of an automobile.

The modern automobile is a complex technical system employing subsystems with specific design functions. Some of these consist of thousands of component parts that have evolved from breakthroughs in existing technology or from new technologies such as electronic computers, high-strength plastics, and new alloys of steel and nonferrous metals. Some subsystems have come about as a result of factors such as air pollution, safety legislation, and competition between manufacturers throughout the world.

Passenger cars have emerged as the primary means of family transportation, with an estimated 1.4 billion in operation worldwide. About one-quarter of these are in the United States, where more than three trillion miles (almost five trillion kilometres) are traveled each year. In recent years, Americans have been offered hundreds of different models, about half of them from foreign manufacturers. To capitalize on their proprietary technological advances, manufacturers introduce new designs ever more frequently. With some 70 million new units built each year worldwide, manufacturers have been able to split the market into many very small segments that nonetheless remain profitable.

New technical developments are recognized to be the key to successful competition. Research and development engineers and scientists have been employed by all automobile manufacturers and suppliers to improve the body, chassis, engine, drivetrain, control systems, safety systems, and emission-control systems.

These outstanding technical advancements are not made without economic consequences. According to a study by Ward’s Communications Incorporated, the average cost for a new American car increased $4,700 (in terms of the value of the dollar in 2000) between 1980 and 2001 because of mandated safety and emission-control performance requirements (such as the addition of air bags and catalytic converters). New requirements continued to be implemented in subsequent years. The addition of computer technology was another factor driving up car prices, which increased by 29 percent between 2009 and 2019. This is in addition to the consumer costs associated with engineering improvements in fuel economy, which may be offset by reduced fuel purchases.

Vehicle design depends to a large extent on its intended use. Automobiles for off-road use must be durable, simple systems with high resistance to severe overloads and extremes in operating conditions. Conversely, products that are intended for high-speed, limited-access road systems require more passenger comfort options, increased engine performance, and optimized high-speed handling and vehicle stability. Stability depends principally on the distribution of weight between the front and rear wheels, the height of the centre of gravity and its position relative to the aerodynamic centre of pressure of the vehicle, suspension characteristics, and the selection of which wheels are used for propulsion. Weight distribution depends principally on the location and size of the engine. The common practice of front-mounted engines exploits the stability that is more readily achieved with this layout. The development of aluminum engines and new manufacturing processes has, however, made it possible to locate the engine at the rear without necessarily compromising stability.

Body

The Fiat 600, introduced in 1956, was an inexpensive, practical car with simple, elegant styling that instantly made it an icon of postwar Italy. Its rear-mounted transverse engine produced sufficient power and saved enough space to allow the passenger compartment to accommodate four people easily.

Automotive body designs are frequently categorized according to the number of doors, the arrangement of seats, and the roof structure. Automobile roofs are conventionally supported by pillars on each side of the body. Convertible models with retractable fabric tops rely on the pillar at the side of the windshield for upper body strength, as convertible mechanisms and glass areas are essentially nonstructural. Glass areas have been increased for improved visibility and for aesthetic reasons.

An automobile being manufactured on an assembly line.

The high cost of new factory tools makes it impractical for manufacturers to produce totally new designs every year. Completely new designs usually have been programmed on three- to six-year cycles with generally minor refinements appearing during the cycle. In the past, as many as four years of planning and new tool purchasing were needed for a completely new design. Computer-aided design (CAD), testing by use of computer simulations, and computer-aided manufacturing (CAM) techniques may now be used to reduce this time requirement by 50 percent or more. See machine tool: Computer-aided design and computer-aided manufacturing (CAD/CAM).

Automotive bodies are generally formed out of sheet steel. The steel is alloyed with various elements to improve its ability to be formed into deeper depressions without wrinkling or tearing in manufacturing presses. Steel is used because of its general availability, low cost, and good workability. For certain applications, however, other materials, such as aluminum, fibreglass, and carbon-fibre reinforced plastic, are used because of their special properties. Polyamide, polyester, polystyrene, polypropylene, and ethylene plastics have been formulated for greater toughness, dent resistance, and resistance to brittle deformation. These materials are used for body panels. Tooling for plastic components generally costs less and requires less time to develop than that for steel components and therefore may be changed by designers at a lower cost.

To protect bodies from corrosive elements and to maintain their strength and appearance, special priming and painting processes are used. Bodies are first dipped in cleaning baths to remove oil and other foreign matter. They then go through a succession of dip and spray cycles. Enamel and acrylic lacquer are both in common use. Electrodeposition of the sprayed paint, a process in which the paint spray is given an electrostatic charge and then attracted to the surface by a high voltage, helps assure that an even coat is applied and that hard-to-reach areas are covered. Ovens with conveyor lines are used to speed the drying process in the factory. Galvanized steel with a protective zinc coating and corrosion-resistant stainless steel are used in body areas that are more likely to corrode.

Chassis

In most passenger cars through the middle of the 20th century, a pressed-steel frame—the vehicle’s chassis—formed a skeleton on which the engine, wheels, axle assemblies, transmission, steering mechanism, brakes, and suspension members were mounted. The body was flexibly bolted to the chassis during a manufacturing process typically referred to as body-on-frame construction. This process is used today for heavy-duty vehicles such as trucks, which benefit from a strong central frame that bears the forces involved in activities such as carrying freight, while the flexible mounting between body and frame absorbs the relative movements of the engine and axle.

In modern passenger-car designs, the chassis frame and the body are combined into a single structural element. In this arrangement, called unit-body (or unibody) construction, the steel body shell is reinforced with braces that make it rigid enough to resist the forces that are applied to it. Separate frames or partial “stub” frames have been used for some cars to achieve better noise-isolation characteristics. The heavier-gauge steel present in modern component designs also tends to absorb energy during impacts and limit intrusion in accidents.

Engine

An internal-combustion engine goes through four strokes: intake, compression, combustion (power), and exhaust. As the piston moves during each stroke, it turns the crankshaft.

The typical sequence of cycle events in a four-stroke diesel engine involves a single intake valve, fuel-injection nozzle, and exhaust valve. Injected fuel is ignited by its reaction to compressed hot air in the cylinder, a more efficient process than that of the spark-ignition internal-combustion engine.

A wide range of engines has been used experimentally and in automotive production. The most successful for automobiles has been the gasoline-fueled reciprocating-piston internal-combustion engine, operating on a four-stroke cycle, while diesel engines are widely used for trucks and buses. The gasoline engine was originally selected for automobiles because it could operate more flexibly over a wide range of speeds, and the power developed for a given weight engine was reasonable; it could be produced by economical mass-production methods; and it used a readily available, moderately priced fuel. Reliability, compact size, exhaust emissions, and range of operation later became important factors.

There has been an ongoing reassessment of these priorities with new emphasis on the reduction of greenhouse gases (see greenhouse effect) or pollution-producing characteristics of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements that previously were not close to being economically feasible. Several limited-production battery-powered electric vehicles are marketed today. In the past they had not proved to be competitive, because of costs and operating characteristics. The gasoline engine, with new emission-control devices to improve emission performance, has been challenged in recent years by hybrid power systems that combine gasoline or diesel engines with battery systems and electric motors. Such designs are, however, more complex and therefore more costly.

The evolution of higher-performance engines in the United States led the industry away from long, straight engine cylinder layouts to compact six- and eight-cylinder V-type layouts for larger cars (with horsepower ratings up to about 350). Smaller cars depend on smaller four-cylinder engines. European automobile engines were of a much wider variety, ranging from 1 to 12 cylinders, with corresponding differences in overall size, weight, piston displacement, and cylinder bores. A majority of the models had four cylinders and horsepower ratings up to 120. Most engines had straight or in-line cylinders. There were, however, several V-type models and horizontally opposed two- and four-cylinder makes. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. Increased interest in improved fuel economy brought a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. Variable valve timing to improve performance and lower emissions has been achieved by manufacturers in all parts of the world. Electronic controls automatically select the better of two profiles on the same cam for higher efficiency when engine speeds and loads change.

Fuel

Specially formulated gasoline is essentially the only fuel used for automobile operation, although diesel fuels are used for many trucks and buses and a few automobiles, and compressed liquefied hydrogen has been used experimentally. The most important requirements of a fuel for automobile use are proper volatility, sufficient antiknock quality, and freedom from polluting by-products of combustion. The volatility is reformulated seasonally by refiners so that sufficient gasoline vaporizes, even in extremely cold weather, to permit easy engine starting. Antiknock quality is rated by the octane number of the gasoline. The octane number requirement of an automobile engine depends primarily on the compression ratio of the engine but is also affected by combustion-chamber design, the maintenance condition of engine systems, and chamber-wall deposits. In the 21st century regular gasoline carried an octane rating of 87 and high-test in the neighbourhood of 93.

Automobile manufacturers have lobbied for regulations that require the refinement of cleaner-burning gasolines, which permit emission-control devices to work at higher efficiencies. Such gasoline was first available at some service stations in California, and from 2017 the primary importers and refiners of gasoline throughout the United States were required to remove sulfur particles from fuel to an average level of 10 parts per million (ppm).

Vehicle fleets fueled by natural gas have been in operation for several years. Carbon monoxide and particulate emissions are reduced by 65 to 90 percent. Natural-gas fuel tanks must be four times larger than gasoline tanks for equivalent vehicles to have the same driving range. This compromises cargo capacity.

Ethanol (ethyl alcohol) is often blended with gasoline (15 parts ethanol to 85 parts gasoline) to raise its octane rating, which results in a smoother-running engine. Ethanol, however, has a lower energy density than gasoline, which results in decreased range per tankful.
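
To make the range penalty concrete, here is a minimal Python sketch. The volumetric energy densities used are approximate reference values assumed for illustration; they are not figures taken from this article.

  # Rough estimate of the range penalty of an ethanol-gasoline blend.
  # Energy densities below are approximate, assumed reference values.
  GASOLINE_MJ_PER_L = 34.2   # gasoline, megajoules per litre (approx.)
  ETHANOL_MJ_PER_L = 23.5    # ethanol, megajoules per litre (approx.)

  def blend_energy(ethanol_fraction):
      """Volumetric energy density of a blend, in MJ/L."""
      return (ethanol_fraction * ETHANOL_MJ_PER_L
              + (1 - ethanol_fraction) * GASOLINE_MJ_PER_L)

  e15 = blend_energy(0.15)                 # 15 parts ethanol, 85 gasoline
  penalty = 1 - e15 / GASOLINE_MJ_PER_L
  print(f"E15 energy density: {e15:.1f} MJ/L")          # about 32.6 MJ/L
  print(f"Approximate range reduction: {penalty:.1%}")  # roughly 5%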

Lubrication

All moving parts of an automobile require lubrication. Without it, friction would increase power consumption and damage the parts. The lubricant also serves as a coolant, a noise-reducing cushion, and a sealant between engine piston rings and cylinder walls. The engine lubrication system incorporates a gear-type pump that delivers filtered oil under pressure to a system of drilled passages leading to various bearings. Oil spray also lubricates the cams and valve lifters.

Wheel bearings and universal joints require a fairly stiff grease; other chassis joints require a soft grease that can be injected by pressure guns. Hydraulic transmissions require a special grade of light hydraulic fluid, and manually shifted transmissions use a heavier gear oil similar to that for rear axles to resist heavy loads on the gear teeth. Gears and bearings in lightly loaded components, such as generators and window regulators, are fabricated from self-lubricating plastic materials. Hydraulic fluid is also used in other vehicle systems in conjunction with small electric pumps and motors.

Cooling system

Almost all automobiles employ liquid cooling systems for their engines. A typical automotive cooling system comprises:

1. A series of channels cast into the engine block and cylinder head, surrounding the combustion chambers with circulating water or other coolant to carry away excessive heat.
2. A radiator, consisting of many small tubes equipped with a honeycomb of fins to radiate heat rapidly, which receives and cools hot liquid from the engine.
3. A centrifugal-type water pump with which to circulate coolant.
4. A thermostat, which maintains constant temperature by automatically varying the amount of coolant passing into the radiator.
5. A fan, which draws fresh air through the radiator.

For operation at temperatures below 0 °C (32 °F), it is necessary to prevent the coolant from freezing. This is usually done by adding some compound, such as ethylene glycol, to depress the freezing point of the coolant. By varying the amount of additive, it is possible to protect against freezing of the coolant down to any minimum temperature normally encountered. Coolants contain corrosion inhibitors designed to make it necessary to drain and refill the cooling system only every few years.
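
To see how the additive concentration sets the protection level, the following sketch interpolates the freezing point of an ethylene glycol/water mixture from a small table. The table entries are rough, assumed values for illustration only.

  # Approximate freezing point of an ethylene glycol/water coolant,
  # by linear interpolation over a small table of reference points.
  # The table values are rough, assumed figures for illustration only.
  POINTS = [  # (glycol, % by volume; freezing point, deg C)
      (0, 0.0), (20, -8.0), (30, -15.0), (40, -24.0), (50, -37.0), (60, -52.0),
  ]

  def freezing_point(glycol_pct):
      """Linearly interpolate the freezing point for 0-60 % glycol."""
      if not 0 <= glycol_pct <= 60:
          raise ValueError("table covers 0-60 % glycol only")
      for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
          if x0 <= glycol_pct <= x1:
              return y0 + (y1 - y0) * (glycol_pct - x0) / (x1 - x0)

  print(freezing_point(50))   # about -37 deg C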

Air-cooled cylinders operate at higher, more efficient temperatures, and air cooling offers the important advantage of eliminating not only freezing and boiling of the coolant at temperature extremes but also corrosion damage to the cooling system. Control of engine temperature is more difficult, however, and high-temperature-resistant ceramic parts are required when design operating temperatures are significantly increased.

Pressurized cooling systems have been used to increase effective operating temperatures. Partially sealed systems using coolant reservoirs for coolant expansion if the engine overheats were introduced in the early 1970s. Specially formulated coolants that do not deteriorate over time eliminate the need for annual replacement.

Electrical system

The electrical system comprises a storage battery, generator, starting (cranking) motor, lighting system, ignition system, and various accessories and controls. Originally, the electrical system of the automobile was limited to the ignition equipment. With the advent of the electric starter on a 1912 Cadillac model, electric lights and horns began to replace the kerosene and acetylene lights and the bulb horns. Electrification was rapid and complete, and, by 1930, 6-volt systems were standard everywhere.

Increased engine speeds and higher cylinder pressures made it increasingly difficult to meet high ignition voltage requirements. The larger engines required higher cranking torque. Additional electrically operated features—such as radios, window regulators, and multispeed windshield wipers—also added to system requirements. To meet these needs, 12-volt systems replaced the 6-volt systems in the late 1950s around the world.

The ignition system provides the spark to ignite the air-fuel mixture in the cylinders of the engine. The system consists of the spark plugs, coil, distributor, and battery. In order to jump the gap between the electrodes of the spark plugs, the 12-volt potential of the electrical system must be stepped up to about 20,000 volts. This is done by a circuit that starts with the battery, one side of which is grounded on the chassis and leads through the ignition switch to the primary winding of the ignition coil and back to the ground through an interrupter switch. Interrupting the primary circuit induces a high voltage across the secondary terminal of the coil. The high-voltage secondary terminal of the coil leads to a distributor that acts as a rotary switch, alternately connecting the coil to each of the wires leading to the spark plugs.
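
The step-up from 12 volts to about 20,000 volts can be pictured with a back-of-the-envelope calculation. In the sketch below, both the turns ratio and the primary transient voltage are assumed, illustrative numbers, not specifications from this article.

  # Order-of-magnitude sketch of ignition-coil voltage step-up.
  # Interrupting the primary circuit makes the magnetic field collapse,
  # inducing a transient across the primary winding; the secondary
  # winding multiplies that transient by the turns ratio.
  TURNS_RATIO = 100            # secondary-to-primary turns (assumed)
  V_PRIMARY_TRANSIENT = 200    # volts induced across the primary (assumed)

  v_secondary = TURNS_RATIO * V_PRIMARY_TRANSIENT
  print(f"Secondary voltage: about {v_secondary:,} V")   # about 20,000 V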

Solid-state or transistorized ignition systems were introduced in the 1970s. These distributor systems provided increased durability by eliminating the frictional contacts between breaker points and distributor cams. The breaker point was replaced by a revolving magnetic-pulse generator in which alternating-current pulses trigger the high voltage needed for ignition by means of an amplifier electronic circuit. Changes in engine ignition timing are made by vacuum or electronic control unit (microprocessor) connections to the distributor.

Exploded view of an automotive alternator. The engine's turning crankshaft, connected to the alternator's pulley by a belt, turns the magnetic rotor inside the stationary stator assembly, generating an alternating current. The diode assembly rectifies the alternating current, producing direct current, which is used to meet the demands of the vehicle's electrical system, including recharging the battery.

The source of energy for the various electrical devices of the automobile is a generator, or alternator, that is belt-driven from the engine crankshaft. The design is usually an alternating-current type with built-in rectifiers and a voltage regulator to match the generator output to the electric load and also to the charging requirements of the battery, regardless of engine speed.

A lead-acid battery serves as a reservoir to store excess output of the generator. This provides energy for the starting motor and power for operating other electric devices when the engine is not running or when the generator speed is not sufficiently high for the load.

The starting motor drives a small spur gear so arranged that it automatically moves in to mesh with gear teeth on the rim of the flywheel as the starting-motor armature begins to turn. When the engine starts, the gear is disengaged, thus preventing damage to the starting motor from overspeeding. The starting motor is designed for high current consumption and delivers considerable power for its size for a limited time.

Headlights must satisfactorily illuminate the highway ahead of the automobile for driving at night or in inclement weather without temporarily blinding approaching drivers. This was achieved in modern cars with double-filament bulbs with a high and a low beam, called sealed-beam units. Introduced in 1940, these bulbs found widespread use following World War II. Such units could have only one filament at the focal point of the reflector. Because of the greater illumination required for high-speed driving with the high beam, the lower beam filament was placed off centre, with a resulting decrease in lighting effectiveness. Separate lamps for these functions can also be used to improve illumination effectiveness.

Dimming is automatically achieved on some cars by means of a photocell-controlled switch in the lamp circuit that is triggered by the lights of an oncoming car. Lamp clusters behind aerodynamic plastic covers permitted significant front-end drag reduction and improved fuel economy. In this arrangement, steerable headlights became possible with an electric motor to swivel the lamp assembly in response to steering wheel position. The regulations of various governments dictate brightness and field of view requirements for vehicle lights.

Signal lamps and other special-purpose lights have increased in usage since the 1960s. Amber-coloured front and red rear signal lights are flashed as a turn indication; all these lights are flashed simultaneously in the “flasher” (hazard) system for use when a car is parked along a roadway or is traveling at a low speed on a high-speed highway. Marker lights that are visible from the front, side, and rear also are widely required by law. Red-coloured rear signals are used to denote braking, and cornering lamps, in connection with turning, provide extra illumination in the direction of an intended turn. Backup lights provide illumination to the rear and warn anyone behind the vehicle when the driver is backing up. High-intensity light-emitting diodes (LEDs) have been developed for various signal and lighting applications.

Transmission

The main elements of the power train of a front-wheel-drive automobile are the transversely mounted engine and the transmission, which transfers the torque, or turning energy, of the engine to the drive wheels through a short drive shaft.

The gasoline engine must be disconnected from the driving wheels when it is started and when idling. This characteristic necessitates some type of unloading and engaging device to permit gradual application of load to the engine after it has been started. The torque, or turning effort, that the engine is capable of producing is low at low crankshaft speeds, increasing to a maximum at some fairly high speed representing the maximum, or rated, horsepower.

The efficiency of an automobile engine is highest when the load on the engine is high and the throttle is nearly wide open. At moderate speeds on level pavement, the power required to propel an automobile is only a fraction of this. Under normal driving conditions at constant moderate speed, the engine may operate at an uneconomically light load unless some means is provided to change its speed and power output.
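
A rough road-load calculation shows how small that fraction is. The sketch below sums rolling resistance and aerodynamic drag for a hypothetical sedan; every vehicle parameter (mass, rolling-resistance coefficient, drag coefficient, frontal area) is an assumed, typical value.

  # Road-load power needed to cruise at constant speed on level pavement:
  # rolling resistance plus aerodynamic drag.  All parameters assumed.
  RHO = 1.2    # air density, kg/m^3
  G = 9.81     # gravitational acceleration, m/s^2

  def cruise_power_kw(mass_kg, c_rr, c_d, area_m2, speed_mps):
      f_roll = c_rr * mass_kg * G                          # rolling drag, N
      f_aero = 0.5 * RHO * c_d * area_m2 * speed_mps ** 2  # air drag, N
      return (f_roll + f_aero) * speed_mps / 1000.0        # power, kW

  # Hypothetical sedan: 1500 kg, Crr 0.01, Cd 0.30, 2.2 m^2, about 100 km/h
  print(round(cruise_power_kw(1500, 0.01, 0.30, 2.2, 27.8), 1), "kW")

On these assumptions the cruise power is roughly 13 kW (about 17 horsepower), a small fraction of the rated power of a typical engine.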

The transmission is such a speed-changing device. Installed in the power train between the engine and the driving wheels, it permits the engine to operate at a higher speed when its full power is needed and to slow down to a more economical speed when less power is needed. Under some conditions, as in starting a stationary vehicle or in ascending steep grades, the torque of the engine is insufficient, and amplification is needed. Most devices employed to change the ratio of the speed of the engine to the speed of the driving wheels multiply the engine torque by the same factor by which the engine speed is increased.
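
This rule is simply conservation of power through an ideal gear train: if the engine turns n times for each turn of the driving wheels, the wheel torque is n times the engine torque, less friction losses. A minimal sketch with illustrative numbers:

  # Ideal gear train: power in equals power out, so torque rises by the
  # same factor by which rotational speed drops.
  def wheel_torque(engine_torque_nm, gear_ratio, efficiency=1.0):
      """gear_ratio = engine speed / wheel-shaft speed.
      efficiency < 1 models friction losses; 1.0 is the ideal case."""
      return engine_torque_nm * gear_ratio * efficiency

  # e.g. 150 N-m of engine torque through an overall 10:1 reduction
  # (first gear times final drive; illustrative figures):
  print(wheel_torque(150, 10))   # 1500 N-m at the driving wheels (ideal)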

The simplest automobile transmission is the sliding-spur gear type with three or more forward speeds and reverse. The desired gear ratio is selected by manipulating a shift lever that slides a spur gear into the proper position to engage the various gears. A clutch is required to engage and disengage gears during the selection process. The necessity of learning to operate a clutch is eliminated by an automatic transmission. Most automatic transmissions employ a hydraulic torque converter, a device for transmitting and amplifying the torque produced by the engine. Each type provides for manual selection of reverse and low ranges that either prevent automatic upshifts or employ lower gear ratios than are used in normal driving. Grade-retard provisions are also sometimes included to supply dynamic engine braking on hills. Automatic transmissions not only require little skill to operate but also make possible better performance than is obtainable with designs that require clutch actuation.

In hydraulic transmissions, shifting is done by a speed-sensitive governing device that changes the position of valves that control the flow of hydraulic fluid. The vehicle speeds at which shifts occur depend on the position of the accelerator pedal, and the driver can delay upshifts until higher speed is attained by depressing the accelerator pedal further. Control is by hydraulically engaged bands and multiple-disk clutches running in oil, either by the driver’s operation of the selector lever or by speed- and load-sensitive electronic control in the most recent designs. Compound planetary gear trains with multiple sun gears and planet pinions have been designed to provide a low forward speed, intermediate speeds, a reverse, and a means of locking into direct drive. This unit is used with various modifications in almost all hydraulic torque-converter transmissions. All transmission control units are interconnected with vehicle emission control systems that adjust engine timing and air-to-fuel ratios to reduce exhaust emissions.

Oil in the housing is accelerated outward by rotating vanes in the pump impeller and, reacting against vanes in the turbine impeller, forces them to rotate. The oil then passes into the stator vanes, which redirect it to the pump. The stator serves as a reaction member, providing more torque to turn the turbine than was originally applied to the pump impeller by the engine. Thus, it acts to multiply engine torque by a factor of up to 2.5 to 1.

Blades in all three elements are specially contoured for their specific function and to achieve particular multiplication characteristics. Through a clutch linkage, the stator is allowed gradually to accelerate until it reaches the speed of the pump impeller. During this period torque multiplication gradually drops to approach 1 to 1.

The hydraulic elements are combined with two or more planetary gear sets, which provide further torque multiplication between the turbine and the output shaft.

Continuously (or infinitely) variable transmissions provide a very efficient means of transferring engine power and, at the same time, automatically changing the effective input-to-output ratio to optimize economy by keeping the engine running within its best power range. Most designs employ two variable-diameter pulleys connected by either a steel or high-strength rubber V-belt. The pulleys are split so that effective diameters may be changed by an electrohydraulic actuator to change the transmission ratio. This permits the electronic control unit to select the optimum ratio possible for maximum fuel economy and minimum emissions at all engine speeds and loads. Originally these units were limited to small cars, but belt improvements have made them suitable for larger cars.
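
The ratio change itself follows ordinary belt-drive kinematics: the speed ratio equals the ratio of the two effective pulley diameters. A minimal sketch with assumed diameters:

  # Belt CVT kinematics: output speed = input speed x (drive diameter /
  # driven diameter), so the speed reduction ratio is driven/drive.
  def cvt_ratio(drive_dia_mm, driven_dia_mm):
      """Speed reduction from engine to output; >1 means output is slower."""
      return driven_dia_mm / drive_dia_mm

  print(cvt_ratio(60, 150))   # 2.5:1 reduction (low range, pulling away)
  print(cvt_ratio(150, 60))   # 0.4:1 overdrive (economical cruising)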

Other mechanical subsystems:

Axles

Power is conveyed from the transmission to the rear axle of rear-wheel-drive vehicles by a drive shaft and universal joints. As body lines were progressively lowered, the floor level came closer to the drive shaft, necessitating floor humps or tunnels to provide clearance. The adoption of hypoid or offset spiral bevel gears in the rear axle provided an increase in this clearance by lowering the drive pinion below the centre of the axle shafts.

The ring gear of the rear axle surrounds the housing of a differential gear train that serves as an equalizer in dividing the torque between the two driving wheels while permitting one to turn faster than the other when rounding corners. The axle shafts terminate in bevel gears that are connected by several smaller bevel gears mounted on radial axles attached to the differential housing and carried around with it by the ring gear. In its simplest form this differential has the defect that one driving wheel may spin when it loses traction, and the torque applied to the wheel, being equal to that of the slipping wheel, will not be sufficient to drive the car. Several differentials have been developed to overcome this difficulty.
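
The equalizing behaviour can be summed up in one kinematic constraint: the speed of the ring gear (the carrier) is the average of the two axle-shaft speeds, so one wheel speeds up exactly as much as the other slows down. A minimal sketch with illustrative figures:

  # Open differential: carrier speed is the average of the wheel speeds.
  def wheel_speeds(carrier_rpm, delta_rpm):
      """delta_rpm: how much faster the outer wheel turns than the carrier."""
      return carrier_rpm + delta_rpm, carrier_rpm - delta_rpm

  outer, inner = wheel_speeds(800, 50)   # illustrative cornering case
  assert (outer + inner) / 2 == 800      # the averaging constraint
  print(outer, inner)                    # 850 and 750 rpm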

Articulated rear axles offer individual wheel suspension at the rear as well as the front. Individual rear suspension not only eliminates the heavy rear axle housing but also permits lowered bodies with no floor humps, because the transmission and differential gears can be combined in a housing mounted on a rear cross member moving with the body under suspension-spring action. In some instances, articulated or swing axles that have tubular housings surrounding the axle shafts terminate in spherical head segments that fit into matching sockets formed in the sides of the central gear housing. Universal joints within the spherical elements permit the axle shafts to move with the actions of the suspension springs. The gear housing is supported by a rear cross member of the chassis and moves with the sprung portion of the vehicle, as does the drive shaft. Other types eliminate the axle shaft housings and drive the wheels through two open shafts fitted with universal joints. The wheels are then individually supported by radius rods or other suitable linkage. Individually suspended wheels are simplified for rear-engine, rear-wheel-drive cars and front-engine, front-wheel-drive mechanisms. A combined transmission and differential assembly can form a unit with the engine. Two short transverse drive shafts, each having universal joints at both ends, transmit power to the wheels.

Brakes

Originally, most systems for stopping vehicles were mechanically actuated drum brakes with internally expanding shoes; i.e., foot pressure exerted on the brake pedal was carried directly to semicircular brake shoes by a system of flexible cables. Mechanical brakes, however, were difficult to keep adjusted so that equal braking force was applied at each wheel; and, as vehicle weights and speeds increased, more and more effort on the brake pedal was demanded of the driver.

Mechanical brakes were replaced by hydraulic systems, in which the brake pedal is connected to pistons in master cylinders and thence by steel tubing with flexible sections to individual cylinders at the wheels. Front and rear hydraulic circuits are separated. The wheel cylinders are located between the movable ends of the brake shoes, and each is fitted with two pistons that are forced outward toward the ends of the cylinder by the pressure of the fluid between them. As these pistons move outward, they push the brake shoes against the inner surface of the brake drum attached to the wheel. Because the wheel-cylinder pistons are larger in diameter than the master-cylinder piston, the same line pressure produces a greater force at the wheels, a hydraulic force multiplication that reduces the effort required of the driver.
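
That multiplication follows from Pascal's principle: the pressure is the same throughout the fluid, so force scales with piston area. A minimal sketch with assumed piston diameters:

import math

# Pascal's principle: equal pressure, force proportional to piston area.
def piston_area(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

pedal_force_n = 500.0        # illustrative pushrod force on the master piston
master = piston_area(19.0)   # assumed 19 mm master-cylinder bore
wheel = piston_area(38.0)    # assumed 38 mm wheel-cylinder bore
pressure = pedal_force_n / master
print(pressure * wheel)      # ~2000 N at the wheel: a 4-to-1 multiplication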

[Figure: Vacuum-assisted power brake for an automobile. A constant vacuum is maintained in the brake booster by the engine. When the brake pedal is depressed, a poppet valve opens, and air rushes into a pressure chamber on the driver's side of the booster. The pressure exerted by this air against the vacuum pushes a piston, thus assisting the pressure exerted by the driver on the pedal. The piston in turn exerts pressure on the master cylinder, from which brake fluid is forced to act on the brakes.]

Further increases in vehicle weights and speeds made even hydraulic brakes difficult for drivers to operate effectively, and automobiles consequently were equipped with power brake systems. These are virtually the same as the hydraulic system except that the force on the piston of the master cylinder is multiplied by power assists of several types instead of being supplied by foot pressure on the pedal alone.

Overheating of the brake drums and shoes causes the brakes to fade and lose their effectiveness when held in engagement for a considerable length of time. This problem has been attacked by the use of aluminum cooling fins bonded to the outside of the brake drums to increase the rate of heat transfer to the air. Vents in the wheels are provided to increase the air circulation for cooling.

[Figure: A disc brake assembly. Wheel rotation is slowed by friction when hydraulic pistons in the caliper press the brake pads (shoe and lining assemblies) against the spinning disc (rotor), which is bolted to the wheel hub.]
Disc brakes, originally developed for aircraft, are ubiquitous, in spite of their higher cost, because of their fade resistance. Although there are some four-wheel systems, usually discs are mounted on the front wheels, and conventional drum units are retained at the rear. They have been standard on most European automobiles since the 1950s and most American models since the mid-1970s. Each wheel has a hub-mounted disc and a brake unit or caliper rigidly attached to the suspension. The caliper employs two friction-pad assemblies, one on each side of the disc. When the brake is applied, hydraulic pressure forces the friction pads against the disc. This arrangement is self-adjusting, and the ability of the discs to dissipate heat rapidly in the open airstream makes them practically immune to fading.

Antilock braking systems (ABS) became available in the late 1980s and since then have become standard on a growing number of passenger cars. ABS installations consist of wheel-mounted sensors that input wheel rotation speed into a microprocessor. When wheel rotation slows abruptly relative to vehicle speed because of tire slippage or loss of traction, the control unit signals a hydraulic or electric modulator to regulate brake line pressure to forestall impending wheel lockup. The brake continues to function as the system cyclically releases and applies pressure, similar to but much faster than a driver rapidly pumping the brake pedal on a non-ABS-equipped automobile. The wheels continue to roll, retaining the driver's ability to steer the vehicle and stop in a shorter distance.
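
In outline, each control cycle compares wheel speed with vehicle speed and modulates line pressure accordingly. A minimal sketch with assumed slip thresholds (production controllers are far more elaborate):

# Hypothetical ABS control cycle: release pressure on impending lockup, reapply after.
def abs_step(vehicle_speed, wheel_speed, line_pressure):
    if vehicle_speed <= 0:
        return line_pressure
    slip = (vehicle_speed - wheel_speed) / vehicle_speed
    if slip > 0.2:                    # wheel slowing much faster than the car
        return line_pressure * 0.7    # release: forestall lockup
    if slip < 0.1:                    # wheel rolling freely again
        return line_pressure * 1.1    # reapply braking force
    return line_pressure

# Repeated many times per second, this mimics pumping the pedal, only far faster.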

Parking brakes usually are of the mechanical type, applying force only to the rear brake shoes by means of a flexible cable connected to a hand lever or pedal. On cars with automatic transmissions, an additional lock is usually provided in the form of a pawl that can be engaged, by placing the shift lever in the “park” position, to prevent the drive shaft and rear wheels from turning. The service brake pedal must be applied to permit shifting the transmission out of the park position. This eliminates the possibility of undesired vehicle motion that could be caused by accidental movement of the transmission control.

Steering

Automobiles are steered by a system of gears and linkages that transmit the motion of the steering wheel to the pivoted front wheel hubs. The gear mechanism, located at the lower end of the shaft carrying the steering wheel, is usually a worm-and-nut or cam-and-lever combination that rotates a shaft with an attached crank arm through a small angle as the steering wheel is turned. Tie rods attached to the arm convey its motion to the wheels. In cornering, the inner wheel must turn through a slightly greater angle than the outer wheel, because the inner wheel negotiates a sharper turn. The geometry of the linkage is designed to provide for this.
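
For an idealized linkage this requirement is the Ackermann condition: cot(outer angle) - cot(inner angle) = track width / wheelbase. A quick check with assumed dimensions:

import math

# Ackermann steering geometry: the outer wheel turns through a smaller angle.
wheelbase_m = 2.7   # assumed
track_m = 1.5       # assumed
inner_deg = 30.0    # chosen inner-wheel angle
cot_outer = 1 / math.tan(math.radians(inner_deg)) + track_m / wheelbase_m
outer_deg = math.degrees(math.atan(1 / cot_outer))
print(round(outer_deg, 1))  # ~23.6 degrees, less than the inner wheel's 30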

When the front wheels are independently suspended, the steering must be designed so that the wheels are not turned as the tie rods lengthen and shorten as a result of spring action. The point of linkage attachment to the steering gear must be placed so that it can move vertically with respect to the wheel mountings without turning the wheels.

Beginning in the 1930s in Europe and the United States, the engine and passenger compartment were moved forward to improve riding comfort and road-handling characteristics, shifting the distribution of weight between the front and rear wheels toward the front. The weight carried on the front wheels increased to more than half of the total vehicle weight, and consequently the effort necessary to turn the wheels in steering increased. Larger, heavier cars with wider tires and lower tire pressure also contributed to drag between tires and road that had to be overcome in steering, particularly in parking. Considerable reduction in the work of steering resulted from increasing the efficiency of the steering gears and introducing better bearings in the front wheel linkage. Additional ease of turning the steering wheel was accomplished by increasing the overall steering gear ratio (the number of degrees of steering-wheel turn required to turn the front wheels one degree). However, large steering gear ratios made high-speed maneuverability more difficult, because the steering wheel had to be turned through greater angles. Moreover, steering mechanisms of higher efficiency were also more reversible; that is, road shocks were transmitted more completely from the wheels and had to be overcome to a greater extent by the driver. This caused a dangerous situation on rough roads or when a front tire blew out, because the wheel might be jerked from the driver's hands.

Power steering gear was developed to solve the steadily increasing steering problems. Power steering was first applied to heavy trucks and military vehicles early in the 1930s, and hundreds of patents were granted for devices to help the driver turn the steering wheel. Most of the proposed devices were hydraulic; some were electrical and some mechanical. In hydraulic systems, a pump driven by the engine maintains the fluid under pressure. A valve with a sensing device allows the fluid to enter and leave the power cylinder as necessary. Speed-sensitive systems are available to provide larger ratios for reduced effort at low speeds and lower ratios for steering at high speeds. Four-wheel steering systems in which the rear wheels turn in the opposite direction of the front wheels have had limited commercial use.

Suspension

The riding comfort and handling qualities of an automobile are greatly affected by the suspension system, in which the suspended portion of the vehicle is attached to the wheels by elastic members in order to cushion the impact of road irregularities. The specific nature of attaching linkages and spring elements varies widely among automobile models. The best rides are made possible by independent suspension systems, which permit the wheels to move independently of each other. In these systems the unsprung weight of the vehicle is decreased, softer springs are permissible, and front-wheel vibration problems are minimized. Spring elements used for automobile suspension members, in increasing order of their ability to store elastic energy per unit of weight, are leaf springs, coil springs, torsion bars, rubber-in-shear devices, and air springs.

The leaf spring, although comparatively inelastic, has the important advantage of accurately positioning the wheel with respect to the other chassis components, both laterally and fore and aft, without the aid of auxiliary linkages.

An important factor in spring selection is the relationship between load and deflection known as the spring rate, defined as the load in pounds divided by the deflection of the spring in inches. A soft spring has a low rate and deflects a greater distance under a given load. A coil or a leaf spring retains a substantially constant rate within its operating range of load and will deflect 10 times as much if a force 10 times as great is applied to it. The torsion bar, a long spring-steel element with one end held rigidly to the frame and the other twisted by a crank connected to the axle, can be designed to provide an increasing spring rate.
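
Using that definition, a small worked example (the load and deflection figures are assumed):

# Spring rate = load (lb) / deflection (in), per the definition above.
def spring_rate(load_lb, deflection_in):
    return load_lb / deflection_in

rate = spring_rate(600.0, 3.0)  # assumed: 600 lb compresses the spring 3 in
print(rate)                     # 200.0 lb/in
# A constant-rate spring deflects 10 times as far under 10 times the load:
print(6000.0 / rate)            # 30.0 in, in this purely linear model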

A soft-spring suspension provides a comfortable ride on a relatively smooth road, but the occupants move up and down excessively on a rough road. The springs must be stiff enough to prevent a large deflection at any time because of the difficulty in providing enough clearance between the sprung portion of the vehicle and the unsprung portion below the springs. Lower roof heights make it increasingly difficult to provide the clearance needed for soft springs. Road-handling characteristics also suffer because of what is known as sway, or roll, the sidewise tilting of the car body that results from centrifugal force acting outward on turns. The softer the suspension, the more the outer springs are compressed and the inner springs expanded. In addition, the front end dives more noticeably when braking with soft front springs.

Air springs offer several advantages over metal springs, one of the most important being the possibility of controlling the spring rate. Inherently, the force required to deflect the air unit increases with greater deflection, because the air is compressed into a smaller space and greater pressure is built up, thus progressively resisting further deflection.
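
A minimal sketch of that progressive behaviour, treating the trapped air isothermally with Boyle's law (all figures assumed; real units behave polytropically and have contoured pistons):

# Idealized air spring: compressing a fixed mass of air raises its pressure,
# so the force needed for each further increment of deflection keeps growing.
P0 = 500_000.0   # assumed initial absolute pressure, Pa
AREA = 0.01      # assumed effective piston area, m^2
V0 = 0.002       # assumed initial trapped air volume, m^3

def force(deflection_m):
    volume = V0 - AREA * deflection_m
    return P0 * V0 / volume * AREA   # Boyle's law: P x V is constant

for x in (0.02, 0.04, 0.06):
    print(round(force(x)))  # 5556, 6250, 7143 N: a rising rate, not a straight line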

A combination hydraulic-fluid-and-air suspension system has been developed in which the elastic medium is a sealed-in, fixed mass of air, and no air compressor is required. The hydraulic portion of each spring is a cylinder mounted on the body sill and fitted with a plunger that is pivotally attached to the wheel linkage to form a hydraulic strut. Each spring cylinder has a spherical air chamber attached to its outer end. The sphere is divided into two chambers by a flexible diaphragm, the upper occupied by air and the lower by hydraulic fluid that is in communication with the hydraulic cylinder through a two-way restrictor valve. This valve limits the rate of movement of the plunger in the cylinder, since fluid must be pushed into the sphere when the body descends and returned when it rises. This damping action thus controls the motion of the wheel with respect to the sprung portion of the vehicle supported by the spring.

So-called active suspensions incorporate a microprocessor to vary the orifice size of the restrictor valve in a hydraulic suspension or shock absorber (a mechanical device that dampens the rate of energy stored and released by the springs). This changes the effective spring rate. Control inputs may be vehicle speed, load, acceleration, lateral force, or a driver preference.

Tires

The pneumatic rubber tire is the point of contact between the automobile and the road surface. It functions to provide traction for acceleration and braking and limits the transmission of road vibrations to the automobile body. Inner tubes within tires were standard until the 1950s, when seals between the tire and the wheel were developed, leading to tubeless tires, now used almost universally.

Tire tread designs are tailored for the characteristics of the surface on which the vehicle is intended to operate. Deep designs provide gripping action in loose soil and snow, while smooth surfaces provide maximum contact area for applications such as racing. Current passenger car treads are a compromise between these extremes.

A typical tire casing is fabricated from layers, or plies, of varying proportions of rubber compounds reinforced with synthetic and carbon fibres or steel wire. The composition of the reinforcement and the angle of its application to the axis of the tread affect the ability of the tire to respond to sidewise forces created during cornering. They also affect harshness or vibration-transmission characteristics.

By 1990, longitudinal-, bias-, and radial-ply constructions were in use, with layers of two, four, or more plies, depending on the load capacity of the design. An additional factor relating to the load capacity of a particular construction is the pressure to which the tire is inflated. New designs also have lower height-to-width ratios to increase the road-contact area while maintaining a low standing height for the tire and consequently the car.
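
The height-to-width ratio is the middle figure of a standard tire size code such as 225/50R17 (section width in mm, aspect ratio in percent, rim diameter in inches). A short sketch showing how a lower ratio lowers the tire's standing height; the sizes are chosen for illustration:

# Overall tire diameter from a size code: rim plus two sidewalls.
def overall_diameter_mm(width_mm, aspect_pct, rim_in):
    sidewall = width_mm * aspect_pct / 100
    return rim_in * 25.4 + 2 * sidewall

print(round(overall_diameter_mm(225, 50, 17)))  # 657 mm
print(round(overall_diameter_mm(225, 40, 17)))  # 612 mm: lower profile, lower car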

Security systems

Motor vehicle theft has been a problem since the start of the automobile age. The 1900 Leach automobile featured a removable steering wheel that the driver could carry away to prevent unauthorized vehicle use. More recently, sophisticated electronic alarms, some of which incorporate radio beacons, and more tamper-resistant wiring and electronic locks have been produced. Through the use of wireless technology, vehicles equipped with Global Positioning System (GPS) satellite navigation systems may be tracked and recovered when stolen.

From its beginnings the automobile posed serious hazards to public safety. Vehicle speed and weight gave collisions enough energy to kill and injure occupants and pedestrians in great numbers (13,000 deaths in 1920 in the United States alone and many more in Europe, as well as many serious injuries). During the 20th century the rates of death and injury declined significantly in terms of vehicle miles, but, because of the increased number of vehicles on the road, total fatalities declined only slightly. During 2005–10, however, fatalities declined by 25 percent for reasons that are not understood. This downward trend continued in the following decade. Most fatal accidents occur on either city streets or secondary roads; new divided roadway designs are relatively safer. Driver training, vehicle maintenance, highway improvement, and law enforcement were identified as key areas responsible for improving safety, but the basic design of the vehicle itself and the addition of special safety features were significant factors in the worldwide reduction of fatal accidents. Rates vary from country to country. Safety features of automobiles come under two distinct headings: accident avoidance and occupant protection.

Accident-avoidance systems are designed to help the driver maintain better control of the car. The dual-master-cylinder brake system is a good example. This protects the driver against sudden loss of brake line pressure. Front and rear brake lines are separated so that if one fails, the other will continue to function.

Systems for protecting occupants in the event of an accident fall into four major classes: maintenance of passenger-compartment integrity, occupant restraints, interior-impact energy-absorber systems, and exterior-impact energy absorbers. Statistics indicate a far higher chance for survival among accident victims remaining inside the passenger compartment. Passenger-compartment integrity depends significantly on the proper action of the doors, which must remain closed in the event of an accident and must be sufficiently secure to prevent intrusion. Door-latch mechanisms have been designed to resist forward, rearward, and sideward forces and incorporate two-stage catches, so that the latch may hold if the primary stage fails. Reinforcement beams in doors are designed to deflect impact forces downward to the more rigid frame structure below the door. Forces are directed through reinforced door pillars and hinges.

Occupant restraints are used to help couple the passenger to the car. They permit decelerating with the car rather than free flight into the car structure or into the air. A combination of lap and shoulder belts is the most common restraint system. The belts consist of web fabrics that are required by regulations in various countries to withstand 6,000-pound (2,700-kg) test loading and are bolted to the car underbody and roof rail. Button-type latch release mechanisms are provided for buckles.

Another line of engineering development has centred on passive restraints that do not require any action by the occupant. In particular, commercial air bags were introduced in the 1980s, and all new automobiles sold in the United States since 1998 (1999 for light trucks) have required both driver and front passenger air bags. When a vehicle equipped with an air bag undergoes a “hard” impact, roughly in excess of 10 miles (16 km) per hour, a crash sensor sends an electrical signal that triggers an explosion which generates nitrogen gas to inflate air bags located in the steering column, front dashboard, and possibly other locations. Air bags burst from their locations and inflate to a position between occupants and the car structure in less than one-tenth of a second. The inflated air bags absorb impact energy from occupants by forcing gas out through a series of ports or orifices in the air bag fabric. Air bags collapse in about one second, thereby allowing occupants to exit the vehicle.

It has been estimated that 46 percent of front-seat fatalities could be eliminated by air bags when they are used in conjunction with lap or lap-and-shoulder belts. This is a 10 percent improvement over the use of lap and shoulder belt systems alone. The front-mounted air bag does not provide protection in side or rear crashes or in prolonged impacts from rollovers. Additional side-mounted air bags, however, provide a measure of protection in side impacts and are available in some vehicle models.

Interior-impact energy-absorbing devices augment restraint systems by absorbing energy from the occupant while minimizing injuries. The energy-absorbing steering column, introduced in 1967, is a good example of such a device. Instrument panels, windshield glass, and other surfaces that may be struck by an unrestrained occupant may be designed to absorb energy in a controlled manner.

Exterior-impact energy-absorbing devices include the structural elements of the chassis and body, which may be tailored to deform in a controlled manner to decelerate the automobile more gradually and, as a result, leave less force to be experienced by the occupants. Stress risers in the form of section irregularities have been built into front frame members of some cars. These are designed to buckle under severe loads and absorb energy in the process.

Emission controls

By-products of the operation of the gasoline engine include carbon monoxide, oxides of nitrogen, and hydrocarbons (unburned fuel compounds), each of which is a pollutant. To control the air pollution resulting from these emissions, governments establish quality standards and perform inspections to ensure that standards are met. Standards have become progressively more stringent, and the equipment necessary to meet them has become more complex.

Various engine modifications that alter emission characteristics have been successfully introduced. These include adjusted air-fuel ratios, lowered compression ratios, retarded spark timing, reduced combustion chamber surface-to-volume ratios, and closer production tolerances. To improve drivability (“responsiveness”) of some arrangements, preheated air from a heat exchanger on the exhaust manifold is ducted to the air cleaner.

The undesired evaporation of gasoline hydrocarbons into the air has been controlled by sealing the fuel tank and venting the tank through a liquid-vapour separator into a canister containing activated charcoal. During engine operation these vapours are desorbed and burned in the engine.

Among emission-control devices developed in the 1970s were catalytic converters (devices to promote combustion of hydrocarbons in the exhaust), exhaust-gas-recirculation systems, manifold reactors, fuel injection, and unitized ignition elements.

A catalytic converter consists of an insulated chamber containing a porous bed, or substrate, coated with catalytic material through which hot exhaust gas must pass before being discharged into the air. The catalyst is usually a noble metal such as platinum or palladium, heated by exhaust gas to about 500 °C (900 °F). At this temperature unburned hydrocarbons and carbon monoxide are further oxidized, while oxides of nitrogen are chemically reduced in a second chamber with a different catalyst. Problems with catalysts involve their intolerance for leaded fuels and the need to prevent overheating.

Exhaust-gas recirculation is a technique to control oxides of nitrogen, which are formed by the chemical reaction of nitrogen and oxygen at high temperatures during combustion. Either reducing the concentrations of these elements or lowering peak cycle temperatures will reduce the amount of nitrogen oxides produced. To achieve this, exhaust gas is piped from the exhaust manifold to the intake manifold. This dilutes the incoming fuel-air mixture and effectively lowers combustion temperature. The amount of recirculation is a function of throttle position but averages about 2 percent.
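
A simple mixing model shows the dilution effect; the residual-oxygen figure for exhaust is an assumption for illustration:

# Recirculated exhaust displaces fresh charge, lowering intake oxygen
# concentration and hence peak combustion temperature.
O2_FRESH = 0.21     # mole fraction of oxygen in fresh air
O2_EXHAUST = 0.01   # assumed residual oxygen in exhaust gas

def intake_o2(egr_fraction):
    return (1 - egr_fraction) * O2_FRESH + egr_fraction * O2_EXHAUST

print(round(intake_o2(0.02), 4))  # 0.206 at the text's 2 percent average
print(round(intake_o2(0.10), 4))  # 0.19 at a heavier recirculation rate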

Manifold reactors are enlarged and insulated exhaust manifolds into which air is injected and in which exhaust gas continues to burn. The effectiveness of such units depends on the amount of heat generated and the length of time the gas is within the manifold. Stainless steel and ceramic materials are used to provide durability at high operating temperatures (approaching 1,300 °C [about 2,300 °F]).

Fuel injection, as a replacement for carburetion, is almost universally employed to reduce exhaust emissions. The precise metering of fuel for each cylinder provides a means of ensuring that the chemically correct air-to-fuel ratio is being burned in the engine. This eliminates cylinder-to-cylinder variations and the tendency of cylinders that are most remote from the carburetor to receive less fuel than is desired. A variety of metering and control systems are commercially available. Timed injection, in which a small quantity of gasoline is squirted into each cylinder or intake-valve port during the intake stroke of the piston, is employed on a number of cars.

In several timed-injection systems, individual pumps at each intake valve are regulated (timed) by a microprocessor that monitors intake vacuum, engine temperature, ambient-air temperature, and throttle position and adjusts the time and duration of injection accordingly.

In the early 21st century motor vehicles were being driven more than 3.2 trillion miles per year in the United States. This is an increase of more than 58 percent in 30 years.

[Figure: Component systems of a typical electric automobile and hybrid gasoline-electric automobile.]
Modern electric cars and trucks have been manufactured in small numbers in Europe, Japan, and the United States since the 1980s. However, electric propulsion is only possible for relatively short-range vehicles, using power from batteries or fuel cells. In a typical system, a group of lead-acid batteries connected in series powers electric alternating-current (AC) induction motors to propel the vehicle. When nickel–metal hydride batteries are substituted, the driving range is doubled. A solid-state power inverter changes the direct current (DC) supplied by the battery pack to an AC output that is controlled by the driver using an accelerator pedal to vary the output voltage. Because of the torque characteristics of electric motors, conventional gear-type transmissions are not needed in most designs. Weight and drag reduction, as well as regenerative systems to recover energy that would otherwise be lost, are important considerations in extending battery life. Batteries may be recharged in six hours from a domestic electrical outlet.
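
The six-hour figure is consistent with simple charging arithmetic; both numbers below are assumptions chosen only to illustrate the division:

# Charging time = pack energy / charging power (losses ignored).
pack_kwh = 9.0     # assumed usable pack capacity
outlet_kw = 1.5    # assumed continuous draw from a domestic outlet
print(pack_kwh / outlet_kw)  # 6.0 hours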

Conventional storage-battery systems do not have high power-to-weight ratios for acceleration or energy-to-weight ratios for driving range to match gasoline-powered general-purpose vehicles. Special-purpose applications, however, may be practical because of the excellent low-emission characteristics of the system. Such systems have been used to power vehicles on the Moon and in specialized small vehicles driven within factories.

Several hybrid vehicles are now being produced. They combine an efficient gasoline engine with a lightweight, high-output electric motor that produces extra power when needed. During normal driving, the motor becomes a generator to recharge the battery pack. This eliminates the need to plug the car into an electrical outlet for recharging. The primary advantage of hybrids is that the system permits downsizing the engine and always operating in its optimum efficiency range through the use of advanced electronic engine and transmission controls.

Experimental systems

The U.S. automotive industry spends $18 billion or more on research and development of future products in a typical year—the most spent by any industry in the United States. Increasing pressure from various governments is requiring manufacturers to develop very low- and zero-emission vehicles. Authorities in California have estimated that motor vehicles produce 40 percent of the greenhouse gases that they consider to be responsible for climate change. To meet this challenge, manufacturers are working on a timetable to produce more efficient vehicle designs.

Expansion of the total potential automotive market in the future and concern for the environment may be expected to change cars of the future. Special-purpose vehicles designed for specific urban or rural functions, with appropriate power systems for each type of use, may be needed. Possibilities include solar, steam, gas turbine, new hybrid combinations, and other power sources.

Steam power plants have been reexamined in the light of modern technology and new materials. The continuous-combustion process used to heat the steam generator offers potentially improved emission characteristics.

Gas turbines have been tested extensively and have good torque characteristics, operate on a wide variety of fuels, have high power-to-weight ratios, meet emission standards, and offer quiet operation. Some studies have shown that the advantages of the system are best realized in heavy-duty vehicles operating on long, nearly constant speed runs. Efficiencies and operating characteristics can be improved by increasing operating temperatures. This may become commercially feasible utilizing ceramic materials that are cost-effective. Successful designs require regenerative systems to recover energy from hot exhaust gas and transfer it to incoming air. This improves fuel economy, reduces exhaust temperatures to safer levels, and eliminates the need for a muffler in some designs.

A number of other designs have been studied involving variations of engine combustion cycles such as turbocharged gasoline and diesel (two- and four-stroke) designs. Rotary engines have been produced in Germany and Japan, but they have been almost entirely discontinued because of exhaust emission control complexity. Variable valve timing can optimize performance and economy and provide a more constant engine torque output at different engine speeds. By delaying the opening of the engine exhaust valve, exhaust gas is effectively recirculated to reduce tailpipe emissions. Electro-hydraulic valves that totally replace the complexity of camshaft designs, or idlers that may be moved to change the geometry of the camshaft timing chain and retard valve opening, may be used for this purpose.

Solar-powered electric demonstration vehicles have been built by universities and manufacturers. Solar collector areas have proved to be too large for conventional cars, however. Development continues on solar cell design.

Microprocessors have become increasingly important in improving fuel economy and reducing undesirable exhaust emissions for all vehicle types. Research to develop so-called intelligent vehicles that can assist the driver and even operate without driver intervention, at least on special roads, has made some progress, with assistive technology becoming increasingly standard. These developments have been made possible by highly reliable solid-state digital computers and similarly reliable electronic sensors. The automobile industry has worked with governmental bodies to link vehicles to their environments using advanced telecommunication signals, electronic systems, and digital computers, both within the vehicle and aboard satellites and in other remote locations. Applications may be divided into functions for basic vehicle system assistance, safety and security applications, and information and entertainment systems.

The automobile industry is responsible for about two-thirds of the rubber, one-half of the platinum, one-third of the aluminum, one-seventh of the steel, and one-tenth of the copper consumed in the United States each year. About four-fifths of the material in a car is recyclable, and in the United States 19 out of 20 scrapped cars are recycled. Because the automobile is likely to remain an important part of the transportation system, it requires continuing improvement in safety and emission control as well as performance and cost.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Offline

#2024 2024-01-10 00:08:33

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2026) Injury

Gist

Injury is harm done to a person's or an animal's body, especially in an accident.

Summary

Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants.

Injuries can be caused in many ways, such as mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury.

Taxonomic range:

Animals

Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors.

Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent.

Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury.

Humans

Injuries to humans elicit an elaborate response including emergency medicine, trauma surgery, and pain management.

Injury in humans has been studied extensively for its importance in medicine. Much of medical practice including emergency medicine and pain management is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence, activity when injured and the role of human intent. Injuries can cause psychological harm including post-traumatic stress disorder in addition to physical harm.

Plants

In plants, injuries result from the eating of plant parts by herbivorous animals including insects and mammals, from damage to tissues by plant pathogens such as bacteria and fungi, which may gain entry after herbivore damage or in other ways, and from abiotic factors such as heat, freezing, flooding, lightning, and pollutants such as ozone. Plants respond to injury by signalling that damage has occurred, by secreting materials to seal off the damaged area, by producing antimicrobial chemicals, and in woody plants by regrowing over wounds.

Cell injury

Cell injury comprises the changes a cell undergoes as a result of stress from external or internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and, where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself. Cell death is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused.

Details

Physical injury

Physical injuries include those caused by mechanical trauma, heat and cold, electrical discharges, changes in pressure, and radiation. Mechanical trauma is an injury to any portion of the body from a blow, crush, cut, or penetrating wound. The complications of mechanical trauma are usually related to fracture, hemorrhage, and infection. They do not necessarily have to appear immediately after occurrence of the injury. Slow internal bleeding may remain masked for days and lead to an eventual emergency. Similarly, wound infection and even systemic infection are rarely detectable until many days after the damage. All significant mechanical injuries must therefore be kept under observation for days or even weeks.

Injuries from cold or heat

Among physical injuries are injuries caused by cold or heat. Prolonged exposure of tissue to freezing temperatures causes tissue damage known as frostbite. Several factors predispose to frostbite, such as malnutrition leading to a loss of the fatty layer under the skin, lack of adequate clothing, and any type of insufficiency of the peripheral blood vessels, all of which increase the loss of body heat.

When the entire body is exposed to low temperatures over a long period, the consequences can be severe. At first blood is diverted from the skin to deeper areas of the body, resulting in anoxia (lack of oxygen) and damage to the skin and the tissues under the skin, including the walls of the small vessels. This damage to the small blood vessels leads to swelling of the tissues beneath the skin as fluid seeps out of the vessels. When the exposure is prolonged, it leads eventually to cooling of the blood itself. Once this has occurred, the results are catastrophic. All the vital organs become affected, and death usually ensues.

Burns may be divided into three categories depending on severity. A first-degree burn is the least destructive and affects the most superficial layer of skin, the epidermis. Sunburn is an example of a first-degree burn. The symptoms are pain and some swelling. A second-degree burn is a deeper and hence more severe injury. It is characterized by blistering and often considerable edema (swelling). A third-degree burn is extremely serious; the entire thickness of the skin is destroyed, along with deeper structures such as muscles. Because the nerve endings are destroyed in such burns, the wound is surprisingly painless in the areas of worst involvement.

The outlook in burn injuries depends on the age of the victim and the percentage of total body area affected. Loss of fluid and electrolytes and infection associated with loss of skin are the major causes of burn mortality.

Electrical injuries

The injurious effects of an electrical current passing through the body are determined by its voltage, its amperage, and the resistance of the tissues in the pathway of the current. It must be emphasized that exposure to electricity can be harmful only if there is a contact point of entry and a discharge point through which the current leaves the body. If the body is well insulated against such passage, at the point of either entry or discharge, no current flows and no injury results. The voltage of current refers to its electromotive force, the amperage to its intensity. With high-voltage discharges, such as are encountered when an individual is struck by lightning, the major effect is to disrupt nervous impulses; death is usually caused by interruption of the regulatory impulses of the heart. In low-voltage currents, such as are more likely to be encountered in accidental exposure to house or industrial currents, death is more often due to the stimulation of nerve pathways that cause sustained contractions of muscles and may in this way block respiration. If the electrical shock does not produce immediate death, serious illness may result from the damage incurred by organs in the pathway of the electrical current passing through the body.

Pressure-change injuries

Physical injuries from pressure change are of two general types: (1) blast injury and (2) the effects of too-rapid changes in the atmospheric pressure in the environment. Blast injuries may be transmitted through air or water; their effect depends on the area of the body exposed to the blast. If it is an air blast, the entire body is subject to the strong wave of compression, which is followed immediately by a wave of lowered pressure. In effect the body is first violently squeezed and then suddenly overexpanded as the pressure waves move beyond the body. The chest or abdomen may suffer injuries from the compression, but it is the negative pressure following the wave that induces most of the damage, since overexpansion leads to rupture of the lungs and of other internal organs, particularly the intestines. If the blast injury is transmitted through water, the victim is usually floating, and only that part of the body underwater is exposed. An individual floating on the surface of the water may simply be popped out of the water like a cork and totally escape injury.

Decompression sickness is a disease caused by a too-rapid reduction in atmospheric pressure. Underwater divers, pilots of unpressurized aircraft, and persons who work underwater or below the surface of the Earth are subject to this disorder. As the atmospheric pressure lessens, dissolved gases in the tissues come out of solution. If this occurs slowly, the gases diffuse into the bloodstream and are eventually expelled from the body; if this occurs too quickly, bubbles will form in the tissues and blood. The oxygen in these bubbles is rapidly dissolved, but the nitrogen, which is a significant component of air, is less soluble and persists as bubbles of gas that block small blood vessels. Affected individuals suffer excruciating pain, principally in the muscles, which causes them to bend over in agony—hence the term “bends” used to describe this disorder.

Radiation injury

Radiation can result in both beneficial and dangerous biological effects. There are basically two forms of radiation: particulate, composed of very fast-moving particles (alpha and beta particles, neutrons, and deuterons), and electromagnetic radiation such as gamma rays and X-rays. From a biological point of view, the most important attribute of radiant energy is its ability to cause ionization—to form positively or negatively charged particles in the body tissues that it encounters, thereby altering and, in some cases, damaging the chemical composition of the cells. DNA is highly susceptible to ionizing radiation. Cells and tissues may therefore die because of damage to enzymes, because of the inability of the cell to survive with a defective complement of DNA, or because cells are unable to divide. The cell is most susceptible to irradiation during the process of division. The severity of radiation injury is dependent on the penetrability of the radiation, the area of the body exposed to radiation, and the duration of exposure, variables that determine the total amount of radiant energy absorbed.

When the radiation exposure is confined to a part of the body and is delivered in divided doses, a frequent practice in the treatment of cancer, its effect depends on the vulnerability of the cell types in the body to this form of energy. Some cells, such as those that divide actively, are particularly sensitive to radiation. In this category are the cells of the bone marrow, spleen, lymph nodes, sex glands (gonads), and lining of the stomach and intestines. In contrast, permanently nondividing cells of the body such as nerve and muscle cells are resistant to radiation. The goal of radiation therapy of tumours is to deliver a dosage to the tumours that is sufficient to destroy the cancer cells without too severely injuring the normal cells in the pathway of the radiation. Obviously, when an internal cancer is treated, the skin, underlying fat, muscles, and nearby organs are unavoidably exposed to the radiation. The possibility of delivering effective doses of radiation to the unwanted cancer depends on the ability of the normal cells to withstand the radiation. However, as is the case in drug therapy, radiation treatment is a two-edged sword with both positive and negative aspects.

Finally, there are probable deleterious effects of radiation in producing congenital malformations, certain leukemias, and possibly some genetic disorders.

Additional Information:

An injury is any physiological damage to living tissue caused by immediate physical stress. Injuries to humans can occur intentionally or unintentionally and may be caused by blunt trauma, penetrating trauma, burning, toxic exposure, asphyxiation, or overexertion. Injuries can occur in any part of the body, and different symptoms are associated with different injuries.

Treatment of a major injury is typically carried out by a health professional and varies greatly depending on the nature of the injury. Traffic collisions are the most common cause of accidental injury and injury-related death among humans. Injuries are distinct from chronic conditions, psychological trauma, infections, or medical procedures, though injury can be a contributing factor to any of these.

Several major health organizations have established systems for the classification and description of human injuries.

Occurrence

Injuries may be intentional or unintentional. Intentional injuries may be acts of violence against others or self-inflicted against one's own person. Accidental injuries may be unforeseeable, or they may be caused by negligence. In order, the most common types of unintentional injuries are traffic accidents, falls, drowning, burns, and accidental poisoning. Certain types of injuries are more common in developed countries or developing countries. Traffic injuries are more likely to kill pedestrians than drivers in developing countries. Scalding burns are more common in developed countries, while open-flame injuries are more common in developing countries.

As of 2021, approximately 4.4 million people are killed due to injuries each year worldwide, constituting nearly 8% of all deaths. 3.16 million of these injuries are unintentional, and 1.25 million are intentional. Traffic accidents are the most common form of deadly injury, causing about one-third of injury-related deaths. One-sixth are suicides, and one-tenth are homicides. Tens of millions of individuals require medical treatment for nonfatal injuries each year, and injuries are responsible for about 10% of all years lived with disability. Men are twice as likely to be killed through injury as women. In 2013, 367,000 children under the age of five died from injuries, down from 766,000 in 1990.

Classification systems

The World Health Organization (WHO) developed the International Classification of External Causes of Injury (ICECI). Under this system, injuries are classified by mechanism of injury, objects/substances producing injury, place of occurrence, activity when injured, the role of human intent, and additional modules. These codes allow the identification of distributions of injuries in specific populations and case identification for more detailed research on causes and preventive efforts.

The United States Bureau of Labor Statistics developed the Occupational Injury and Illness Classification System (OIICS). Under this system injuries are classified by nature, part of body affected, source and secondary source, and event or exposure. The OIICS was first published in 1992 and has been updated several times since. The Orchard Sports Injury and Illness Classification System (OSIICS), previously OSICS, is used to classify injuries to enable research into specific sports injuries.

The injury severity score (ISS) is a medical score to assess trauma severity. It correlates with mortality, morbidity, and hospitalization time after trauma. It is used to define the term major trauma (polytrauma), recognized when the ISS is greater than 15. The AIS Committee of the Association for the Advancement of Automotive Medicine designed and updates the scale.
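
The score itself is computed from Abbreviated Injury Scale (AIS) grades: the body is divided into regions, and the ISS is the sum of the squares of the three highest regional AIS grades, with an AIS of 6 anywhere setting the score to its maximum of 75. A minimal sketch:

# Injury severity score: sum of squares of the three worst regional AIS grades.
def injury_severity_score(region_ais):
    """region_ais: highest AIS grade (0-6) for each body region."""
    if any(grade == 6 for grade in region_ais):
        return 75                     # maximal (unsurvivable) injury caps the score
    worst_three = sorted(region_ais, reverse=True)[:3]
    return sum(grade * grade for grade in worst_three)

iss = injury_severity_score([3, 2, 4, 0, 1, 0])
print(iss, iss > 15)  # 29 True: classified as major trauma (polytrauma)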

Mechanisms:

Trauma

Traumatic injury is caused by an external object making forceful contact with the body, resulting in a wound. Major trauma is a severe traumatic injury that has the potential to cause disability or death. Serious traumatic injury most often occurs as a result of traffic collisions. Traumatic injury is the leading cause of death in people under the age of 45.

Blunt trauma injuries are caused by the forceful impact of an external object. Injuries from blunt trauma may cause internal bleeding and bruising from ruptured capillaries beneath the skin, abrasion from scraping against the superficial epidermis, lacerated tears on the skin or internal organs, or bone fractures. Crush injuries are a severe form of blunt trauma damage that apply large force to a large area over a longer period of time. Penetrating trauma injuries are caused by external objects entering the tissue of the body through the skin. Low-velocity penetration injuries are caused by sharp objects, such as stab wounds, while high-velocity penetration injuries are caused by ballistic projectiles, such as gunshot wounds or injuries caused by shell fragments. Perforated injuries result in an entry wound and an exit wound, while puncture wounds result only in an entry wound. Puncture injuries result in a cavity in the tissue.

Burns

Burn injury is caused by contact with extreme temperature, chemicals, or radiation. The effects of burns vary depending on the depth and size. Superficial or first-degree burns only affect the epidermis, causing pain for a short period of time. Superficial partial-thickness burns cause weeping blisters and require dressing. Deep partial-thickness burns are dry and less painful due to the burning away of the skin and require surgery. Full-thickness or third-degree burns affect the entire dermis and are susceptible to infection. Fourth-degree burns reach deep tissues such as muscles and bones, causing loss of the affected area.

Thermal burns are the most common type of burn, caused by contact with excessive heat, including contact with flame, contact with hot surfaces, or scalding burns caused by contact with hot water or steam. Frostbite is a type of burn caused by contact with excessive cold, causing cellular injury and deep tissue damage through the crystallization of water in the tissue. Friction burns are caused by friction with external objects, resulting in a burn and abrasion. Radiation burns are caused by exposure to ionizing radiation. Most radiation burns are sunburns caused by ultraviolet radiation or high exposure to radiation through medical treatments such as repeated radiography or radiation therapy.

Electrical burns are caused by contact with electricity as it enters and passes through the body. They are often deeper than other burns, affecting lower tissues as electricity penetrates the skin, and the full extent of an electrical burn is often obscured. They also cause extensive destruction of tissue at the entry and exit points. Electrical injuries in the home are often minor, while high-tension power cables cause serious electrical injuries in the workplace. Lightning strikes can also cause severe electrical injuries. Fatal electrical injuries are often caused by tetanic spasm inducing respiratory arrest or by interference with the heart causing cardiac arrest.

Chemical burns are caused by contact with corrosive substances such as acid or alkali. Chemical burns are rarer than most other burns, though there are many chemicals that can damage tissue. The most common chemical-related injuries are those caused by carbon monoxide, ammonia, chlorine, hydrochloric acid, and sulfuric acid. Some chemical weapons induce chemical burns, such as white phosphorus. Most chemical burns are treated with extensive application of water to remove the chemical contaminant, though some burn-inducing chemicals react with water to create more severe injuries. The ingestion of corrosive substances can cause chemical burns to the larynx and stomach.

Other mechanisms

Toxic injury is caused by the ingestion, inhalation, injection, or absorption of a toxin. This may occur through an interaction caused by a drug or the ingestion of a poison. Different toxins may cause different types of injuries, and many will cause injury to specific organs. Toxins in gases, dusts, aerosols, and smoke can be inhaled, potentially causing respiratory failure. Respiratory toxins can be released by structural fires, industrial accidents, domestic mishaps, or through chemical weapons. Some toxicants may affect other parts of the body after inhalation, such as carbon monoxide.

Asphyxia causes injury to the body from a lack of oxygen. It can be caused by drowning, inhalation of certain substances, strangulation, blockage of the airway, traumatic injury to the airway, apnea, and other means. The most immediate injury caused by asphyxia is hypoxia, which can in turn cause acute lung injury or acute respiratory distress syndrome as well as damage to the circulatory system. The most severe injury associated with asphyxiation is cerebral hypoxia and ischemia, in which the brain receives insufficient oxygen or blood, resulting in neurological damage or death. Specific injuries are associated with water inhalation, including alveolar collapse, atelectasis, intrapulmonary shunting, and ventilation perfusion mismatch. Simple asphyxia is caused by a lack of external oxygen supply. Systemic asphyxia is caused by exposure to a compound that prevents oxygen from being transported or used by the body. This can be caused by azides, carbon monoxide, cyanide, smoke inhalation, hydrogen sulfide, methemoglobinemia-inducing substances, opioids, or other systemic asphyxiants. Ventilation and oxygenation are necessary for treatment of asphyxiation, and some asphyxiants can be treated with antidotes.

Injuries of overuse or overexertion can occur when the body is strained through use, affecting the bones, muscles, ligaments, or tendons. Sports injuries are often overuse injuries such as tendinopathy. Over-extension of the ligaments and tendons can result in sprains and strains, respectively. Repetitive sedentary behaviors such as extended use of a computer or a physically repetitive occupation may cause a repetitive strain injury. Extended use of brightly lit screens may also cause eye strain.

Locations:

Abdomen

Abdominal trauma includes injuries to the stomach, intestines, liver, pancreas, kidneys, gallbladder, and spleen. Abdominal injuries are typically caused by traffic accidents, assaults, falls, and work-related injuries, and physical examination is often unreliable in diagnosing blunt abdominal trauma. Splenic injury can cause low blood volume or blood in the peritoneal cavity. The treatment and prognosis of splenic injuries are dependent on cardiovascular stability. The gallbladder is rarely injured in blunt trauma, occurring in about 2% of blunt abdominal trauma cases. Injuries to the gallbladder are typically associated with injuries to other abdominal organs. The intestines are susceptible to injury following blunt abdominal trauma. The kidneys are protected by other structures in the abdomen, and most injuries to the kidney are a result of blunt trauma. Kidney injuries typically cause blood in the urine.

Due to its location in the body, pancreatic injury is relatively uncommon but more difficult to diagnose. Most injuries to the pancreas are caused by penetrative trauma, such as gunshot wounds and stab wounds. Pancreatic injuries occur in under 5% of blunt abdominal trauma cases. The severity of pancreatic injury depends primarily on the amount of harm caused to the pancreatic duct.

The stomach is also well protected from injury by its heavy layering, its extensive blood supply, and its position relative to the rib cage. As with pancreatic injuries, most traumatic stomach injuries are caused by penetrative trauma, and most civilian weapons do not cause long-term tissue damage to the stomach. Blunt trauma injuries to the stomach are typically caused by traffic accidents. Ingestion of corrosive substances can cause chemical burns to the stomach.

Liver injury is the most common type of organ damage in cases of abdominal trauma. The liver's size and location in the body make injury relatively common compared with other abdominal organs, and blunt trauma injury to the liver is typically treated with nonoperative management. Liver injuries are rarely serious, though most injuries to the liver are concomitant with other injuries, particularly to the spleen, ribs, pelvis, or spinal cord. The liver is also susceptible to toxic injury, with overdose of paracetamol being a common cause of liver failure.

Face

Facial trauma may affect the eyes, nose, ears, or mouth. Nasal trauma is a common injury and the most common type of facial injury. Oral injuries are typically caused by traffic accidents or alcohol-related violence, though falls are a more common cause in young children. The primary concerns regarding oral injuries are that the airway is clear and that there are no concurrent injuries to other parts of the head or neck. Oral injuries may occur in the soft tissue of the face, the hard tissue of the mandible, or as dental trauma.

The ear is susceptible to trauma in head injuries due to its prominent location and exposed structure. Ear injuries may be internal or external. Injuries of the external ear are typically lacerations of the cartilage or the formation of a hematoma. Injuries of the middle and internal ear may include a perforated eardrum or trauma caused by extreme pressure changes. The ear is also highly sensitive to blast injury. The bones of the ear are connected to facial nerves, and ear injuries can cause paralysis of the face. Trauma to the ear can cause hearing loss.

Eye injuries often take place in the cornea, and they have the potential to permanently damage vision. Corneal abrasions are a common injury caused by contact with foreign objects. The eye can also be injured by a foreign object remaining in the cornea. Radiation damage can be caused by exposure to excessive light, often caused by welding without eye protection or being exposed to excessive ultraviolet radiation, such as sunlight. Exposure to corrosive chemicals can permanently damage the eyes, causing blindness if not sufficiently irrigated. The eye is protected from most blunt injuries by the infraorbital margin, but in some cases blunt force may cause an eye to hemorrhage or tear. Overuse of the eyes can cause eye strain, particularly when looking at brightly lit screens for an extended period.

Heart

Cardiac injuries affect the heart and blood vessels. Blunt cardiac injury is a common injury caused by blunt trauma to the heart. It can be difficult to diagnose, and it can have many effects on the heart, including contusions, ruptures, acute valvular disorders, arrhythmia, or heart failure. Penetrative trauma to the heart is typically caused by stab wounds or gunshot wounds. Accidental cardiac penetration can also occur in rare cases from a fractured sternum or rib. Stab wounds to the heart are typically survivable with medical attention, though gunshot wounds to the heart are not. The right ventricle is most susceptible to injury due to its prominent location. The two primary consequences of traumatic injury to the heart are severe hemorrhaging and fluid buildup around the heart.

Musculoskeletal

Musculoskeletal injuries affect the skeleton and the muscular system. Soft tissue injuries affect the skeletal muscles, ligaments, and tendons. Ligament and tendon injuries account for half of all musculoskeletal injuries. Ligament sprains and tendon strains are common injuries that do not require intervention, but the healing process is slow. Physical therapy can be used to assist in the rehabilitation and use of injured ligaments and tendons. Torn ligaments or tendons typically require surgery. Skeletal muscles are abundant in the body and commonly injured when engaging in athletic activity. Muscle injuries trigger an inflammatory response to facilitate healing. Blunt trauma to the muscles can cause contusions and hematomas. Excessive tensile force can overstretch a muscle, causing a strain. Strains may present with torn muscle fibers, hemorrhaging, or fluid in the muscles. Severe muscle injuries in which a tear extends across the muscle can cause total loss of function. Penetrative trauma can cause laceration to muscles, which may take an extended time to heal. Unlike contusions and strains, lacerations are uncommon in sports injuries.

Traumatic injury may cause various bone fractures depending on the amount of force, direction of the force, and width of the area affected. Pathologic fractures occur when a pre-existing condition weakens the bone until it can be easily fractured. Stress fractures occur when the bone is overused or subjected to excessive or traumatic stress, often during athletic activity. Hematomas occur immediately following a bone fracture, and the healing process often takes from six weeks to three months to complete, though continued use of the fractured bone will prevent healing. Articular cartilage damage may also affect function of the skeletal system, and it can cause posttraumatic osteoarthritis. Unlike most bodily structures, cartilage cannot be healed once it is damaged.

Nervous system

Injuries to the nervous system include brain injury, spinal cord injury, and nerve injury. Trauma to the brain causes traumatic brain injury (TBI), causing "long-term physical, emotional, behavioral, and cognitive consequences". Mild TBI, including concussion, often occurs during athletic activity, military service, or as a result of untreated epilepsy, and its effects are typically short-term. More severe injuries to the brain cause moderate TBI, which may cause confusion or lethargy, or severe TBI, which may result in a coma or a secondary brain injury. TBI is a leading cause of mortality. Approximately half of all trauma-related deaths involve TBI. Non-traumatic injuries to the brain cause acquired brain injury (ABI). This can be caused by stroke, a brain tumor, poison, infection, cerebral hypoxia, drug use, or the secondary effect of a TBI.

Injury to the spinal cord is rarely immediately fatal, but it is associated with concomitant injuries, lifelong medical complications, and reduction in life expectancy. It may result in complications in several major organ systems and a significant reduction in mobility or paralysis. Spinal shock causes temporary paralysis and loss of reflexes. Unlike most other injuries, damage to the peripheral nerves is not healed through cellular proliferation. Following nerve injury, the nerves undergo degeneration before regenerating, and other pathways can be strengthened or reprogrammed to make up for lost function. The most common form of peripheral nerve injury is stretching, owing to the nerves' inherent elasticity. Nerve injuries may also be caused by laceration or compression.

Pelvis

Injuries to the pelvic area include injuries to the bladder, rectum, colon, and reproductive organs. Traumatic injury to the bladder is rare and often occurs with other injuries to the abdomen and pelvis. The bladder is protected by the peritoneum, and most cases of bladder injury are concurrent with a fracture of the pelvis. Bladder trauma typically causes hematuria, or blood in the urine. Ingestion of alcohol may cause distension of the bladder, increasing the risk of injury. A catheter may be used to extract blood from the bladder in the case of hemorrhaging, though injuries that break the peritoneum typically require surgery. The colon is rarely injured by blunt trauma, with most cases occurring from penetrative trauma through the abdomen. Rectal injury is less common than injury to the colon, though the rectum is more susceptible to injury following blunt force trauma to the pelvis.

Injuries to the male reproductive system are rarely fatal and typically treatable through grafts and reconstruction. The elastic nature of the scrotum makes it resistant to injury, accounting for 1% of traumatic injuries. Trauma to the scrotum may cause damage to the testis or the spermatic cord. Trauma to the penis can cause penile fracture, typically as a result of vigorous intercourse. Injuries to the female reproductive system are often a result of pregnancy and childbirth or sexual activity. They are rarely fatal, but they can produce a variety of complications, such as chronic discomfort, dyspareunia, infertility, or the formation of fistulas. Age can greatly affect the nature of genital injuries in women due to changes in hormone composition. Childbirth is the most common cause of genital injury to women of reproductive age. Many cultures practice female genital mutilation, which is estimated to affect over 125 million women and girls worldwide as of 2018. Tears and abrasions to the vagina are common during sexual intercourse, and these may be exacerbated in instances of non-consensual sexual activity.

Respiratory tract

Injuries to the respiratory tract affect the lungs, diaphragm, trachea, bronchus, pharynx, or larynx. Tracheobronchial injuries are rare and often associated with other injuries. Bronchoscopy is necessary for an accurate diagnosis of tracheobronchial injury. The neck, including the pharynx and larynx, is highly vulnerable to injury due to its complex, compacted anatomy. Injuries to this area can cause airway obstruction. Ingestion of corrosive chemicals can cause chemical burns to the larynx. Inhalation of toxic materials can also cause serious injury to the respiratory tract.

Severe trauma to the chest can cause damage to the lungs, including pulmonary contusions, accumulation of blood, or a collapsed lung. The inflammation response to a lung injury can cause acute respiratory distress syndrome. Injuries to the lungs may cause symptoms ranging from shortness of breath to terminal respiratory failure. Injuries to the lungs are often fatal, and survivors often have a reduced quality of life. Injuries to the diaphragm are uncommon and rarely serious, but blunt trauma to the diaphragm can result in the formation of a hernia over time. Injuries to the diaphragm may present in many ways, including abnormal blood pressure, cardiac arrest, gastrointestinal obstruction, and respiratory insufficiency. Injuries to the diaphragm are often associated with other injuries in the chest or abdomen, and its position between two major cavities of the human body may complicate diagnosis.

Skin

Most injuries to the skin are minor and do not require specialist treatment. Lacerations of the skin are typically repaired with sutures, staples, or adhesives. The skin is susceptible to burns, and burns to the skin often cause blistering. Abrasive trauma scrapes or rubs off the skin, and severe abrasions require skin grafting to repair. Skin tears involve the removal of the epidermis or dermis through friction or shearing forces, often in vulnerable populations such as the elderly. Skin injuries are potentially complicated by foreign bodies such as glass, metal, or dirt that entered the wound, and skin wounds often require cleaning.

Treatment

Much of medical practice is dedicated to the treatment of injuries. Traumatology is the study of traumatic injuries and injury repair. Certain injuries may be treated by specialists. Serious injuries sometimes require trauma surgery. Following serious injuries, physical therapy and occupational therapy are sometimes used for rehabilitation. Medication is commonly used to treat injuries.

Emergency medicine during major trauma prioritizes the immediate consideration of life-threatening injuries that can be quickly addressed. The airway is evaluated, clearing bodily fluids with suctioning or creating an artificial airway if necessary. Breathing is evaluated by observing motion of the chest wall and checking for blood or air in the pleural cavity. Circulation is evaluated to resuscitate the patient, including the application of intravenous therapy. Disability is evaluated by checking for responsiveness and reflexes. Exposure is then used to examine the patient for external injury. Following immediate life-saving procedures, a CT scan is used for a more thorough diagnosis. Further resuscitation may be required, including ongoing blood transfusion, mechanical ventilation and nutritional support.

Pain management is another aspect of injury treatment. Pain serves as an indicator to determine the nature and severity of an injury, but it can also worsen an injury, reduce mobility, and affect quality of life. Analgesic drugs are used to reduce the pain associated with injuries, depending on the person's age, the severity of the injury, and previous medical conditions that may affect pain relief. NSAIDs such as aspirin and ibuprofen are commonly used for acute pain. Opioid medications such as fentanyl, methadone, and morphine are used to treat severe pain in major trauma, but their use is limited due to associated long-term risks such as addiction.

Complications

Complications may arise as a result of certain injuries, increasing the recovery time, further exacerbating the symptoms, or potentially causing death. The extent of the injury and the age of the injured person may contribute to the likelihood of complications. Infection of wounds is a common complication in traumatic injury, resulting in diagnoses such as pneumonia or sepsis. Wound infection prevents the healing process from taking place and can cause further damage to the body. A majority of wounds are contaminated with microbes from other parts of the body, and infection takes place when the immune system is unable to address this contamination. The surgical removal of devitalized tissue and the use of topical antimicrobial agents can prevent infection.

Hemorrhaging of blood is a common result of injuries, and it can cause several complications. Pooling of blood under the skin can cause a hematoma, particularly after blunt trauma or the suture of a laceration. Hematomas are susceptible to infection and are typically treated with compression, though surgery is necessary in severe cases. Excessive blood loss can cause hypovolemic shock, in which cellular oxygenation can no longer take place. This can cause tachycardia, hypotension, coma, or organ failure. Fluid replacement is often necessary to treat blood loss. Other complications of injuries include cavitation, development of fistulas, and organ failure.

Social and psychological aspects

Injuries often cause psychological harm in addition to physical harm. Traumatic injuries are associated with psychological trauma and distress, and some victims of traumatic injuries will display symptoms of post-traumatic stress disorder during and after recovery from the injury. The specific symptoms and their triggers vary depending on the nature of the injury. Body image and self-esteem can also be affected by injury. Injuries that cause permanent disabilities, such as spinal cord injuries, can have severe effects on self-esteem. Disfiguring injuries can negatively affect body image, leading to a lower quality of life. Burn injuries in particular can cause dramatic changes in a person's appearance that may negatively affect body image.

Severe injury can also cause social harm. Disfiguring injuries may also result in stigma due to scarring or other changes in appearance. Certain injuries may necessitate a change in occupation or prevent employment entirely. Leisure activities are similarly limited, and athletic activities in particular may be impossible following severe injury. In some cases, the effects of injury may strain personal relationships, such as marriages. Psychological and social variables have been found to affect the likelihood of injuries among athletes. Increased life stress can cause an increase in the likelihood of athletic injury, while social support can decrease the likelihood of injury. Social support also assists in the recovery process after athletic injuries occur.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

#2025 2024-01-11 00:34:05

Jai Ganesh
Administrator
Registered: 2005-06-28
Posts: 46,422

Re: Miscellany

2027) Cargo

Gist

Cargo, also known as freight, refers to goods or produce being transported from one place to another – by water, air or land. Originally, the term “cargo” referred to goods being loaded onboard a vessel.

Summary

Cargo, also known as freight, refers to goods or produce being transported from one place to another – by water, air or land. Originally, the term “cargo” referred to goods being loaded onboard a vessel. These days, however, cargo is used for all types of goods, including those carried by rail, van, truck, or intermodal container.

Though cargo means all goods onboard a transport vehicle, it does not include items such as personnel's baggage, goods in storage, or equipment and supplies carried onboard to support the transport. Cargo transport is mainly for commercial purposes, for which an air waybill, bill of lading or other receipt is issued by the carrier.

Cargo refers to goods carried by a large vehicle, like a plane, ship, train, or truck. See a giant truck on the highway piled high with boxes, lumber, or new cars? It's carrying cargo.

Cargo originates from the Latin word carricare which means "to load on a cart, or wagon." Cargo can be loaded on a cart, but it's usually loaded on something much bigger. On a ship, cargo is stacked up in huge, colorful metal containers. These containers can be full of all types of cargo, like food, furniture, or electronics. You'll rarely need the plural, but it's formed by adding an -s or more commonly, an -es (cargoes).

Details

In transportation, freight refers to goods conveyed by land, water or air, while cargo refers specifically to freight when conveyed via water or air. In economics, freight refers to goods transported at a freight rate for commercial gain. The term cargo is also used in the case of goods in the cold chain, because the perishable inventory is always in transit towards a final end-use, even when it is held in cold storage or other similar climate-controlled facilities, including warehouses.

Multi-modal container units, designed as reusable carriers to facilitate unit load handling of the goods contained, are also referred to as cargo, especially by shipping lines and logistics operators. When empty containers are shipped each unit is documented as a cargo and when goods are stored within, the contents are termed containerized cargo. Similarly, aircraft ULD boxes are also documented as cargo, with an associated packing list of the items contained within.

(ULD: Unit Load Device)

Description:

Marine

Break bulk / general cargo are goods that are handled and stowed piecemeal to some degree, as opposed to cargo in bulk or in modern shipping containers. Such goods are typically bundled in batches for hoisting, either with cargo nets, slings or crates, or stacked on trays, pallets or skids; at best (and today mostly) they are lifted directly into and out of a vessel's holds, but otherwise onto and off its deck, by cranes or derricks present on the dock or on the ship itself. If hoisted on deck instead of straight into the hold, liftable or rolling unit loads, like bags, barrels/vats, boxes, cartons and crates, then have to be man-handled and stowed competently by stevedores. Securing break bulk and general freight inside a vessel includes the use of dunnage. When no hoisting equipment was available, break bulk was previously man-carried on and off the ship, over a plank, or passed along a human chain. Since the 1960s, the volume of break bulk cargo has declined enormously worldwide in favour of the mass adoption of containers.

Bulk cargo, such as salt, oil, tallow and scrap metal, is usually defined as commodities that are neither on pallets nor in containers. Bulk cargoes are not handled as individual pieces, the way heavy-lift and project cargo are. Alumina, grain, gypsum, logs and wood chips, for instance, are bulk cargoes. Bulk cargo is classified as liquid or dry.

Air

Air cargo refers to any goods shipped by air, whereas air freight refers specifically to goods transported in the cargo hold of a dedicated cargo plane. Aircraft were first used to carry mail as cargo in 1911. Eventually manufacturers started designing aircraft for other types of freight as well.

There are many commercial aircraft suitable for carrying cargo such as the Boeing 747 and the more prominent An‑124, which was purposely built for easy conversion into a cargo aircraft. Such large aircraft employ standardized quick-loading containers known as unit load devices (ULDs), comparable to ISO containers on cargo ships. ULDs can be stowed in the lower decks (front and rear) of several wide-body aircraft, and on the main deck of some narrow-bodies. Some dedicated cargo planes have a large opening front for loading.

Air freight shipments are very similar to LTL shipments in terms of size and packaging requirements. However, air freight or air cargo shipments typically need to move much faster than the roughly 800 km (497 mi) per day typical of ground freight. While air shipments move faster than standard LTL, they do not always actually move by air. Air shipments may be booked directly with the carriers, through brokers or with online marketplace services. In the US, there are certain restrictions on cargo moving via air freight on passenger aircraft, most notably the transport of rechargeable lithium-ion battery shipments.

Shippers in the US must be approved and be "known" in the Known Shipper Management System before their shipments can be tendered on passenger aircraft.

Rail

Trains are capable of transporting a large number of containers that come from shipping ports. Trains are also used to transport water, cement, grain, steel, wood and coal. They are used because they can carry a large amount and generally have a direct route to the destination. Under the right circumstances, freight transport by rail is more economical and energy efficient than by road, mainly when carried in bulk or over long distances.

The main disadvantage of rail freight is its lack of flexibility. For this reason, rail has lost much of the freight business to road transport. Rail freight is often subject to transshipment costs, since it must be transferred from one mode of transportation to another. Practices such as containerization aim at minimizing these costs. When transporting point-to-point bulk loads such as cement or grain, with specialised bulk handling facilities at the rail sidings, the rail mode of transport remains the most convenient and preferred option.

Many governments are encouraging shippers to increase their use of rail rather than road transport because of trains' lower environmental impact.

Road

Many firms, like Parcelforce, FedEx and R+L Carriers, transport all types of cargo by road. Delivering everything from letters to houses to cargo containers, these firms offer fast, sometimes same-day, delivery.

A good example of road cargo is food, as supermarkets require daily deliveries to replenish their shelves with goods. Retailers and manufacturers of all kinds rely upon delivery trucks, be they full-size semi trucks or smaller delivery vans. These smaller road haulage companies constantly strive for the best routes and prices to ship out their products. Indeed, the level of commercial freight transported by smaller businesses is often a good barometer of healthy economic development, as these types of vehicles move and transport literally anything, including couriers transporting parcels and mail.

Less-than-truckload freight

Less than truckload (LTL) cargo is the first category of freight shipment, representing the majority of freight shipments and the majority of business-to-business (B2B) shipments. LTL shipments are also often referred to as motor freight and the carriers involved are referred to as motor carriers.

LTL shipments range from 50 to 7,000 kg (110 to 15,430 lb) and are usually less than 2.5 to 8.5 m (8 ft 2.4 in to 27 ft 10.6 in) in length. The average single piece of LTL freight is 600 kg (1,323 lb) and the size of a standard pallet. Long or oversized freight is subject to extreme-length and cubic-capacity surcharges.

Trailers used in LTL can range from 28 to 53 ft (8.53 to 16.15 m). The standard for city deliveries is usually 48 ft (14.63 m). In tight and residential environments the 28 ft (8.53 m) trailer is used the most.

The shipments are usually palletized, stretch- or shrink-wrapped and packaged for a mixed-freight environment. Unlike express or parcel, LTL shippers must provide their own packaging, as carriers do not provide any packaging supplies or assistance. However, circumstances may require crating or other substantial packaging.

Truckload freight

In the United States, shipments larger than about 7,000 kg (15,432 lb) are typically classified as truckload (TL) freight. This is because it is more efficient and economical for a large shipment to have exclusive use of one larger trailer rather than share space on a smaller LTL trailer.

Under the Federal Bridge Gross Weight Formula, the total weight of a loaded truck (tractor and trailer, 5-axle rig) cannot exceed 80,000 lb (36,287 kg) in the United States. In ordinary circumstances, long-haul equipment weighs about 15,000 kg (33,069 lb), leaving about 20,000 kg (44,092 lb) of freight capacity. Similarly, a load is limited to the space available in the trailer, normally 48 ft (14.63 m) or 53 ft (16.15 m) long, 2.6 m (102 3⁄8 in) wide, and 9 ft 0 in (2.74 m) high, with an overall vehicle height of 13 ft 6 in (4.11 m).
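
As a quick sanity check on the arithmetic above, here is a minimal Python sketch; the 80,000 lb cap and the 15,000 kg equipment weight are the illustrative figures quoted in this paragraph, not authoritative carrier data.

# Deriving usable freight capacity from the US federal gross weight limit.

LB_PER_KG = 2.20462

gross_limit_kg = 80_000 / LB_PER_KG   # federal cap for a 5-axle rig, ~36,287 kg
equipment_kg = 15_000                 # typical long-haul tractor plus trailer

freight_capacity_kg = gross_limit_kg - equipment_kg
print(f"Gross limit:      {gross_limit_kg:,.0f} kg")
print(f"Freight capacity: {freight_capacity_kg:,.0f} kg")  # ~21,300 kg, i.e. "about 20,000 kg"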

While express, parcel and LTL shipments are always intermingled with other shipments on a single piece of equipment and are typically reloaded across multiple pieces of equipment during their transport, TL shipments usually travel as the only shipment on a trailer. In fact, TL shipments usually deliver on exactly the same trailer as they are picked up on.

Shipment categories

Freight is usually organized into various shipment categories before it is transported. An item's category is determined by:

* the type of item being carried. For example, a kettle could fit into the category 'household goods'.
* how large the shipment is, in terms of both item size and quantity.
* how long the item for delivery will be in transit.

Shipments are typically categorized as household goods, express, parcel, and freight shipments; a rough weight-based classification sketch follows the list:

* Household goods (HHG) include furniture, art and similar items.
* Express: Very small business or personal items like envelopes are considered overnight express or express letter shipments. These shipments are rarely over a few kilograms or pounds and almost always travel in the carrier's own packaging. Express shipments almost always travel some distance by air. An envelope may go coast to coast in the United States overnight or it may take several days, depending on the service options and prices chosen by the shipper.
* Parcel: Larger items like small boxes are considered parcels or ground shipments. These shipments are rarely over 50 kg (110 lb), with no single piece of the shipment weighing more than about 70 kg (154 lb). Parcel shipments are always boxed, sometimes in the shipper's packaging and sometimes in carrier-provided packaging. Service levels are again variable but most ground shipments will move about 800 to 1,100 km (497 to 684 mi) per day. Depending on the package's origin, it can travel from coast to coast in the United States in about four days. Parcel shipments rarely travel by air and typically move via road and rail. Parcels represent the majority of business-to-consumer (B2C) shipments.
* Freight: Beyond HHG, express, and parcel shipments, movements are termed freight shipments.
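
To make the weight thresholds above concrete, here is a toy Python sketch that sorts a shipment into these categories by weight alone. It is illustrative only: household goods are defined by item type rather than weight, and real carriers also consider dimensions, packaging, and freight class. The cutoffs are the rough figures quoted in this post.

# Toy classifier using only the rough weight cutoffs quoted in this post.
# Illustrative assumption: anything under a few kilograms counts as "express".

def shipment_category(weight_kg: float) -> str:
    if weight_kg < 3:           # envelopes and other very small express items
        return "express"
    if weight_kg <= 50:         # boxed ground shipments
        return "parcel"
    if weight_kg <= 7_000:      # less-than-truckload motor freight
        return "LTL freight"
    return "truckload (TL) freight"

for w in (0.5, 20, 600, 15_000):
    print(f"{w:>8} kg -> {shipment_category(w)}")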

Shipping costs

An LTL shipper often realizes savings by utilizing a freight broker, online marketplace or another intermediary, instead of contracting directly with a trucking company. Brokers can shop the marketplace and obtain lower rates than most smaller shippers can obtain directly. In the LTL marketplace, intermediaries typically receive 50% to 80% discounts from published rates, whereas a small shipper may only be offered a 5% to 30% discount by the carrier. Intermediaries are licensed by the DOT and are required to provide proof of insurance.
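
For illustration only, the following Python sketch shows how those discount ranges play out on a hypothetical $1,000 published rate; the rate and the midpoint discounts are assumptions, not market data.

# Hypothetical comparison of intermediary vs. direct LTL pricing.

published_rate = 1_000.00  # made-up published tariff for illustration

broker_cost = published_rate * (1 - 0.65)    # midpoint of the 50-80% intermediary discount
direct_cost = published_rate * (1 - 0.175)   # midpoint of the 5-30% small-shipper discount

print(f"Via intermediary: ${broker_cost:,.2f}")  # $350.00
print(f"Booking direct:   ${direct_cost:,.2f}")  # $825.00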

Truckload (TL) carriers usually charge a rate per kilometre or mile. The rate varies depending on the distance, geographic location of the delivery, items being shipped, equipment type required, and service times required. TL shipments usually receive a variety of surcharges very similar to those described for LTL shipments above. There are thousands more small carriers in the TL market than in the LTL market. Therefore, the use of transportation intermediaries or brokers is widespread.

Another cost-saving method is facilitating pickups or deliveries at the carrier's terminals. Carriers or intermediaries can provide shippers with the address and phone number for the closest shipping terminal to the origin and/or destination. By doing this, shippers avoid any accessorial fees that might normally be charged for liftgate, residential pickup/delivery, inside pickup/delivery, or notifications/appointments.

Shipping experts optimize their service and costs by sampling rates from several carriers, brokers and online marketplaces. When obtaining rates from different providers, shippers may find a wide range in the pricing offered. If a shipper in the United States uses a broker, freight forwarder or another transportation intermediary, it is common for the shipper to receive a copy of the carrier's Federal Operating Authority. Freight brokers and intermediaries are also required by Federal Law to be licensed by the Federal Highway Administration. Experienced shippers avoid unlicensed brokers and forwarders because if brokers are working outside the law by not having a Federal Operating License, the shipper has no protection in case of a problem. Also, shippers typically ask for a copy of the broker's insurance certificate and any specific insurance that applies to the shipment.

Overall, shipping costs have fallen over the past decades. A further drop in shipping costs in the future might be realized through the application of improved 3D printing technologies.

Security concerns

Governments are very concerned with cargo shipment, as it may bring security risks to a country. Therefore, many governments have enacted rules and regulations, administered by a customs agency, for the handling of cargo to minimize risks of terrorism and other crime. Governments are mainly concerned with cargo entering through a country's borders.

The United States has been one of the leaders in securing cargo, viewing it as a matter of national security. After the terrorist attacks of September 11th, attention has focused on the security of the more than 6 million cargo containers that enter United States ports each year. The latest US Government response to this threat is the Container Security Initiative (CSI), a program intended to help increase security for containerized cargo shipped to the United States from around the world. Europe is also focusing on this issue, with several EU-funded projects underway.

Stabilization

Many ways and materials are available to stabilize and secure cargo in various modes of transport. Conventional load securing methods and materials such as steel strapping and plastic/wood blocking and bracing have been used for decades and are still widely used. Present load-securing methods offer several other options, including polyester strapping and lashing, synthetic webbings and dunnage bags, also known as airbags or inflatable bags.

Practical advice on stabilization is given in the International Guidelines on Safe Load Securing for Road Transport.

Additional Information

The difference between cargo and freight has long confused those involved in the transportation industry. While both terms have similar meanings and are closely associated with transporting goods, experts claim that a difference indeed exists. How can we differentiate between cargo and freight, what meanings do these terms hide, and how can we use them correctly? Here is a comprehensive answer that will clear away any doubts you might have concerning these terms.

Where to Start?

If you are looking to define the difference between cargo and freight, start from the beginning — the meaning of these words. You will notice at a glance that they imply different types of transport as well as different types of goods. Still, even those with years of experience in this business continue using these terms interchangeably without trying to perceive the difference. A great majority of them question whether it is indeed necessary to pay close attention to this issue and how it affects doing business, if at all. On the other hand, there are those who insist on finding out the answer, so for them, the explanation follows.

Freight Has a Wide Range of Meanings

Freight has a considerably wide range of meanings and is frequently used in the transportation and trade industries. The term refers to commercial goods only, which may be one of the crucial differences when compared with the term cargo.

Generally, freight is associated with the volumes of goods transported via truck or train. As genuine professionals in your business, you certainly use the words freight trucks or freight trains regularly, and this undoubtedly proves that the previously mentioned statement is correct. However, the term freight has a few more meanings, and this is where the problems of perceiving the difference between cargo and freight commence.

Namely, freight can be used for almost any cargo transported by train, truck, plane, or ship. Mail is the only type of cargo that does not belong in this group because, as we have already stated, freight refers to commercial goods only.

Finally, very often in the transportation industry, the term freight is used to refer to a charge for transportation services (with the word rate added to form the term freight rate). Let's see that usage of freight in action, juxtaposed with its other usage, in the kind of international shipping analysis typical of the industry:

In January this year, experts believed two major factors would affect freight rates in 2020. The behavior of freight rates was expected to be influenced greatly by overcapacity and IMO 2020 this year, but then the COVID-19 pandemic completely changed everything, greatly affecting the quantity of freight being shipped and the number of sailings taking place while crashing oil prices.

What Precisely Does Cargo Refer To?

Similarly to freight, we also use cargo for volumes of goods, but those transported via plane or ship, hence the terms cargo ships and cargo planes. The term can be used for both commercial and personal goods and, unlike freight, can refer to mail as well. Most often, cargo is transported in large containers, and the appearance of smart containers in the transport industry will significantly help shipping companies meet even the most demanding requests of their clients. The risks of losing or damaging the cargo in a container will be reduced to a minimum.

Another difference worth mentioning and emphasizing is that cargo is not normally used for any kind of transportation fee charged by the carrier. It is only related to goods and not money. If you need to talk about finances and transportation fees, the above-mentioned freight rate is the term you need.

Freight and Cargo – Contemporary Usage

Despite the examples we have provided, it is highly possible that the difference between freight and cargo will disappear in the near future. The fact that more and more people use these terms interchangeably proves that the difference is already blurred. Even delving deeply into this problem and consulting dictionaries will show that the difference is almost non-existent. However, those who still want to be on the safe side should study the above-mentioned rules in detail and use the terms freight and cargo in accordance with them.

What Do Freight and Cargo Have in Common?

Bearing in mind the blurred difference between cargo and freight, it turns out these terms have a lot in common. Both terms refer to transporting goods. While freight is strictly associated with transporting commercial goods in the import and export business, for example, cargo can be used for personal items you need to transport, whether for moving or for some other reason.

While we use cargo for the goods only, freight can also have a financial connotation. Most probably, freight rate trends are something you need to follow at all times if involved in the transportation industry. Furthermore, they are one of the factors to evaluate when choosing a freight forwarding company.

Finally, freight can be used for all types of cargo transported by truck, train, ship or plane. Mail is the only type of cargo that does not belong to this immense group.

It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.

Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
