2276) Gastroenteritis
Gist
Gastroenteritis means inflammation in your stomach and intestine. Inflammation makes these organs feel swollen and sore. It causes symptoms of illness, like nausea, vomiting, abdominal pain and diarrhea. Gastroenteritis often happens when you get an infection in your gastrointestinal (GI) tract.
Summary
Gastroenteritis is an acute infectious syndrome of the stomach lining and the intestine. It is characterized by diarrhea, vomiting, and abdominal cramps. Other symptoms can include nausea, fever, and chills. The severity of gastroenteritis varies from a sudden but transient attack of diarrhea to severe dehydration.
Numerous viruses, bacteria, and parasites can cause gastroenteritis. Microorganisms cause gastroenteritis by secreting toxins that stimulate excessive water and electrolyte loss, thereby causing watery diarrhea, or by directly invading the walls of the gut, triggering inflammation that upsets the balance between the absorption of nutrients and the secretion of wastes.
Viral gastroenteritis, or viral diarrhea, is perhaps the most common type of diarrhea worldwide; rotaviruses, caliciviruses, Norwalk viruses, and adenoviruses are the most common causes. Other forms of gastroenteritis include food poisoning, cholera, and traveler’s diarrhea, which develops within a few days after traveling to a country or region that has unsanitary water or food. Traveler’s diarrhea is caused by exposure to enterotoxin-producing strains of the common intestinal bacterium Escherichia coli.
Details
Gastroenteritis, also known as infectious diarrhea, is an inflammation of the gastrointestinal tract including the stomach and intestine. Symptoms may include diarrhea, vomiting, and abdominal pain. Fever, lack of energy, and dehydration may also occur. This typically lasts less than two weeks. Although it is not related to influenza, in the U.S. and U.K., it is sometimes called the "stomach flu".
Gastroenteritis is usually caused by viruses; however, gut bacteria, parasites, and fungi can also cause gastroenteritis. In children, rotavirus is the most common cause of severe disease. In adults, norovirus and Campylobacter are common causes. Eating improperly prepared food, drinking contaminated water or close contact with a person who is infected can spread the disease. Treatment is generally the same with or without a definitive diagnosis, so testing to confirm is usually not needed.
For young children in impoverished countries, prevention includes hand washing with soap, drinking clean water, breastfeeding babies instead of using formula, and proper disposal of human waste. The rotavirus vaccine is recommended as a prevention for children. Treatment involves getting enough fluids. For mild or moderate cases, this can typically be achieved by drinking oral rehydration solution (a combination of water, salts and sugar). In those who are breastfed, continued breastfeeding is recommended. For more severe cases, intravenous fluids may be needed. Fluids may also be given by a nasogastric tube. Zinc supplementation is recommended in children. Antibiotics are generally not needed. However, antibiotics are recommended for young children with a fever and bloody diarrhea.
In 2015, there were two billion cases of gastroenteritis, resulting in 1.3 million deaths globally. Children and those in the developing world are affected the most. In 2011, there were about 1.7 billion cases, resulting in about 700,000 deaths of children under the age of five. In the developing world, children less than two years of age frequently get six or more infections a year. It is less common in adults, partly due to the development of immunity.
Signs and symptoms
Gastroenteritis usually involves both diarrhea and vomiting. Sometimes, only one or the other is present. This may be accompanied by abdominal cramps. Signs and symptoms usually begin 12–72 hours after contracting the infectious agent. If due to a virus, the condition usually resolves within one week. Some viral infections also involve fever, fatigue, headache and muscle pain. If the stool is bloody, the cause is less likely to be viral and more likely to be bacterial. Some bacterial infections cause severe abdominal pain and may persist for several weeks.
Children infected with rotavirus usually make a full recovery within three to eight days. However, in poor countries treatment for severe infections is often out of reach, and persistent diarrhea is common. Dehydration is a common complication of diarrhea. Severe dehydration in children may be recognized when pressed skin regains its color slowly ("prolonged capillary refill") and when a pinched fold of skin is slow to flatten back out ("poor skin turgor"). Abnormal breathing is another sign of severe dehydration. Repeat infections are typically seen in areas with poor sanitation and malnutrition. Stunted growth and long-term cognitive delays can result.
Reactive arthritis occurs in 1% of people following infections with Campylobacter species. Guillain–Barré syndrome occurs in 0.1%. Hemolytic uremic syndrome (HUS) may occur due to infection with Shiga toxin-producing Escherichia coli or Shigella species. HUS causes low platelet counts, poor kidney function, and low red blood cell count (due to their breakdown). Children are more predisposed to getting HUS than adults. Some viral infections may produce benign infantile seizures.
Additional Information
Gastroenteritis is when your stomach and intestines are irritated and inflamed. This can cause belly pain, cramping, nausea, vomiting, and diarrhea. The cause is typically inflammation triggered by your immune system's response to a viral or bacterial infection. However, infections caused by fungi or parasites or irritation from chemicals can also lead to gastroenteritis.
You may have heard the term "stomach flu." When people say this, they usually mean gastroenteritis caused by a virus. However, it's not actually related to the flu, or influenza, which is a different virus that affects your upper respiratory system (nose, throat, and lungs).
Gastroenteritis Symptoms
Gastroenteritis symptoms often start with little warning. You'll usually get nausea, cramps, diarrhea, and vomiting. Expect to make several trips to the toilet in rapid succession. Other symptoms tend to develop a little later on and include:
* Belly pain
* Loss of appetite
* Chills
* Fatigue
* Body aches
* Fever
Because of diarrhea and vomiting, you also can become dehydrated. Watch for signs of dehydration, such as dry skin, a dry mouth, feeling lightheaded, and being really thirsty. Call your doctor if you have any of these symptoms.
How long does gastroenteritis last?
It depends on what caused it, but generally acute gastroenteritis lasts less than 14 days, persistent gastroenteritis lasts between 14 and 30 days, and chronic gastroenteritis lasts more than 30 days.
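As a rough illustration of those duration cut-offs, here is a minimal sketch in Python (the function name and structure are purely illustrative; the thresholds are the figures quoted above):

```python
def classify_gastroenteritis(duration_days):
    """Classify gastroenteritis by how long symptoms have lasted.

    Thresholds follow the figures quoted above: acute (< 14 days),
    persistent (14-30 days), chronic (> 30 days).
    """
    if duration_days < 14:
        return "acute"
    elif duration_days <= 30:
        return "persistent"
    return "chronic"

print(classify_gastroenteritis(5))    # acute
print(classify_gastroenteritis(21))   # persistent
print(classify_gastroenteritis(45))   # chronic
```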
Stomach Flu and Children
Children and infants can get dehydrated quickly. If they do, they need to go to the doctor as soon as possible. Some signs of dehydration in kids include:
* Sunken soft spot on your baby's head
* Sunken eyes
* Dry mouth
* No tears come out when they cry
* Not peeing or peeing very little
* Low alertness and energy (lethargy)
* Irritability
When caused by an infection — most often a virus — gastroenteritis is contagious. Young kids are more likely to have severe symptoms. Keep children with gastroenteritis out of day care or school until all their symptoms are gone.
Two vaccines are available by mouth to help protect children from infection with one of the most common causes of viral gastroenteritis: rotavirus. The two vaccines are called RotaTeq and Rotarix. Kids can get them starting at 2 months of age. Ask your doctor if your child should get a vaccine.
Check with your doctor before giving your child any medicine. Doctors don't usually recommend giving kids younger than 5 years over-the-counter drugs to control vomiting. They also don't recommend giving kids younger than 12 drugs to control diarrhea (some doctors won't recommend them for people under 18).
2277) Refinery/Oil Refinery - I
Gist
A refinery is a facility where raw materials are converted into some valuable substance by having impurities removed. At an oil refinery, crude oil is treated and made into gasoline and other petroleum products.
Whenever a material needs to have unwanted parts removed in order to be made into a usable product, it must be refined — clarified or processed. This is done at a plant called a refinery. A sugar refinery, for example, converts sugar cane or beets into familiar white, refined crystals of sugar. Refinery comes from refine, which is rooted in the now-obsolete verb fine, "make fine."
Summary
What do refineries do?
Petroleum refineries convert (refine) crude oil into petroleum products for use as fuels for transportation, heating, paving roads, and generating electricity and as feedstocks for making chemicals. Refining breaks crude oil down into its various components, which are then selectively reconfigured into new products.
A refinery is a production facility composed of a group of chemical engineering unit processes and unit operations refining certain materials or converting raw material into products of value.
Types of refineries
Different types of refineries are as follows:
* Petroleum oil refinery, which converts crude oil into high-octane motor spirit (gasoline/petrol), diesel oil, liquefied petroleum gases (LPG), kerosene, heating fuel oils, hexane, lubricating oils, bitumen, and petroleum coke
* Edible oil refinery, which converts cooking oil into a product that is uniform in taste, smell, appearance and stability
* Natural gas processing plant, which purifies and converts raw natural gas into residential, commercial and industrial fuel gas, and also recovers natural gas liquids (NGL) such as ethane, propane, butanes and pentanes
* Sugar refinery, which converts sugar cane and sugar beets into crystallized sugar and sugar syrups
* Salt refinery, which cleans common salt (NaCl), produced by the solar evaporation of sea water, followed by washing and re-crystallization
* Metal refineries refining metals such as alumina, copper, gold, lead, nickel, silver, uranium, zinc, magnesium and cobalt
* Iron refining, a stage of refining pig iron (typically grey cast iron to white cast iron), before fining, which converts pig iron into bar iron or steel.
Details
The refining process begins with crude oil.
Crude oil is unrefined liquid petroleum. Crude oil is composed of thousands of different chemical compounds called hydrocarbons, all with different boiling points. Science — combined with an infrastructure of pipelines, refineries, and transportation systems - enables crude oil to be transformed into useful and affordable products.
Refining turns crude oil into usable products.
Petroleum refining separates crude oil into components used for a variety of purposes. The crude petroleum is heated and the hot gases are passed into the bottom of a distillation column. As they move up the column, the gases cool below their boiling points and condense into liquids. The liquids are then drawn off the distillation column at specific heights to obtain fuels like gasoline, jet fuel and diesel fuel.
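As a very simplified sketch of how components report to different fractions according to their boiling points, consider the Python snippet below. The temperature cut-offs are approximate, commonly quoted ranges and are assumptions for illustration, not values from this article:

```python
# Approximate atmospheric-distillation cut points (illustrative assumptions).
FRACTION_CUTS = [
    ("refinery gases",               -50),   # boils below ~25 degrees C
    ("gasoline (naphtha)",            25),   # ~25-175 degrees C
    ("kerosene / jet fuel",          175),   # ~175-250 degrees C
    ("diesel / gas oil",             250),   # ~250-350 degrees C
    ("residue (fuel oil, bitumen)",  350),   # above ~350 degrees C
]

def fraction_for(boiling_point_c):
    """Return the fraction a component with this boiling point is drawn off with."""
    name = FRACTION_CUTS[0][0]
    for fraction, lower_bound in FRACTION_CUTS:
        if boiling_point_c >= lower_bound:
            name = fraction
    return name

print(fraction_for(110))   # gasoline (naphtha)
print(fraction_for(300))   # diesel / gas oil
```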
The liquids are processed further to make more gasoline or other finished products.
Some of the liquids undergo additional processing after the distillation process to create other products. These processes include: cracking, which is breaking down large molecules of heavy oils; reforming, which is changing molecular structures of low-quality gasoline molecules; and isomerization, which is rearranging the atoms in a molecule so that the product has the same chemical formula but has a different structure. These processes ensure that every drop of crude oil in a barrel is converted into a usable product.
What Is an Oil Refinery?
An oil refinery is an industrial plant that transforms, or refines, crude oil into various usable petroleum products such as diesel, gasoline, and heating oils like kerosene. Oil refineries essentially serve as the second stage in the crude oil production process, following the actual extraction of crude oil upstream, and refinery services are considered a downstream segment of the oil and gas industry.
The first step in the refining process is distillation, where crude oil is heated at extreme temperatures to separate the different hydrocarbons.
Key Takeaways
* An oil refinery is a facility that takes crude oil and distills it into various useful petroleum products such as gasoline, kerosene, or jet fuel.
* Refining is classified as a downstream operation of the oil and gas industry, although many integrated oil companies will operate both extraction and refining services.
* Refineries and oil traders look to the crack spread (the difference between the price of crude oil and the market prices of the petroleum products refined from it) in the derivatives market to hedge their exposure to crude oil prices.
Understanding Oil Refineries
Oil refineries serve an important role in the production of transportation and other fuels. The crude oil components, once separated, can be sold to different industries for a broad range of purposes. Lubricants can be sold to industrial plants immediately after distillation, but other products require more refining before reaching the final user. Major refineries have the capacity to process hundreds of thousands of barrels of crude oil daily.
In the industry, the refining process is commonly called the "downstream" sector, while raw crude oil production is known as the "upstream" sector. The term downstream is associated with the concept that oil is sent down the product value chain to an oil refinery to be processed into fuel. The downstream stage also includes the actual sale of petroleum products to other businesses, governments, or private individuals.
According to the U.S. Energy Information Administration (EIA), U.S. refineries produce—from a 42-gallon barrel of crude oil—19 to 20 gallons of motor gasoline, 11 to 12 gallons of distillate fuel (most of which is sold as diesel), and four gallons of jet fuel.
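To put those yield figures in perspective, here is the simple arithmetic, using the midpoints of the EIA ranges quoted above; the remainder of the barrel goes to other products and refinery processing gain:

```python
BARREL_GALLONS = 42  # one U.S. barrel of crude oil

# Midpoints of the EIA yield ranges quoted above (gallons per barrel).
yields = {
    "motor gasoline":  19.5,
    "distillate fuel": 11.5,
    "jet fuel":         4.0,
}

for product, gallons in yields.items():
    print(f"{product:15s}: {gallons:5.1f} gal ({gallons / BARREL_GALLONS:.0%} of the barrel)")

# Everything else (other products plus processing gain).
other = BARREL_GALLONS - sum(yields.values())
print(f"{'other products':15s}: {other:5.1f} gal equivalent")
```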
More than a dozen other petroleum products are also produced in refineries. Petroleum refineries produce liquids the petrochemical industry uses to make a variety of chemicals and plastics.
2278) Refinery/Oil Refinery - II
Gist
What is oil refinery?
Petroleum refineries convert (refine) crude oil into petroleum products for use as fuels for transportation, heating, paving roads, and generating electricity and as feedstocks for making chemicals.
Summary
An oil refinery or petroleum refinery is an industrial process plant where petroleum (crude oil) is transformed and refined into products such as gasoline (petrol), diesel fuel, asphalt base, fuel oils, heating oil, kerosene, liquefied petroleum gas and petroleum naphtha. Petrochemical feedstocks like ethylene and propylene can also be produced directly by cracking crude oil, without the need to first produce refined intermediates such as naphtha. The crude oil feedstock has typically been processed by an oil production plant. There is usually an oil depot at or near an oil refinery for the storage of incoming crude oil feedstock as well as bulk liquid products. In 2020, the total capacity of global refineries for crude oil was about 101.2 million barrels per day.
Oil refineries are typically large, sprawling industrial complexes with extensive piping running throughout, carrying streams of fluids between large chemical processing units, such as distillation columns. In many ways, oil refineries use many different technologies and can be thought of as types of chemical plants. Since December 2008, the world's largest oil refinery has been the Jamnagar Refinery owned by Reliance Industries, located in Gujarat, India, with a processing capacity of 1.24 million barrels (197,000 m^3) per day.
Oil refineries are an essential part of the petroleum industry's downstream sector.
Details:
"Cracking" Crude Oil
An oil refinery runs 24 hours a day, 365 days a year, and requires a large number of employees. Refineries go offline or stop working for a few weeks each year to undergo seasonal maintenance and other repair work. A refinery can occupy as much land as several hundred football fields. Well-known oil refining companies include the Koch Pipeline Company, among many others.
Crack or crack spread is a trading strategy used in energy futures to establish a refining margin. Crack is one primary indicator of oil refining companies' earnings. Crack allows refining companies to hedge against the risks associated with crude oil and those associated with petroleum products. By simultaneously purchasing crude oil futures and selling petroleum product futures, a trader is attempting to establish an artificial position in the refinement of oil created through a spread.
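A common benchmark for this margin is the 3-2-1 crack spread, which assumes three barrels of crude are refined into two barrels of gasoline and one barrel of distillate. The sketch below shows the standard arithmetic; the prices are made-up illustrative numbers, not market data:

```python
GALLONS_PER_BARREL = 42

def crack_spread_321(crude_usd_bbl, gasoline_usd_gal, distillate_usd_gal):
    """Approximate 3-2-1 crack spread in USD per barrel of crude.

    Assumes 3 barrels of crude yield 2 barrels of gasoline and 1 barrel
    of distillate (a common benchmark ratio, not an actual refinery yield).
    """
    product_revenue = (2 * gasoline_usd_gal + 1 * distillate_usd_gal) * GALLONS_PER_BARREL
    crude_cost = 3 * crude_usd_bbl
    return (product_revenue - crude_cost) / 3

# Illustrative prices only.
print(f"crack spread: {crack_spread_321(80.00, 2.40, 2.60):.2f} USD/bbl")
```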
Important: The Nelson Complexity Index (NCI) is a measure of the sophistication of an oil refinery, where more complex refineries are able to produce lighter, more heavily refined and valuable products from a barrel of oil.
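As a rough sketch of how the NCI is computed, each downstream unit's capacity is weighted by a complexity factor and divided by the refinery's atmospheric distillation capacity, which itself carries a factor of 1. The unit names, factors and capacities below are illustrative assumptions, not published values:

```python
def nelson_complexity_index(atm_distillation_bpd, units):
    """Compute a Nelson Complexity Index.

    units maps unit name -> (capacity in barrels/day, complexity factor).
    Atmospheric distillation itself contributes a factor of 1.
    """
    index = 1.0  # atmospheric distillation unit
    for capacity_bpd, factor in units.values():
        index += factor * capacity_bpd / atm_distillation_bpd
    return index

# Illustrative example refinery (all numbers are assumptions).
example_units = {
    "vacuum distillation":      (60_000, 2.0),
    "fluid catalytic cracking": (35_000, 6.0),
    "catalytic reforming":      (20_000, 5.0),
}
print(f"NCI = {nelson_complexity_index(100_000, example_units):.1f}")
```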
The proportions of petroleum products a refinery produces from crude oil can also affect crack spreads. Some of these products include asphalt, aviation fuel, diesel, gasoline, and kerosene. In some cases, the proportion produced varies based on demand from the local market.
The mix of products also depends on the kind of crude oil processed. Heavier crude oils are more difficult to refine into lighter products like gasoline. Refineries that use simpler refining processes may be restricted in their ability to produce products from heavy crude oil.
Refinery Services
Oil refining is a purely downstream function, although many of the companies doing it have midstream and even upstream production. This integrated approach to oil production allows companies like Exxon (XOM), Shell (RDS.A), and Chevron (CVX) to take oil from exploration all the way to sale. The refining side of the business is actually hurt by high prices, because demand for many petroleum products, including gas, is price sensitive. However, when oil prices drop, selling value-added products becomes more profitable. Refining pure plays include Marathon Petroleum Corporation (MPC), CVR Energy Inc. (CVI), and Valero Energy Corp (VLO).
One area service companies and refiners agree on is creating more pipeline capacity and transport. Refiners want more pipeline to keep down the cost of transporting oil by truck or rail. Service companies want more pipeline because they make money in the design and laying stages, and get a steady income from maintenance and testing.
Oil Refinery Safety
Oil refineries can be dangerous places to work at times. For example, in 2005 there was an accident at BP's Texas City oil refinery. According to the U.S. Chemical Safety Board, a series of explosions occurred during the restarting of a hydrocarbon isomerization unit. Fifteen workers were killed and 180 others were injured. The explosions occurred when a distillation tower flooded with hydrocarbons and was over-pressurized, causing a geyser-like release from the vent stack.
How Many Oil Refineries Are There in the United States?
As of Jan. 1, 2021, there were 129 operable petroleum refineries in the United States.
According to the U.S. Energy Information Administration ("When was the last refinery built in the United States?"), the last refinery to enter operation did so in 2019, in Texas.
How Much Crude Oil Does It Take to Make a Gallon of Gasoline?
One barrel of oil (42 gallons) produces 19 to 20 gallons of gasoline and 11 to 12 gallons of diesel fuel.
What Is the Crack Spread?
In commodities trading, the "crack spread" is the difference in price between a barrel of unrefined crude oil and the refined products (such as gasoline) that are derived from it. Traders look to changes in the crack spread as a market signal for price movements in oil and refined products.
Additional Information:
How crude oil is refined into petroleum products
Petroleum refineries convert (refine) crude oil into petroleum products for use as fuels for transportation, heating, paving roads, and generating electricity and as feedstocks for making chemicals.
Refining breaks crude oil down into its various components, which are then selectively reconfigured into new products. Petroleum refineries are complex and expensive industrial facilities. All refineries have three basic steps:
* Separation
* Conversion
* Treatment
Separation
Modern separation involves piping crude oil through hot furnaces. The resulting liquids and vapors are discharged into distillation units. All refineries have atmospheric distillation units, but more complex refineries may have vacuum distillation units.
Inside the distillation units, the liquids and vapors separate into petroleum components, called fractions, according to their boiling points. Heavy fractions are on the bottom and light fractions are on the top.
The lightest fractions, including gasoline and liquefied refinery gases, vaporize and rise to the top of the distillation tower, where they condense back to liquids.
Medium weight liquids, including kerosene and distillates, stay in the middle of the distillation tower.
Heavier liquids, called gas oils, separate lower down in the distillation tower, and the heaviest fractions with the highest boiling points settle at the bottom of the tower.
Conversion
After distillation, heavy, lower-value distillation fractions can be processed further into lighter, higher-value products such as gasoline. At this point in the process, fractions from the distillation units are transformed into streams (intermediate components) that eventually become finished products.
The most widely used conversion method is called cracking because it uses heat, pressure, catalysts, and sometimes hydrogen to crack heavy hydrocarbon molecules into lighter ones. A cracking unit consists of one or more tall, thick-walled, rocket-shaped reactors and a network of furnaces, heat exchangers, and other vessels. Complex refineries may have one or more types of crackers, including fluid catalytic cracking units and hydrocracking/hydrocracker units.
Cracking is not the only form of crude oil conversion. Other refinery processes rearrange molecules rather than splitting molecules to add value.
Alkylation, for example, makes gasoline components by combining some of the gaseous byproducts of cracking. The process, which essentially is cracking in reverse, takes place in a series of large, horizontal vessels and tall, skinny towers.
Reforming uses heat, moderate pressure, and catalysts to turn naphtha, a light, relatively low-value fraction, into high-octane gasoline components.
Treatment
The finishing touches occur during the final treatment. To make gasoline, refinery technicians carefully combine a variety of streams from the processing units. Octane level, vapor pressure ratings, and other special considerations determine the gasoline blend.
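A very simplified picture of that blending step is a volume-weighted average of the component streams' octane numbers; real blending uses nonlinear blending indices, vapor-pressure limits and many other constraints, so the stream names and numbers below are purely illustrative assumptions:

```python
# Illustrative blend streams: volume fraction and research octane number (assumed).
streams = {
    "reformate":    (0.40, 98.0),
    "FCC gasoline": (0.35, 92.0),
    "alkylate":     (0.20, 96.0),
    "butane":       (0.05, 93.0),
}

total_volume = sum(fraction for fraction, _ in streams.values())
blend_octane = sum(fraction * octane for fraction, octane in streams.values()) / total_volume
print(f"approximate blend octane: {blend_octane:.1f}")
```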
Storage
Both incoming crude oil and the outgoing final products are stored temporarily in large tanks on a tank farm near the refinery. Pipelines, trains, and trucks carry the final products from the storage tanks to locations across the country.
2279) Training
Gist
Training is the process of learning the skills you need to do a particular job or activity.
Summary
Training is the action of informing or instructing your employees on a certain task in order to help them improve their performance or knowledge. If people are to perform their job to the highest possible standard, they must be effectively and efficiently trained.
Effective training means the activities have achieved the specific outcomes required. In addition, your workers need to gain or maintain the skills and knowledge they need to perform their work, direct others to perform work, and supervise work. Lack of training is one of the recognized causes of real quality problems.
Effective training should also be cost efficient, ensuring that the time and money spent on it are a good investment.
Details
Training is teaching, or developing in oneself or others, any skills and knowledge or fitness that relate to specific useful competencies. Training has specific goals of improving one's capability, capacity, productivity and performance. It forms the core of apprenticeships and provides the backbone of content at institutes of technology (also known as technical colleges or polytechnics). In addition to the basic training required for a trade, occupation or profession, training may continue beyond initial competence to maintain, upgrade and update skills throughout working life. People within some professions and occupations may refer to this sort of training as professional development. Training also refers to the development of physical fitness related to a specific competence, such as sport, martial arts, military applications and some other occupations.
Types:
Physical training
Physical training concentrates on mechanistic goals: training programs in this area develop specific motor skills, agility, strength or physical fitness, often with an intention of peaking at a particular time.
In military use, training means gaining the physical ability to perform and survive in combat, and learn the many skills needed in a time of war. These include how to use a variety of weapons, outdoor survival skills, and how to survive being captured by the enemy, among many others.
For psychological or physiological reasons, people who believe it may be beneficial to them can choose to practice relaxation training, or autogenic training, in an attempt to increase their ability to relax or deal with stress. While some studies have indicated relaxation training is useful for some medical conditions, the evidence for autogenic training is limited or comes from only a few studies.
Occupational skills training
Some occupations are inherently hazardous, and require a minimum level of competence before the practitioners can perform the work at an acceptable level of safety to themselves or others in the vicinity. Occupational diving, rescue, firefighting and operation of certain types of machinery and vehicles may require assessment and certification of a minimum acceptable competence before the person is allowed to practice the occupation.
On-job training
Some commentators use a similar term for workplace learning to improve performance: "training and development". There are also additional services available online for those who wish to receive training above and beyond what is offered by their employers. Some examples of these services include career counseling, skill assessment, and supportive services. One can generally categorize such training as on-the-job or off-the-job.
The on-the-job training method takes place in a normal working situation, using the actual tools, equipment, documents or materials that trainees will use when fully trained. On-the-job training has a general reputation as most effective for vocational work. It involves employees training at the place of work while they are doing the actual job. Usually, a professional trainer (or sometimes an experienced and skilled employee) serves as the instructor using hands-on practical experience which may be supported by formal classroom presentations. Sometimes training can occur by using web-based technology or video conferencing tools. On-the-job training is applicable in all departments within an organization.
Simulation based training is another method which uses technology to assist in trainee development. This is particularly common in the training of skills requiring a very high degree of practice, and in those which include a significant responsibility for life and property. An advantage is that simulation training allows the trainer to find, study, and remedy skill deficiencies in their trainees in a controlled, virtual environment. This also allows the trainees an opportunity to experience and study events that would otherwise be rare on the job, e.g., in-flight emergencies, system failure, etc., wherein the trainer can run 'scenarios' and study how the trainee reacts, thus assisting in improving his/her skills if the event was to occur in the real world. Examples of skills that commonly include simulator training during stages of development include piloting aircraft, spacecraft, locomotives, and ships, operating air traffic control airspace/sectors, power plant operations training, advanced military/defense system training, and advanced emergency response training like fire training or first-aid training.
Off-the-job training method takes place away from normal work situations — implying that the employee does not count as a directly productive worker while such training takes place. Off-the-job training method also involves employee training at a site away from the actual work environment. It often utilizes lectures, seminars, case studies, role playing, and simulation, having the advantage of allowing people to get away from work and concentrate more thoroughly on the training itself. This type of training has proven more effective in inculcating concepts and ideas. Many personnel selection companies offer a service which would help to improve employee competencies and change the attitude towards the job. The internal personnel training topics can vary from effective problem-solving skills to leadership training.
A more recent development in job training is the On-the-Job Training Plan or OJT Plan. According to the United States Department of the Interior, a proper OJT plan should include: An overview of the subjects to be covered, the number of hours the training is expected to take, an estimated completion date, and a method by which the training will be evaluated.
Religion and spirituality
In religious and spiritual use, the word "training" may refer to the purification of the mind, heart, understanding and actions to obtain a variety of spiritual goals such as (for example) closeness to God or freedom from suffering. Note for example the institutionalised spiritual training of Threefold Training in Buddhism, meditation in Hinduism or discipleship in Christianity. These aspects of training can be short-term or can last a lifetime, depending on the context of the training and which religious group it is a part of.
Artificial-intelligence feedback
Learning processes developed for artificial intelligence are typically also known as training. Evolutionary algorithms, including genetic programming and other methods of machine learning, use a system of feedback based on "fitness functions" to allow computer programs to determine how well an entity performs a task. The methods construct a series of programs, known as a “population” of programs, and then automatically test them for "fitness", observing how well they perform the intended task. The system automatically generates new programs based on members of the population that perform the best. These new members replace programs that perform the worst. The procedure repeats until the achievement of optimum performance. In robotics, such a system can continue to run in real-time after initial training, allowing robots to adapt to new situations and to changes in themselves, for example, due to wear or damage. Researchers have also developed robots that can appear to mimic simple human behavior as a starting point for training.
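A minimal sketch of the loop described above, applied to a toy task (evolving bit strings toward an all-ones target); the representation, population size, mutation rate and fitness function are all illustrative assumptions:

```python
import random

TARGET = [1] * 20                       # toy goal: an all-ones bit string
POP_SIZE, GENERATIONS, MUTATION = 30, 50, 0.05

def fitness(individual):
    """Score how well an individual performs the (toy) task."""
    return sum(1 for bit, goal in zip(individual, TARGET) if bit == goal)

def mutate(individual):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION else bit for bit in individual]

# Initial population of random candidates.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Test every member for fitness and keep the best-performing half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Replace the worst performers with mutated copies of the best.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness after training:", max(fitness(ind) for ind in population))
```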
Additional Information
Employee training and development includes any activity that helps employees acquire new, or improve existing, knowledge or skills. Training is a formal process by which talent development professionals help individuals improve performance at work. Development is the acquisition of knowledge, skill, or attitude that prepares people for new directions or responsibilities. Training is one specific and common form of employee development; other forms include coaching, mentoring, informal learning, self-directed learning, or experiential learning.
What Are the Benefits of Employee Training and Development?
Employee training and development can help employees become better at their jobs and overcome performance gaps that are based on lack of knowledge or skills. This can help organizations and teams be more productive and obtain improved business outcomes, leading to a competitive advantage over other companies.
Training can help organizations be more innovative and agile in responding to change and can help with necessary upskilling and reskilling to help organizations ensure that their labor force meets their current needs. Employee training and development also can help with succession planning by helping to identify high-performing employees and then assisting those employees with the development of the knowledge and skills they need to advance into more senior roles. Employee training and development can be an effective tool for recruiting and retention, since many employees cite a lack of development opportunities at their current job as a primary reason for leaving. Employees who have access to training and development opportunities are more likely to stay at their organizations for a longer period of time and be more engaged while there; in fact, LinkedIn's 2018 Workplace Learning Report found that 93 percent of employees would stay at a company longer if it invested in their careers. Its 2021 Workplace Learning Report additionally found that companies with high internal mobility retain employees for twice as long. Finally, some forms of employee training, such as compliance training or safety training, can help organizations avoid lawsuits, workplace injuries, or other adverse outcomes.
What Types of Employee Training and Development Exist?
There are many types of employee training and development. In high performing organizations, training and development initiatives are based on organizational needs, the target audience for the initiative, and the type of knowledge or skill that learners are expected to obtain. Some of the most common types of employee training and development include:
* Technical training is training based on a technical product or task. Technical training is often specifically tailored to a particular job task at a single organization. Skills training is training to help employees develop or practice skills that are necessary for their jobs.
* Soft skills training is a subset of skills training that focuses specifically on soft skills, as opposed to technical or “hard” skills. Soft skills include emotional intelligence, adaptability, creativity, influence, communication, and teamwork. Some trainers refer to soft skills as “power skills” or “professional life skills” to emphasize their importance.
* Compliance training is training on actions that are mandated by a law, agency, or policy outside the organization’s purview. Compliance training is often industry-specific but may include topics such as cybersecurity and sexual harassment.
* Safety training is training that focuses on improving organizational health and safety and reducing workplace injury. It can encompass employee safety, workplace safety, customer safety, and digital and information safety. Safety training can include both training that is required by law and training that organizations offer without legally being required to do so.
* Management development focuses on providing managers with the knowledge and skills that they need to be effective managers and developers of talent. Topics may include accountability, collaboration, communication, engagement, and listening and assessing.
* Leadership development is any activity that increases an individual’s leadership ability or an organization’s leadership capability, including activities such as learning events, mentoring, coaching, self-study, job rotation, and special assignments to develop the knowledge and skills required to lead.
* Executive development provides senior leaders and executives with the knowledge and skills that they need to improve in their roles. In contrast to leadership development, which focuses on helping non-executive employees develop the skills they need to obtain a leadership position, executive development is targeted at people already at a leadership level within their organization.
* Customer service training focuses on providing employees with the knowledge and skills to provide exceptional customer service. Customer service training should include content on essential employee behaviors, service strategies, and service systems.
* Customer education training is when employees—often at technology or SaaS companies—teach customers how to use a company’s products and services. Customer education training differs from traditional employee learning and development because the intended audience is customers, not employees.
* Workforce training focuses on upskilling workers to help them obtain career success. Workforce training programs are often offered by federal, state, or local governments, or by nonprofit organizations. Workforce training may include job-specific content but also may include content on organizational culture, leadership skills, and professionalism. Workforce training is often accessed by people who are new to the workforce or who are trying to enter a new job type or industry.
* Corporate training focuses on helping workers already employed by an organization obtain new knowledge and skills. That company or organization offers training to their internal employees to help them become better at their current jobs, advance in their careers, or close organizational skill gaps.
* Onboarding, sometimes known as new employee orientation, is the process through which organizations equip new employees with the knowledge and skills they need to succeed at their jobs.
* Sales enablement is the strategic and cross functional effort to increase the productivity of market-facing teams by providing ongoing and relevant resources throughout the buyer journey to drive business impact. It encompasses sales training, coaching, content creation, process improvement, talent development, and compensation, among other areas.
What Are Examples of Effective Employee Training Methods?
There are many types of employee training and development methods, including:
* Instructor-led training, which can be either in-person or virtual.
* In-person training refers to training in which the instructor is physically in the same room as the learners. This also may be referred to as face-to-face training or classroom training.
* Virtual Instructor-Led Training (VILT) refers to instructor-led training that occurs virtually when the instructor and learners are physically dispersed. VILT takes place through a virtual platform such as Zoom or Webex. VILT also may be referred to as synchronous e-learning, live-online training, synchronous online training, or virtual classroom training
* E-learning is a structured course or learning experience delivered electronically. E-learning can be either asynchronous or synchronous. Asynchronous e-learning is self-paced and may include pre-recorded lecture content and video, visuals and/or text, knowledge quizzes, simulations, games, and other interactive elements.
* Microlearning enhances learning and performance through short pieces of content. Microlearning assets can usually be accessed on-demand when the learner needs them. Common forms of microlearning include how-to videos, self-paced e-learning, games, blogs, job aids, podcasts, infographics, and other visuals.
* Simulation is a broad genre of experiences, including games for entertainment and immersive learning simulations for formal learning programs. Simulations model and present situations, portraying actions, demonstrating how those actions affect relevant systems, and showing how those systems produce feedback and results.
* On-the-job training is a delivery system that dispenses training to employees as they need it. As opposed to sending an employee away from work to a training session, on-the-job training allows employees to learn while in the flow of work.
* Coaching is a discipline that helps to enhance individual, team, and organizational performance. Coaching is an interactive process that involves listening, asking powerful questions, strengthening conversations, and creating action plans, with the goal of helping individuals develop towards their preferred future state.
* Mentoring is a reciprocal and collaborative at-will relationship that most often occurs between a senior and junior employee for the purpose of the mentee’s growth, learning, and career development. Mentors often act as role models for their mentee and provide guidance to help them reach their goals.
* Blended learning refers to a training program that includes more than one of the training types referenced above. Traditionally blended learning most often includes a mix of in-person training and e-learning. However, it can refer to any combination of formal and informal learning events, such as classroom instruction, online resources, and on-the-job coaching.
2280) Waterbed
Gist
Waterbeds reduce back problems, help asthma sufferers and have many benefits that are good for your health. Many conditions - including that of perfect health - will derive benefit from a waterbed, as members of the medical profession have long acknowledged.
Summary
This is a unique mattress that is specifically meant to prevent bed-ridden patients from developing bed sores. The waterbed also has an elegant look and ensures maximum comfort. Water beds are proven to help you sleep better, reduce back problems effectively, help asthma sufferers, and have many other benefits that are good for your health. The principles of flotation have been documented to be especially helpful with the following conditions: premature infants and newborns, orthopedic problems, paralysis, severe burns, trauma, auto accidents, plastic surgery, general surgery, cardiac rehabilitation, cystic fibrosis, cerebral palsy, multiple sclerosis, and wheelchair patients. Water beds have become an essential therapeutic fixture in benefiting many patients with different medical problems.
A medical waterbed (water mattress) can bring relief from pain and provide a cool and soothing sensation. This type of bed is used by patients who must lie in bed for long periods, such as those with broken legs or backs, coma, stroke, heart disease, or arthritis that prevents movement. Constantly lying in the same position puts prolonged pressure and friction on particular parts of the body, which gives rise to the possibility of developing sores in those places; using this kind of bed greatly reduces that possibility. It is made from a single textured, rubberized fabric with three compartments, and it is completely leak-proof, comfortable, hygienic and durable.
Details
A waterbed, water mattress, or flotation mattress is a bed or mattress filled with water. Waterbeds intended for medical therapies appear in various reports through the 19th century. The modern version, invented in San Francisco and patented in 1971, became a popular consumer item in the United States through the 1980s with up to 20% of the market in 1986 and 22% in 1987. By 2013, they accounted for less than 5% of new bed sales.
Construction
Waterbeds primarily consist of two types, hard-sided beds and soft-sided beds.
A hard-sided waterbed consists of a water-containing mattress inside a rectangular frame of wood resting on a plywood deck that sits on a platform.
A soft-sided waterbed consists of a water-containing mattress inside of a rectangular frame of sturdy foam, zippered inside a fabric casing, which sits on a platform. It looks like a conventional bed and is designed to fit existing bedroom furniture. The platform usually looks like a conventional foundation or box spring, and sits atop a reinforced metal frame.
Early waterbed mattresses, and many inexpensive modern mattresses, have a single water chamber. When the water mass in these "free flow" mattresses is disturbed, significant wave motion can be felt, and they need time to stabilize after a disturbance. Later models employed wave-reducing methods, including fiber batting. Some models only partially reduce wave motion, while more expensive models almost eliminate wave motion.
Water beds are normally heated. If no heater is used, the water will equalize with the room air temperature (around 70 °F). In models with no heater, there are at least several inches of insulation above the water chamber. This partially eliminates the body-contouring benefit of a waterbed, and the ability to control the bed temperature. For these reasons, most waterbeds have temperature control systems. Temperature is controlled via a thermostat and set to personal preference, most commonly around average skin temperature, 30 °C (86 °F). A typical heating pad consumes 150–400 watts of power. Depending on insulation, bedding, temperature, use, and other factors, electricity usage may vary significantly.
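For a back-of-the-envelope sense of what that heater power means, here is a short calculation; the duty cycle and electricity price are illustrative assumptions, since real usage varies with insulation, bedding and room temperature as noted above:

```python
heater_watts  = 275    # midpoint of the 150-400 W range quoted above
duty_cycle    = 0.40   # assumed fraction of the day the heater actually runs
price_per_kwh = 0.15   # assumed electricity price in USD/kWh

kwh_per_day = heater_watts * duty_cycle * 24 / 1000
print(f"energy: {kwh_per_day:.1f} kWh/day")
print(f"cost:   {kwh_per_day * price_per_kwh:.2f} USD/day "
      f"(~{kwh_per_day * price_per_kwh * 30:.0f} USD/month)")
```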
Waterbeds are usually constructed from soft polyvinyl chloride (PVC) or similar material. They can be repaired with practically any vinyl repair kit.
Types of waterbed mattresses
* Free flow mattress: Also known as a full wave mattress. It contains only water but no baffles or inserts.
* Semi-waveless mattress: Contains a few fiber inserts and/or baffles to control the water motion and increase support.
* Waveless mattress: Contains many layers of fiber inserts and/or baffles to control the water motion and increase support. Frequently, the better mattresses contain additional layers in the center third of the mattress called special lumbar support.
2281) Foundation (Engineering)
Gist
A foundation is the base on which something stands. The word can also denote the act of founding or establishing something, the state of being founded or established, or an endowment or legacy for the perpetual support of an institution such as a school or hospital. In engineering, a foundation is the part of a structure that transmits its loads to the ground.
Summary
A foundation is the part of a structural system that supports and anchors the superstructure of a building and transmits its loads directly to the earth. To prevent damage from repeated freeze-thaw cycles, the bottom of the foundation must be below the frost line. The foundations of low-rise residential buildings are nearly all supported on spread footings, wide bases (usually of concrete) that support walls or piers and distribute the load over a greater area. A concrete grade beam supported by isolated footings, piers, or piles may be placed at ground level, especially in a building without a basement, to support the exterior wall. Spread footings are also used—in greatly enlarged form—for high-rise buildings. Other systems for supporting heavy loads include piles, concrete caisson columns, and building directly on exposed rock. In yielding soil, a floating foundation—consisting of rigid, boxlike structures set at such a depth that the weight of the soil removed to place it equals the weight of the construction supported—may be used.
Details
In engineering, a foundation is the element of a structure which connects it to the ground or more rarely, water (as with floating structures), transferring loads from the structure to the ground. Foundations are generally considered either shallow or deep. Foundation engineering is the application of soil mechanics and rock mechanics (geotechnical engineering) in the design of foundation elements of structures.
Purpose
Foundations provide the structure's stability from the ground:
* To distribute the weight of the structure over a large area in order to avoid overloading the underlying soil (possibly causing unequal settlement); a minimal bearing-pressure check is sketched after this list.
* To anchor the structure against natural forces including earthquakes, floods, droughts, frost heaves, tornadoes and wind.
* To provide a level surface for construction.
* To anchor the structure deeply into the ground, increasing its stability and preventing overloading.
* To prevent lateral movements of the supported structure (in some cases).
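To make the first point concrete, a footing's applied pressure is simply the load divided by its contact area, and it must stay below the soil's allowable bearing capacity. The sketch below uses illustrative numbers, not design values:

```python
def bearing_pressure_ok(load_kn, footing_area_m2, allowable_kpa):
    """Check that the footing pressure does not exceed the allowable soil capacity."""
    pressure_kpa = load_kn / footing_area_m2   # kN/m^2 is the same as kPa
    print(f"applied pressure: {pressure_kpa:.0f} kPa (allowable: {allowable_kpa:.0f} kPa)")
    return pressure_kpa <= allowable_kpa

# Illustrative example: a 300 kN column load on a 1.5 m x 1.5 m spread footing
# bearing on soil with an assumed allowable capacity of 150 kPa.
print("OK" if bearing_pressure_ok(300, 1.5 * 1.5, 150) else "footing too small")
```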
Requirements of a good foundation
The design and the construction of a well-performing foundation must possess some basic requirements:
* The design and the construction of the foundation is done such that it can sustain as well as transmit the dead and the imposed loads to the soil. This transfer has to be carried out without resulting in any form of settlement that can cause stability issues for the structure.
* Differential settlements can be avoided by having a rigid base for the foundation. These issues are more pronounced in areas where the superimposed loads are not uniform in nature.
* Based on the soil and area it is recommended to have a deeper foundation so that it can guard any form of damage or distress. These are mainly caused due to the problem of shrinkage and swelling because of temperature changes.
* The location of the foundation chosen must be an area that is not affected or influenced by future works or factors.
Historic types:
Earthfast or post in ground construction
Buildings and structures have a long history of being built with wood in contact with the ground. Post in ground construction may technically have no foundation. Timber pilings were used on soft or wet ground even below stone or masonry walls. In marine construction and bridge building a crisscross of timbers or steel beams in concrete is called grillage.
Padstones
Perhaps the simplest foundation is the padstone, a single stone which both spreads the weight on the ground and raises the timber off the ground. Staddle stones are a specific type of padstone.
Stone foundations
Dry stone and stones laid in mortar to build foundations are common in many parts of the world. Dry laid stone foundations may have been pointed with mortar after construction. Sometimes the top, visible course consists of hewn, quarried stones. Besides using mortar, stones can also be put in a gabion. One disadvantage is that if regular steel rebars are used, the gabion will last much less long than if mortar is used (due to rusting). Using weathering steel rebars could reduce this disadvantage somewhat.
Rubble-trench foundations
Rubble trench foundations are a shallow trench filled with rubble or stones. These foundations extend below the frost line and may have a drain pipe which helps groundwater drain away. They are suitable for soils with a capacity of more than 10 tonnes/m^2 (2,000 pounds per square foot).
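A quick check of that unit conversion, assuming metric tonnes and standard gravity:

```python
TONNE_KG  = 1000.0
G         = 9.81      # m/s^2, standard gravity
PSF_TO_PA = 47.88     # one pound-force per square foot, in pascals

capacity_t_per_m2 = 10
pressure_kpa = capacity_t_per_m2 * TONNE_KG * G / 1000
pressure_psf = pressure_kpa * 1000 / PSF_TO_PA

# Roughly the 2,000 pounds per square foot quoted above.
print(f"{capacity_t_per_m2} tonnes/m^2 = {pressure_kpa:.0f} kPa = {pressure_psf:,.0f} psf")
```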
Modern types:
Shallow foundations
Shallow foundations, often called footings, are usually embedded about a meter or so into soil. One common type is the spread footing, which consists of strips or pads of concrete (or other materials) which extend below the frost line and transfer the weight from walls and columns to the soil or bedrock.
Another common type of shallow foundation is the slab-on-grade foundation where the weight of the structure is transferred to the soil through a concrete slab placed at the surface. Slab-on-grade foundations can be reinforced mat slabs, which range from 25 cm to several meters thick, depending on the size of the building, or post-tensioned slabs, which are typically at least 20 cm for houses, and thicker for heavier structures.
Another way to install ready-to-build foundations that is more environmentally friendly is to use screw piles. Screw pile installations have also extended to residential applications, with many homeowners choosing a screw pile foundation over other options. Some common applications for helical pile foundations include wooden decks, fences, garden houses, pergolas, and carports.
Deep foundations
Deep foundations are used to transfer the load of a structure down through the upper weak layer of topsoil to the stronger layer of subsoil below. There are different types of deep footings including impact driven piles, drilled shafts, caissons, screw piles, geo-piers and earth-stabilized columns. The naming conventions for different types of footings vary between different engineers. Historically, piles were wood, later steel, reinforced concrete, and pre-tensioned concrete.
Monopile foundation
A monopile foundation is a type of deep foundation which uses a single, generally large-diameter, structural element embedded into the earth to support all the loads (weight, wind, etc.) of a large above-surface structure.
Many monopile foundations have been used in recent years for economically constructing fixed-bottom offshore wind farms in shallow-water subsea locations. For example, a single wind farm off the coast of England went online in 2008 with over 100 turbines, each mounted on a 4.74-meter-diameter monopile footing in ocean depths up to 16 meters of water.
Floating/barge
A floating foundation is one that sits on a body of water, rather than dry land. This type of foundation is used for some bridges and floating buildings.
Design:
Foundations are designed by a geotechnical engineer to have an adequate load capacity for the type of subsoil or rock supporting the foundation, and the footing itself may be designed structurally by a structural engineer. The primary design concerns are settlement and bearing capacity. When considering settlement, both total settlement and differential settlement are normally evaluated. Differential settlement is when one part of a foundation settles more than another part. This can cause problems for the structure which the foundation is supporting. Expansive clay soils can also cause problems.
2282) Sweetness
Gist
Sweetness is the quality of being sweet.
Summary
Sweetness is an important and easily identifiable characteristic of glucose- and fructose-containing sweeteners. The sensation of sweetness has been extensively studied. Shallenberger defines sweetness as a primary taste. He furthermore asserts that no two substances can have the same taste. Thus, when compared to sucrose, no other sweetener will have the unique properties of sweetness onset, duration and intensity of sucrose. It is possible to compare the relative sweetness values of various sweeteners, but it must be kept in mind that these are relative values. There will be variations in onset, which is a function of the chirality of the sweetener, variations in duration, which is a function of the molecular weight profile and is impacted by the viscosity, and changes in intensity, which is affected by the solids level and the particular isomers present. Such variables are demonstrated by the performance of fructose in solution. The fructose molecule may exist in any of several forms. The exact concentration of any of these isomers depends on the temperature of the solution. At cold temperatures the sweetest form, β-D-fructopyranose, predominates, but at hot temperatures, fructofuranose forms predominate and the perceived sweetness lessens.
Details
Sweetness is a basic taste most commonly perceived when eating foods rich in sugars. Sweet tastes are generally regarded as pleasurable. In addition to sugars like sucrose, many other chemical compounds are sweet, including aldehydes, ketones, and sugar alcohols. Some are sweet at very low concentrations, allowing their use as non-caloric sugar substitutes. Such non-sugar sweeteners include saccharin, aspartame, sucralose and stevia. Other compounds, such as miraculin, may alter perception of sweetness itself.
The perceived intensity of sugars and high-potency sweeteners, such as aspartame and neohesperidin dihydrochalcone, is heritable, with gene effects accounting for approximately 30% of the variation.
The chemosensory basis for detecting sweetness, which varies between both individuals and species, has only begun to be understood since the late 20th century. One theoretical model of sweetness is the multipoint attachment theory, which involves multiple binding sites between a sweetness receptor and a sweet substance.
Studies indicate that responsiveness to sugars and sweetness has very ancient evolutionary beginnings, being manifest as chemotaxis even in motile bacteria such as E. coli. Newborn human infants also demonstrate preferences for high sugar concentrations and prefer solutions that are sweeter than lactose, the sugar found in breast milk. Sweetness appears to have the highest taste recognition threshold, being detectable at around 1 part in 200 of sucrose in solution. By comparison, bitterness appears to have the lowest detection threshold, at about 1 part in 2 million for quinine in solution. In the natural settings that human primate ancestors evolved in, sweetness intensity should indicate energy density, while bitterness tends to indicate toxicity. The high sweetness detection threshold and low bitterness detection threshold would have predisposed our primate ancestors to seek out sweet-tasting (and energy-dense) foods and avoid bitter-tasting foods. Even amongst leaf-eating primates, there is a tendency to prefer immature leaves, which tend to be higher in protein and lower in fibre and poisons than mature leaves. The "sweet tooth" thus has an ancient heritage, and while food processing has changed consumption patterns, human physiology remains largely unchanged. Biologically, a variant in fibroblast growth factor 21 increases craving for sweet foods.
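As a quick unit conversion of the recognition thresholds just quoted, the snippet below expresses "1 part in 200" and "1 part in 2 million" as parts per million; only the arithmetic is added here.

```python
# Express the taste thresholds quoted above as parts per million (ppm) by mass.
thresholds = {
    "sucrose (sweet)": 1 / 200,         # about 1 part in 200
    "quinine (bitter)": 1 / 2_000_000,  # about 1 part in 2 million
}

for substance, fraction in thresholds.items():
    print(f"{substance}: {fraction:.2e} mass fraction = {fraction * 1e6:,.1f} ppm")
# sucrose: ~5,000 ppm (0.5%); quinine: ~0.5 ppm -- about a ten-thousand-fold difference.
```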
Examples of sweet substances
A great diversity of chemical compounds, such as aldehydes and ketones, are sweet. Among common biological substances, all of the simple carbohydrates are sweet to at least some degree. Sucrose (table sugar) is the prototypical example of a sweet substance. Sucrose in solution has a sweetness perception rating of 1, and other substances are rated relative to this. For example, another sugar, fructose, is somewhat sweeter, being rated at 1.7 times the sweetness of sucrose. Some of the amino acids are mildly sweet: alanine, glycine, and serine are the sweetest. Some other amino acids are perceived as both sweet and bitter.
The sweetness of a 5% solution of glycine in water is comparable to that of a 5.6% glucose or 2.6% fructose solution.
A number of plant species produce glycosides that are sweet at concentrations much lower than common sugars. The most well-known example is glycyrrhizin, the sweet component of licorice root, which is about 30 times sweeter than sucrose. Another commercially important example is stevioside, from the South American shrub Stevia rebaudiana. It is roughly 250 times sweeter than sucrose. Another class of potent natural sweeteners are the sweet proteins such as thaumatin, found in the West African katemfe fruit. Hen egg lysozyme, an antibiotic protein found in chicken eggs, is also sweet.
Some variation in values is not uncommon between various studies. Such variations may arise from a range of methodological variables, from sampling to analysis and interpretation. Indeed, the taste index of 1, assigned to reference substances such as sucrose (for sweetness), hydrochloric acid (for sourness), quinine (for bitterness), and sodium chloride (for saltiness), is itself arbitrary for practical purposes. Some values, such as those for maltose and glucose, vary little. Others, such as aspartame and sodium saccharin, have much larger variation.
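A minimal sketch of how the relative sweetness index described above can be applied: given a rating on the sucrose = 1 scale, estimate the concentration that would roughly match a given sucrose solution. The ratings are those quoted in this section; the helper functions and example concentrations are illustrative, and real perception is not strictly linear in concentration.

```python
# Relative sweetness on the sucrose = 1 scale, using figures quoted in this section.
# These are rough relative values; onset, duration, and intensity differ between sweeteners.
RELATIVE_SWEETNESS = {
    "sucrose": 1.0,
    "fructose": 1.7,
    "glycyrrhizin": 30.0,
    "stevioside": 250.0,
}

def sucrose_equivalent(sweetener: str, concentration_pct: float) -> float:
    """Approximate sucrose concentration (%) with the same perceived sweetness."""
    return concentration_pct * RELATIVE_SWEETNESS[sweetener]

def matching_concentration(sweetener: str, sucrose_pct: float) -> float:
    """Concentration (%) of a sweetener needed to roughly match a given sucrose solution."""
    return sucrose_pct / RELATIVE_SWEETNESS[sweetener]

# A 10% sucrose solution could in principle be matched by roughly:
for name in ("fructose", "glycyrrhizin", "stevioside"):
    print(f"{name}: ~{matching_concentration(name, 10.0):.3f}% solution")
```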
Even some inorganic compounds are sweet, including beryllium chloride and lead(II) acetate. The latter may have contributed to lead poisoning among the ancient Roman aristocracy: the Roman delicacy sapa was prepared by boiling soured wine (containing acetic acid) in lead pots.
Hundreds of synthetic organic compounds are known to be sweet, but only a few of these are legally permitted as food additives. For example, chloroform, nitrobenzene, and ethylene glycol are sweet, but also toxic. Saccharin, cyclamate, aspartame, acesulfame potassium, sucralose, alitame, and neotame are commonly used.
Sweetness modifiers
A few substances alter the way sweet taste is perceived. One class of these inhibits the perception of sweet tastes, whether from sugars or from highly potent sweeteners. Commercially, the most important of these is lactisole, a compound produced by Domino Sugar. It is used in some jellies and other fruit preserves to bring out their fruit flavors by suppressing their otherwise strong sweetness.
Two natural products have been documented to have similar sweetness-inhibiting properties: gymnemic acid, extracted from the leaves of the Indian vine Gymnema sylvestre and ziziphin, from the leaves of the Chinese jujube (Ziziphus jujuba). Gymnemic acid has been widely promoted within herbal medicine as a treatment for sugar cravings and diabetes.
On the other hand, two plant proteins, miraculin and curculin, cause sour foods to taste sweet. Once the tongue has been exposed to either of these proteins, sourness is perceived as sweetness for up to an hour afterwards. While curculin has some innate sweet taste of its own, miraculin is by itself quite tasteless.
The sweetness receptor
Despite the wide variety of chemical substances known to be sweet, and knowledge that the ability to perceive sweet taste must reside in taste buds on the tongue, the biomolecular mechanism of sweet taste was sufficiently elusive that as recently as the 1990s, there was some doubt whether any single "sweetness receptor" actually exists.
The breakthrough for the present understanding of sweetness occurred in 2001, when experiments with laboratory mice showed that mice possessing different versions of the gene T1R3 prefer sweet foods to different extents. Subsequent research has shown that the T1R3 protein forms a complex with a related protein, called T1R2, to form a G-protein coupled receptor that is the sweetness receptor in mammals.
Human studies have shown that sweet taste receptors are not only found in the tongue, but also in the lining of the gastrointestinal tract as well as the nasal epithelium, pancreatic islet cells, sperm and testes. It is proposed that the presence of sweet taste receptors in the GI tract controls the feeling of hunger and satiety.
Other research has shown that the threshold of sweet taste perception correlates directly with the time of day. This is believed to be a consequence of oscillating leptin levels in the blood, which may affect the perceived sweetness of food. Scientists hypothesize that this is an evolutionary relic of the diurnal lifestyle of animals such as humans.
Sweetness perception may differ between species significantly. For example, even amongst the primates sweetness is quite variable. New World monkeys do not find aspartame sweet, while Old World monkeys and apes (including most humans) all do. Felids like domestic cats cannot perceive sweetness at all. The ability to taste sweetness often atrophies genetically in species of carnivores who do not eat sweet foods like fruits, including bottlenose dolphins, sea lions, spotted hyenas and fossas.
2283) Iron ore
Gist
Mining iron ore is a high-volume, low-margin business, as the value of iron is significantly lower than that of base metals. It is highly capital intensive, and requires significant investment in infrastructure such as rail in order to transport the ore from the mine to a freight ship.
Summary
Iron ores are rocks and minerals from which metallic iron can be economically extracted. The ores are usually rich in iron oxides and vary in color from dark grey, bright yellow, or deep purple to rusty red. The iron is usually found in the form of magnetite (Fe3O4, 72.4% Fe), hematite (Fe2O3, 69.9% Fe), goethite (FeO(OH), 62.9% Fe), limonite (FeO(OH)·n(H2O), 55% Fe), or siderite (FeCO3, 48.2% Fe).
Ores containing very high quantities of hematite or magnetite, typically greater than about 60% iron, are known as natural ore or direct shipping ore, and can be fed directly into iron-making blast furnaces. Iron ore is the raw material used to make pig iron, which is one of the main raw materials for making steel—98% of mined iron ore is used to make steel. In 2011 the Financial Times quoted Christopher LaFemina, mining analyst at Barclays Capital, as saying that iron ore is "more integral to the global economy than any other commodity, except perhaps oil".
Metallic iron is virtually unknown on the Earth's surface except as iron-nickel alloys from meteorites and very rare forms of deep mantle xenoliths. Although iron is the fourth-most abundant element in the Earth's crust, composing about 5%, the vast majority is bound in silicate or, more rarely, carbonate minerals, and smelting pure iron from these minerals would require a prohibitive amount of energy. Therefore, all sources of iron used by human industry exploit comparatively rarer iron oxide minerals, primarily hematite.
Prehistoric societies used laterite as a source of iron ore. Prior to the industrial revolution, most iron was obtained from widely-available goethite or bog ore, for example, during the American Revolution and the Napoleonic Wars. Historically, much of the iron ore utilized by industrialized societies has been mined from predominantly hematite deposits with grades of around 70% Fe. These deposits are commonly referred to as "direct shipping ores" or "natural ores". Increasing iron ore demand, coupled with the depletion of high-grade hematite ores in the United States, led after World War II to the development of lower-grade iron ore sources, principally the use of magnetite and taconite.
Iron ore mining methods vary by the type of ore being mined. There are four main types of iron ore deposits worked currently, depending on the mineralogy and geology of the ore deposits. These are magnetite, titanomagnetite, massive hematite, and pisolitic ironstone deposits.
The origin of iron can be ultimately traced to its formation through nuclear fusion in stars, and most of the iron is thought to have originated in dying stars that are large enough to explode as supernovae. The Earth's core is thought to consist mainly of iron, but this is inaccessible from the surface. Some iron meteorites are thought to have originated from asteroids 1,000 km (620 mi) in diameter or larger.
Details
Iron ores occur in igneous, metamorphic (transformed), or sedimentary rocks in a variety of geologic environments. Most are sedimentary, but many have been changed by weathering, and so their precise origin is difficult to determine. The most widely distributed iron-bearing minerals are oxides, and iron ores consist mainly of hematite (Fe2O3), which is red; magnetite (Fe3O4), which is black; limonite or bog-iron ore (2Fe2O3·3H2O), which is brown; and siderite (FeCO3), which is pale brown. Hematite and magnetite are by far the most common types of ore.
Pure magnetite contains 72.4 percent iron, hematite 69.9 percent, limonite 59.8 percent, and siderite 48.2 percent, but, since these minerals never occur alone, the metal content of real ores is lower. Deposits with less than 30 percent iron are commercially unattractive, and, although some ores contain as much as 66 percent iron, there are many in the 50–60 percent range. An ore’s quality is also influenced by its other constituents, which are collectively known as gangue. Silica (SiO2) and phosphorus-bearing compounds (usually reported as P2O5) are especially important because they affect the composition of the metal and pose extra problems in steelmaking.
China, Brazil, Australia, Russia, and Ukraine are the five biggest producers of iron ore, but significant amounts are also mined in India, the United States, Canada, and Kazakhstan. Together, these nine countries produce 80 percent of the world’s iron ore. Brazil, Australia, Canada, and India export the most, although Sweden, Liberia, Venezuela, Mauritania, and South Africa also sell large amounts. Japan, the European Union, and the United States are the major importers.
Mining and concentrating
Most iron ores are extracted by surface mining. Some underground mines do exist, but, wherever possible, surface mining is preferred because it is cheaper.
Lumps and fines:
Crushing
As-mined iron ore contains lumps of varying size, the biggest being more than 1 metre (40 inches) across and the smallest about 1 millimetre (0.04 inch). The blast furnace, however, requires lumps between 7 and 25 millimetres, so the ore must be crushed to reduce the maximum particle size. Crushed ore is divided into various fractions by passing it over sieves through which undersized material falls. In this way, lump or rubble ore (7 to 25 millimetres in size) is separated from the fines (less than 7 millimetres). If the lump ore is of the appropriate quality, it can be charged to the blast furnace without any further processing. Fines, however, must first be agglomerated, which means reforming them into lumps of suitable size by a process called sintering.
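The size cut described above can be expressed as a simple classification rule (lump ore of 7 to 25 millimetres charged directly, fines under 7 millimetres sent to sintering, oversize returned for further crushing). The sketch below is illustrative only, and the sample particle sizes are hypothetical.

```python
def classify_particle(size_mm: float) -> str:
    """Sort crushed ore by the blast-furnace size limits described above."""
    if size_mm > 25:
        return "oversize - crush further"
    if size_mm >= 7:
        return "lump ore - charge directly to the blast furnace"
    return "fines - agglomerate by sintering before charging"

# Hypothetical screen analysis of a crushed sample (sizes in millimetres).
for size in (40.0, 18.0, 7.5, 3.0, 0.5):
    print(f"{size:>5.1f} mm: {classify_particle(size)}")
```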
Sintering
Iron ore sintering consists of heating a layer of fines until partial melting occurs and individual ore particles fuse together. For this purpose, a traveling-grate machine is used, and the burning of fine coke (known as coke breeze) within the ore generates the necessary heat. Before being delivered to the sinter machine, the ore mixture is moistened to cause fine particles to stick to larger ones, and then the appropriate amount of coke is added. Initially, coke on the upper surface of the bed is ignited when the mixture passes under burners in an ignition hood, but thereafter its combustion is maintained by air drawn through the bed of materials by a suction fan, so that by the time the sinter reaches the end of the machine it has completely fused. The grate on which the sinter mix rests consists of a series of cast-iron bars with narrow spaces between them to allow the air through. After cooling, the sinter is broken up and screened to yield blast-furnace feed and an undersize fraction that is recycled. Modern sinter plants are capable of producing up to 25,000 tons per day. Sintering machines are usually measured by hearth area; the biggest machines are 5 metres (16 feet) wide by 120 metres long, and the effective hearth area is 600 square metres (6,500 square feet).
Concentrates:
Upgrading
Crushing and screening are straightforward mechanical operations that do not alter an ore’s composition, but some ores need to be upgraded before smelting. Concentration refers to the methods of producing ore fractions richer in iron and lower in silica than the original material. Most processes rely on density differences to separate light minerals from heavier ones, so the ore is crushed and ground to release the ore minerals from the gangue. Magnetic techniques also are used.
The upgraded ore, or concentrate, is in the form of a very fine powder that is physically unsuitable for blast furnace use. It has a much smaller particle size than ore fines and cannot be agglomerated by sintering. Instead, concentrates must be agglomerated by pelletizing, a process that originated in Sweden and Germany about 1912–13 but was adapted in the 1940s to deal with low-grade taconite ores found in the Mesabi Range of Minnesota, U.S.
Pelletizing
First, moistened concentrates are fed to a rotating drum or an inclined disc, the tumbling action of which produces soft, spherical agglomerates. These “green” balls are then dried and hardened by firing in air to a temperature in the range of 1,250° to 1,340° C (2,300° to 2,440° F). Finally, they are slowly cooled. Finished pellets are round and have diameters of 10 to 15 millimetres, making them almost the ideal shape for the blast furnace.
The earliest kind of firing equipment was the shaft furnace. This was followed by the grate-kiln and the traveling grate, which together account for more than 90 percent of world pellet output. In shaft furnaces the charge moves down by gravity and is heated by a counterflow of hot combustion gases, but the grate-kiln system combines a horizontal traveling grate with a rotating kiln and a cooler so that drying, firing, and cooling are performed separately. In the traveling-grate process, pellets are charged at one end and dried, preheated, fired, and cooled as they are carried through successive sections of the equipment before exiting at the other end. Traveling grates and grate-kilns have similar capacities, and up to five million tons of pellets can be made in one unit annually.
Iron making
The primary objective of iron making is to release iron from chemical combination with oxygen, and, since the blast furnace is much the most efficient process, it receives the most attention here. Alternative methods known as direct reduction are used in over a score of countries, but less than 5 percent of iron is made this way. A third group of iron-making techniques classed as smelting-reduction is still in its infancy.
The blast furnace
Basically, the blast furnace is a countercurrent heat and oxygen exchanger in which rising combustion gas loses most of its heat on the way up, leaving the furnace at a temperature of about 200° C (390° F), while descending iron oxides are wholly converted to metallic iron. Process control and productivity improvements all follow from a consideration of these fundamental features. For example, the most important advance of the 20th century has been a switch from the use of randomly sized ore to evenly sized sinter and pellet charges. The main benefit is that the charge descends regularly, without sticking, because the narrowing of the range of particle sizes makes the gas flow more evenly, enhancing contact with the descending solids. (Even so, it is impossible to eliminate size variations completely; at the very least, some breakdown occurs between the sinter plant or coke ovens and the furnace.)
Structure
The furnace itself is a tall, vertical shaft that consists of a steel shell with a refractory lining of firebrick and graphite. Five sections can be identified. At the bottom is a parallel-sided hearth where liquid metal and slag collect, and this is surmounted by an inverted truncated cone known as the bosh. Air is blown into the furnace through tuyeres, water-cooled nozzles made of copper and mounted at the top of the hearth close to its junction with the bosh. A short vertical section called the bosh parallel, or the barrel, connects the bosh to the truncated upright cone that is the stack. Finally, the fifth and topmost section, through which the charge enters the furnace, is the throat. The lining in the bosh and hearth, where the highest temperatures occur, is usually made of carbon bricks, which are manufactured by pressing and baking a mixture of coke, anthracite, and pitch. Carbon is more resistant to the corrosive action of molten iron and slag than are the aluminosilicate firebricks used for the remainder of the lining. Firebrick quality is measured by the alumina (Al2O3) content, so that bricks containing 63 percent alumina are used in the bosh parallel, while 45 percent alumina is adequate for the stack.
Until recently, all blast furnaces used the double-bell system to introduce the charge into the stack. This equipment consists of two cones, called bells, each of which can be closed to provide a gas-tight seal. In operation, material is first deposited on the upper, smaller bell, which is then lowered a short distance to allow the charge to fall onto the larger bell. Next, the small bell is closed, and the large bell is lowered to allow the charge to drop into the furnace. In this way, gas is prevented from escaping into the atmosphere. Because it is difficult to distribute the burden evenly over the furnace cross section with this system, and because the abrasive action of the charge causes the bells to wear so that gas leakage eventually occurs, more and more furnaces are equipped with a bell-less top, in which the rate of material flow from each hopper is controlled by an adjustable gate and delivery to the stack is through a rotating chute whose angle of inclination can be altered. This arrangement gives good control of burden distribution, since successive portions of the charge can be placed in the furnace as rings of differing diameter. The charging pattern that gives the best furnace performance can then be found easily.
The general principles upon which blast-furnace design is based are as follows. Cold charge (mainly ore and coke), entering at the top of the stack, increases in temperature as it descends, so that it expands. For this reason the stack diameter must increase to let the charge move down freely, and typically the stack wall is displaced outward at an angle of 6° to 7° to the vertical. Eventually, melting of iron and slag takes place, and the voids between the solids are filled with liquid so that there is an apparent decrease in volume. This requires a smaller diameter, and the bosh wall therefore slopes inward and makes an angle to the vertical in the range of 6° to 9°. Over the years, the internal lines of the furnace that give it its characteristic shape have undergone a series of evolutionary changes, but the major alteration has been an increase in girth so that the ratio of height to bosh parallel has been progressively reduced as furnaces have become bigger.
For many years, the accepted method of building a furnace was to use the steel shell to give the structure rigidity and to support the stack with steel columns at regular intervals around the furnace. With very large furnaces, however, the mass is too great, so that a different construction must be used in which four large columns are joined to a box girder surrounding the furnace at a level near the top of the stack. The steel shell still takes most of the mass of the stack, but the furnace top is supported independently.
Operation
Solid charge is raised to the top of the furnace either in hydraulically operated skips or by the use of conveyor belts. Air blown into the furnace through the tuyeres is preheated to a temperature between 900° and 1,350° C (1,650° and 2,450° F) in hot-blast stoves, and in some cases it is enriched with up to 25 percent oxygen. The main product, molten pig iron (also called hot metal or blast-furnace iron), is tapped from the bottom of the furnace at regular intervals. Productivity is measured by dividing the output by the internal working volume of the furnace; 2 to 2.5 tons per cubic metre (125 to 150 pounds per cubic foot) can be obtained every 24 hours from furnaces with working volumes of 4,000 cubic metres (140,000 cubic feet).
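As a quick check on the productivity figures just quoted, the snippet below multiplies the stated working volume by the stated specific output to give the implied daily production.

```python
# Daily output implied by the productivity figures quoted above.
working_volume_m3 = 4_000          # internal working volume of a large furnace
productivity_range = (2.0, 2.5)    # tons of hot metal per cubic metre per 24 hours

low = working_volume_m3 * productivity_range[0]
high = working_volume_m3 * productivity_range[1]
print(f"Implied output: {low:,.0f} to {high:,.0f} tons of hot metal per day")
# -> 8,000 to 10,000 tons per day
```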
Two by-products, slag and gas, are also formed. Slag leaves the furnace by the same taphole as the iron (upon which it floats), and its composition generally lies in the range of 30–40 percent silica (SiO2), 5–15 percent alumina (Al2O3), 35–45 percent lime (CaO), and 5–15 percent magnesia (MgO). The gas exiting at the top of the furnace is composed mainly of carbon monoxide (CO), carbon dioxide (CO2), and nitrogen (N2); a typical composition would be 23 percent CO, 22 percent CO2, 3 percent water, and 49 percent N2. Its net combustion energy is roughly one-tenth that of methane. After the dust has been removed, this gas, together with some coke-oven gas, is burned in hot-blast stoves to heat the air blown in through the tuyeres. Hot-blast stoves are in effect temporary heat-storage devices consisting of a combustion chamber and a checkerwork of firebricks that absorb heat during the combustion period. When the stove is hot enough, combustion is stopped and cold air is blown through in the reverse direction, so that the checkerwork surrenders its heat to the air, which then travels to the furnace and enters via the tuyeres. Each furnace has three or four stoves to ensure a continuous supply of hot blast.
Chemistry
The internal workings of a blast furnace used to be something of a mystery, but iron-making chemistry is now well established. Coke burns in oxygen present in the air blast in a combustion reaction taking place near the bottom of the furnace immediately in front of the tuyeres:

2C + O2 → 2CO

The heat generated by the reaction is carried upward by the rising gases and transferred to the descending charge. The CO in the gas then reacts with iron oxide in the stack, producing metallic iron and CO2:

Fe2O3 + 3CO → 2Fe + 3CO2

Not all the oxygen originally present in the ore is removed like this; some remaining oxide reacts directly with carbon at the higher temperatures encountered in the bosh:

FeO + C → Fe + CO
Softening and melting of the ore takes place here, droplets of metal and slag forming and trickling down through a layer of coke to collect on the hearth.
The conditions that cause the chemical reduction of iron oxides to occur also affect other oxides. All the phosphorus pentoxide (P2O5) and some of the silica and manganous oxide (MnO) are reduced, while phosphorus, silicon, and manganese all dissolve in the hot metal together with some carbon from the coke.
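To make the reduction chemistry concrete, the sketch below works out the stoichiometry of the indirect reduction reaction Fe2O3 + 3CO → 2Fe + 3CO2 per tonne of iron. It is an illustration of the stoichiometry only; an actual furnace consumes far more carbon, since coke also supplies heat and direct reduction, and the gas is never fully utilized.

```python
# Stoichiometry of the indirect reduction step: Fe2O3 + 3 CO -> 2 Fe + 3 CO2.
# Illustration only -- real blast-furnace coke rates are higher because much of the
# coke provides heat and direct reduction, and the top gas still contains CO.
M_FE, M_O, M_C = 55.85, 16.00, 12.01            # molar masses, g/mol
M_FE2O3 = 2 * M_FE + 3 * M_O                    # about 159.7 g/mol
M_CO = M_C + M_O                                # about 28.0 g/mol

iron_out_kg = 1_000.0                           # one tonne of metallic iron
moles_fe = iron_out_kg * 1_000 / M_FE           # mol of Fe produced
ore_kg = (moles_fe / 2) * M_FE2O3 / 1_000       # Fe2O3 consumed, kg
co_kg = (moles_fe * 3 / 2) * M_CO / 1_000       # CO consumed, kg
carbon_kg = (moles_fe * 3 / 2) * M_C / 1_000    # carbon tied up in that CO, kg

print(f"Per tonne Fe: {ore_kg:.0f} kg Fe2O3, {co_kg:.0f} kg CO, "
      f"{carbon_kg:.0f} kg carbon as CO")
# -> roughly 1,430 kg Fe2O3, 750 kg CO, and 320 kg carbon per tonne of iron
```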
Direct reduction (DR)
This is any process in which iron is extracted from ore at a temperature below the melting points of the materials involved. Gangue remains in the spongelike product, known as direct-reduced iron, or DRI, and must be removed in a subsequent steelmaking process. Only high-grade ores and pellets made from superconcentrates (66 percent iron) are therefore really suitable for DR iron making.
Direct reduction is used mostly in special circumstances, often linked to cheap supplies of natural gas. Several processes are based on the use of a slightly inclined rotating kiln to which ore, coal, and recycled material are charged at the upper end, with heat supplied by an oil or gas burner. Results are modest, however, compared to gas-based processes, many of which are conducted in shaft furnaces. In the most successful of these, known as the Midrex (after its developer, a division of the Midland-Ross Corporation), a gas reformer converts methane (CH4) to a mixture of carbon monoxide and hydrogen (H2) and feeds these gases to the top half of a small shaft furnace. There descending pellets are chemically reduced at a temperature of 850° C (1,550° F). The metallized charge is cooled in the bottom half of the shaft before being discharged.
Smelting reduction
The scarcity of coking coals for blast-furnace use and the high cost of coke ovens are two reasons for the emergence of this other alternative iron-making process. Smelting reduction employs two units: in the first, iron ore is heated and reduced by gases exiting from the second unit, which is a smelter-gasifier supplied with coal and oxygen. The partially reduced ore is then smelted in the second unit, and liquid iron is produced. Smelting-reduction technology enables a wide range of coals to be used for iron making.
The metal:
Hot metal (blast-furnace iron)
Most blast furnaces are linked to a basic oxygen steel plant, for which the hot metal typically contains 4 to 4.5 percent carbon, 0.6 to 0.8 percent silicon, 0.03 percent sulfur, 0.7 to 0.8 percent manganese, and 0.15 percent phosphorus. Tapping temperatures are in the range 1,400° to 1,500° C (2,550° to 2,700° F); to save energy, the hot metal is transferred directly to the steel plant with a temperature loss of about 100° C (180° F).
The major determinants of the composition of basic iron are the hearth temperature and the choice of iron ores. For instance, carbon content is fixed both by the temperature and by the amounts of other elements present in the iron. Sulfur and silicon are both temperature-dependent and generally vary in opposite directions, a high temperature producing low sulfur and high silicon levels. Furnace size also influences silicon, so that large furnaces yield low-silicon iron. Phosphorus, on the other hand, is determined entirely by the amount present in the original charge. Like silica, manganous oxide is partially reduced by carbon, and its final concentration depends on the hearth temperature and slag composition.
Cast iron
Cast-iron production is relatively unsophisticated. It mostly involves remelting charges consisting of pig iron, steel scrap, foundry scrap, and ferroalloys to give the appropriate composition. The cupola, which resembles a small blast furnace, is the most common melting unit. Cold pig iron and scrap are charged from the top onto a bed of hot coke through which air is blown. Alternatively, a metallic charge is melted in a coreless induction furnace or in a small electric-arc furnace.
There are two basic types of cast iron—namely, white and gray.
White iron
White cast irons are usually made by limiting the silicon content to a maximum of 1.3 percent, so that no graphite is present and all of the carbon exists as cementite (Fe3C). The name white refers to the bright appearance of the fracture surfaces when a piece of the iron is broken in two. White irons are too hard to be machined and must be ground to shape. Brittleness limits their range of applications, but they are sometimes used when wear resistance is required, as in brake linings.
The main use for white irons is as the starting material for malleable cast irons, in which the cementite formed during casting is decomposed by heat treatment. Such irons contain about 0.6 to 1.3 percent silicon, which is enough to promote cementite decomposition during the heat treatment but not enough to produce graphite flakes during casting. Whiteheart malleable iron is made by using an oxidizing atmosphere to remove carbon from the surface of white iron castings heated to a temperature of 900° C (1,650° F). Blackheart malleable iron, on the other hand, is made by annealing white iron in a neutral atmosphere, again at a temperature of 900° C. In this process, cementite is decomposed to form rosette-shaped graphite nodules, which are less embrittling than flakes. Blackheart iron is an important material that is widely used in agricultural and engineering machinery. Even better mechanical properties can be obtained by the addition of small amounts of magnesium or cerium to molten iron, since these elements have the effect of transforming the graphite into spherical nodules. These SG (spheroidal graphite) irons, which are also called ductile irons, are strong and malleable; they are also easy to cast and are sometimes preferred to steel castings and forgings.
Gray iron
Gray cast irons generally contain more than 2 percent silicon, and carbon exists as flakes of graphite embedded in a combination of ferrite and pearlite. The name arises because graphite imparts a dull gray appearance to fracture surfaces. Phosphorus is present in most cast irons, lowering the freezing point and lengthening the solidification period so that gray irons can be cast into intricate shapes. Unfortunately, graphite formation is enhanced by slow solidification, and the crack-inducing effect of graphite flakes reduces the metal’s strength and malleability. Gray cast irons are therefore unsuitable when shock resistance is required, but they are ideal for such purposes as engine cylinder blocks, domestic stoves, and manhole covers. They are easy to machine because the graphite causes the metal to break off in small chips, and they also have a high damping capacity (i.e., they are able to absorb vibration). As a result, gray cast irons are used as frames for rotating machinery such as lathes.
High-alloy iron
The properties of both white and gray cast irons can be enhanced by the inclusion of alloying elements such as nickel (Ni), chromium (Cr), and molybdenum (Mo). For example, Ni-Hard, a white iron containing 4 to 5 percent nickel and up to 1.5 percent chromium, is used to make metalworking rolls. Irons in the Ni-Resist range, which contain 14 to 25 percent nickel, are nonmagnetic and have good heat and corrosion resistance.
Casting methods
Iron castings can be made in many ways, but sand-casting is the most common. First, a pattern of the required shape (slightly enlarged to allow for shrinkage) is made in wood, metal, or plastic. It is then placed in a two-piece molding box and firmly packed in sand that is held together by a bonding agent. After the sand has hardened, the molding box is split open to allow the pattern to be removed and used again, and then the box is reassembled and molten metal poured into the cavity to create the casting.
A greensand casting is made in a sand mold bonded with clay, the name referring not to the colour of the sand but to the fact that the mold is uncured. Dry-sand molds are similar, except that the sand is baked before receiving any metal. Alternatively, hardening can be effected by mixing sodium silicate into the sand to create chemical bonds that make baking unnecessary. For heavy castings, molds made of coarse loam sand backed up with brick and faced with highly refractory material are used.
Sand-casting produces rough surfaces, and a much better finish can be achieved by shell molding. This process involves bringing a mixture of sand and a thermosetting resin into contact with a heated metal pattern to form an envelope or shell of hardened sand. Two half-shells are then assembled to make a mold. Wax patterns also can be used to make one-piece shell molds, the wax being removed by melting before the resin is cured in an oven.
For some high-precision applications, iron is cast into permanent molds made of either cast iron or graphite. It is important, however, to ensure that the molds are warmed before use and that their internal surfaces are given a coating to release the casting after solidification.
Most castings are static in that they rely on gravity to cause the liquid metal to fill the mold. Centrifugal casting, however, uses a rotating mold to produce hollow cylindrical castings, such as cast-iron drainpipes.
Wrought iron
Although it is no longer manufactured, the wrought iron that survives contains less than 0.035 percent carbon. It therefore consists essentially of ferrite, but its strength and malleability are reduced by entrained puddling slag, which is elongated into stringers by rolling. As a result, breaking a bar of wrought iron reveals a fibrous fracture not unlike that of wood. The other elements present are silicon (0.075 to 0.15 percent), sulfur (0.01 to 0.2 percent), phosphorus (0.1 to 0.25 percent), and manganese (0.05 to 0.1 percent). This relative purity is the reason why wrought iron has a reputation for good corrosion resistance.
Iron powder
Iron powders produced by crushing and grinding or by atomizing a stream of molten metal are made into small components by pressing or rolling them into compacts, which are then sintered. The density of the compacts depends on the pressure used, but porous compacts suitable for self-lubricating bearings or filters can be given accurate dimensions by using this technique.
Chemical compounds
Apart from being a source of iron, hematite is used for its reddish colour in cosmetics and as a pigment in paints and roof tiles. Also, when cobalt and nickel oxides are added to hematite, a group of ceramic materials closely related to magnetite, known as ferrites, is formed. These are ferromagnetic (i.e., highly magnetic) and are widely used in computers and in electronic transmission and receiving equipment.
Iron is a constituent of human blood, and various iron compounds have medical uses. Ferric ammonium citrate is an appetite stimulator, and ferrous gluconate, ferrous sulfate, and ferric pyrophosphate are among compounds used to treat anemia. Ferric salts act as coagulants and are applied to wounds to promote healing.
Iron compounds are also widely used in agriculture. For example, ferrous sulfate is applied as a spray to acid-loving plants, and other compounds are used as fungicides.
Additional Information
Iron ore deposits are found in sedimentary rocks. They were formed by chemical reactions between iron and oxygen in marine and fresh water. The important iron oxides in these deposits are hematite and magnetite. These are the ores from which iron is extracted.
Iron ore formation
Iron ore formation began over 1.8 billion years ago, when the oceans contained abundant dissolved iron but lacked the oxygen needed to form hematite and magnetite. That oxygen was supplied when the first organisms capable of photosynthesis began releasing it into the water. The oxygen combined with the dissolved iron to form hematite and magnetite, which were deposited abundantly on the ocean floor in layers now known as banded iron formations.
Sources of iron ore
Metallic iron is virtually unknown on the Earth's surface except as iron-nickel alloys from meteorites and very rare forms of deep-mantle xenoliths. Although iron is the fourth most abundant element in the Earth's crust, making up about 5%, the vast majority is bound in silicate or, more rarely, carbonate minerals. The thermodynamic barriers to separating pure iron from these minerals are formidable and energy-intensive, so all sources of iron used by human industry exploit the comparatively rarer iron oxide minerals, primarily hematite.
Before the industrial revolution, most iron was obtained from widely available goethite or bog ore, for instance during the American Revolution and the Napoleonic Wars. Prehistoric societies used laterite as a source of iron ore. Historically, much of the iron ore used by industrialized societies has been mined from predominantly hematite deposits with grades of around 70% Fe. These deposits are commonly referred to as "direct shipping ores" or "natural ores". Increasing iron ore demand, coupled with the depletion of high-grade hematite ores in the United States, led after World War II to the development of lower-grade iron ore sources, principally magnetite and taconite.
Iron ore mining methods vary by the type of ore being mined. Four main types of iron ore deposits are worked at present, depending on the mineralogy and geology of the deposits: magnetite, titanomagnetite, massive hematite, and pisolitic ironstone.
Banded iron formations
Banded iron formations (BIFs) are sedimentary rocks containing over 15% iron, composed predominantly of thinly bedded iron minerals and silica (as quartz). They occur exclusively in Precambrian rocks and are commonly weakly to strongly metamorphosed. Banded iron formations may carry iron in carbonates (siderite or ankerite) or silicates (minnesotaite, greenalite, or grunerite), but in those mined as iron ores, oxides (magnetite or hematite) are the principal iron minerals. In North America, banded iron formations are known as taconite.
Mining involves moving enormous amounts of ore and waste. The waste comes in two forms: non-ore bedrock in the mine (overburden or interburden, locally known as mullock), and unwanted minerals that are an intrinsic part of the ore rock itself (gangue). The mullock is mined and piled in waste dumps, while the gangue is separated during the beneficiation process and removed as tailings. Taconite tailings are mostly the mineral quartz, which is chemically inert. This material is stored in large, regulated settling ponds.
Magnetite ores
The key economic parameters for magnetite ore are the crystallinity of the magnetite, the grade of iron within the banded iron formation host rock, and the contaminant elements present in the magnetite concentrate. The size and stripping ratio of most magnetite resources is immaterial, as a banded iron formation can be many metres thick, extend several kilometres along strike, and can easily amount to more than three billion tonnes of contained ore.
The typical grade of iron at which a magnetite-bearing banded iron formation becomes economic is roughly 25% iron, which can generally yield a 33% to 40% recovery of magnetite by weight, producing a concentrate grading in excess of 64% iron by weight. A typical magnetite iron ore concentrate has less than 0.1% phosphorus, 3–7% silica, and less than 3% aluminium.
Currently magnetite iron ore is mined in Minnesota and Michigan in the U.S., eastern Canada, and northern Sweden. Magnetite-bearing banded iron formation is mined extensively in Brazil, which exports significant quantities to Asia, and there is a nascent and large magnetite iron ore industry in Australia.
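Using the round figures quoted above (a feed of about 25% iron upgraded to a concentrate of more than 64% iron, with a mass yield of 33–40%), the sketch below shows the simple iron balance behind those numbers; the iron recoveries tried here are assumptions made for illustration.

```python
def mass_yield(feed_fe_pct: float, conc_fe_pct: float, fe_recovery_pct: float) -> float:
    """Concentrate mass as a percentage of feed mass, from a simple iron balance."""
    return feed_fe_pct * fe_recovery_pct / conc_fe_pct

# Feed and concentrate grades quoted above; the iron recoveries are illustrative assumptions.
for recovery in (85, 90, 95):
    y = mass_yield(feed_fe_pct=25, conc_fe_pct=64, fe_recovery_pct=recovery)
    print(f"{recovery}% Fe recovery -> concentrate is ~{y:.0f}% of feed mass")
# -> roughly 33-37% of the feed mass, consistent with the 33-40% yield quoted above.
```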
Magmatic magnetite ore deposits
Occasionally granite and ultrapotassic igneous rocks segregate magnetite crystals and form masses of magnetite suitable for economic concentration. A few iron ore deposits, notably in Chile, are formed from volcanic flows containing significant accumulations of magnetite phenocrysts. Chilean magnetite iron ore deposits within the Atacama Desert have also formed alluvial accumulations of magnetite in streams leading from these volcanic formations.
Some magnetite skarn and hydrothermal deposits have been worked in the past as high-grade iron ore deposits requiring little beneficiation. There are several granite-associated deposits of this nature in Malaysia and Indonesia.
Other sources of magnetite iron ore include metamorphic accumulations of massive magnetite ore such as at Savage River, Tasmania, formed by shearing of ophiolite ultramafics.
Another, minor, source of iron ores are magmatic accumulations in layered intrusions which contain a typically titanium-bearing magnetite often with vanadium. These ores form a niche market, with specialty smelters used to recover the iron, titanium and vanadium. These ores are beneficiated essentially similar to banded iron formation ores, but usually are more easily upgraded via crushing and screening. The typical titanomagnetite concentrate grades 57% Fe, 12% Ti and 0.5% V2O5.
Beneficiation of iron ore
Lower-grade sources of iron ore generally require beneficiation, using techniques like crushing, milling, gravity or heavy media separation, screening, and silica froth flotation to improve the concentration of the ore and remove impurities. The results, high quality fine ore powders, are known as fines.
Magnetite
Magnetite is magnetic, and hence easily separated from the gangue minerals, and it is capable of producing a high-grade concentrate with low levels of impurities.
The grain size of the magnetite and its degree of intergrowth with the silica groundmass determine the grind size to which the rock must be comminuted to enable efficient magnetic separation and yield a high-purity magnetite concentrate. This in turn determines the energy inputs required to run a milling operation.
Mining of banded iron formations involves coarse crushing and screening, followed by rough and then fine grinding to comminute the ore to the point where the crystallized magnetite and quartz are fine enough that the quartz is left behind when the resulting powder is passed under a magnetic separator.
In general, most magnetite banded iron formation deposits must be ground to between 32 and 45 micrometres in order to produce a low-silica magnetite concentrate. Magnetite concentrate grades are generally in excess of 70% iron by weight, are usually low in phosphorus, aluminium, titanium, and silica, and command a premium price.
Hematite
Because of the high density of hematite relative to the associated silicate gangue, hematite beneficiation usually involves a combination of beneficiation techniques.
One method relies on passing the finely crushed ore over a slurry containing magnetite or another agent, such as ferrosilicon, which increases its density. When the density of the slurry is properly calibrated, the hematite sinks while the silicate mineral fragments float and can be removed.
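A minimal sketch of the sink-float logic just described: particles denser than the medium sink, lighter ones float. The mineral densities used are standard handbook values (hematite about 5.3 g/cm³, quartz about 2.65 g/cm³); the medium density of 3.0 g/cm³ is an assumed setting for the example.

```python
# Sink-float (dense-medium) separation: the slurry density is tuned so that hematite
# sinks while the silicate gangue floats, as described above.
MEDIUM_DENSITY = 3.0  # g/cm3, assumed ferrosilicon/magnetite slurry setting

MINERAL_DENSITY = {   # typical handbook densities, g/cm3
    "hematite": 5.3,
    "quartz": 2.65,
}

def separate(mineral: str) -> str:
    """Report whether a mineral sinks or floats in the chosen medium."""
    if MINERAL_DENSITY[mineral] > MEDIUM_DENSITY:
        return "sinks (reports to the iron concentrate)"
    return "floats (rejected with the gangue)"

for mineral in MINERAL_DENSITY:
    print(f"{mineral}: {separate(mineral)}")
```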
Uses
The primary use of iron ore is in the production of iron. Most of the iron produced is then used to make steel. Steel is used to make automobiles, locomotives, ships, beams used in buildings, furniture, paper clips, tools, reinforcing rods for concrete, bicycles, and thousands of other items. It is the most-used metal by both tonnage and purpose.
2284) Darien Gap
Gist
The Darién Gap, a roadless, 60-mile stretch of rainforest straddling the Colombia-Panama border, was named for being the only break in the Pan-American Highway, a 19,000-mile-long network of roads that otherwise runs uninterrupted from Alaska to the southern tip of Argentina.
Summary
The Darién Gap is an approximately 60-mile (100-km) break in the Pan-American Highway, which otherwise runs continuously from Alaska to the southern tip of Argentina. The gap is located in the thickly vegetated marshland and jungle of the Darién region, which spans the easternmost part of Panama and northwestern Colombia. The Darién Gap is infamously difficult to traverse, but, as the only overland route connecting North and South America, it has become a crucial path for migrants traveling northward to the United States and Canada in the 21st century.
History
The region of Darién, situated largely on the isthmus of Panama, lies between the Caribbean Sea and the Pacific Ocean and at the crossroads of two continents. This seemingly ideal location drew a number of European settlers during the colonial period. Indeed, one of the first European settlements on the American mainland was the Spanish city of Santa María la Antigua del Darién, founded in 1510 on the western side of the Gulf of Urabá. A few years later some colonists left the Darién settlement to found Panama City; eventually, Santa María was abandoned. Another short-lived attempt at colonization was made in the 17th century, when a Scottish trading company founded a settlement about halfway between Portobelo, Panama, and Cartagena, Colombia. The area’s hot humid weather and geography typified by tropical rainforests, mangrove swamps, and low mountain ranges perhaps hampered other settlements, and in subsequent centuries Darién remained sparsely populated. A few Indigenous peoples, including the Chocó (specifically the Embera and Wounaan, or Waunana) and the Kuna (Cuna), have traditionally lived in villages scattered throughout the forest. Estimates of their combined local populations range widely, from 1,200 to some 25,000.
Gap in the Pan-American Highway
Discussions of constructing a single route to connect North America and South America began in the 1920s, and by 1937, 14 countries, including the United States, had signed the Convention on the Pan-American Highway, whereby they agreed to achieve speedy construction of their respective sections of the highway. By the 1970s most of the highway had been constructed except for the crucial region of Darién. By this time environmentalists and Indigenous peoples had raised concerns about the destruction the highway would cause to the area's sensitive rainforests and marshlands as well as to the Indigenous peoples' ways of living. The last section of the highway project was ultimately suspended for a number of reasons, including the prevention of the spread of foot-and-mouth disease. The highly contagious viral disease that affects most cloven-footed domesticated mammals, including cattle, sheep, goats, and pigs, was then infecting large numbers of livestock in South America. The suspension was lifted in 1992, but opposition to the construction of the highway remained among environmentalists and Indigenous peoples. There were also concerns about facilitating drug trafficking and illegal immigration if the road were continued.
21st-century migration
By the 21st century, notwithstanding the lack of infrastructure in the Darién Gap, drug traffickers and armed guerrilla groups had entrenched themselves in the region, benefiting from the area’s remoteness and ineffective security. In addition, the numbers of migrants crossing the Darién Gap began to increase in the mid-2010s and accelerated in the 2020s. In the early 2010s Panamanian officials recorded an average of 2,400 crossings per year. That figure rose between 2015 and 2021 to approximately 30,000 per year and intensified in 2021 to more than 130,000. In 2023 more than 500,000 individuals made the crossing. Despite the area’s geographic dangers and the threat from criminals, most migrants were crossing the Gap as part of a northward journey to North America. The largest groups were fleeing violence, economic collapse, and political instability in Venezuela, Ecuador, and Haiti. Changes to immigration policies in Central America had driven many of these refugees, who had been unable to obtain legal entry, to take the more hazardous journey across the Darién Gap. This was seen in the data as an exponential rise in Venezuelans making the crossing in 2022, the same year Mexico began to impose new visa requirements for those arriving from Venezuela.
Details
The Darién Gap is a geographic region that connects the American continents, stretching across southern Panama's Darién Province and the northern portion of Colombia's Chocó Department. Consisting of a large watershed, dense rainforest, and mountains, it is known for its remoteness, difficult terrain, and extreme environment, with a reputation as one of the most inhospitable regions in the world. Nevertheless, as the only land bridge between North and South America, the Darién Gap has historically served as a major route for both humans and wildlife.
The geography of the Darién Gap is highly diverse. The Colombian side is dominated primarily by the river delta of the Atrato River, which creates a flat marshland at least 80 km (50 mi) wide. The Tanela River, which flows toward Atrato, was Hispanicized to Darién by 16th Century European conquistadors. The Serranía del Baudó mountain range extends along Colombia's Pacific coast and into Panama. The Panamanian side, in stark contrast, is a mountainous rainforest, with terrain reaching from 60 m (197 ft) in the valley floors to 1,845 m (6,053 ft) at the tallest peak, Cerro Tacarcuna, in the Serranía del Darién.
The Darién Gap is inhabited mostly by the indigenous Embera-Wounaan and Guna peoples; in 1995, it had a reported population of 8,000 among five tribes. The only sizable settlement in the region is La Palma, the capital of Darién Province, with roughly 4,200 residents; other population centers include Yaviza and El Real, both on the Panamanian side.
Owing to its isolation and harsh geography, the Darién Gap is largely undeveloped, with most economic activity consisting of small-scale farming, cattle ranching, and lumber. Criminal enterprises such as human and drug trafficking are widespread. There is no road, not even a primitive one, across the Darién: Colombia and Panama are the only countries in the Americas that share a land border but lack even a rudimentary link. The "Gap" interrupts the Pan-American Highway, which breaks at Yaviza, Panama and resumes at Turbo, Colombia roughly 106 km (66 mi) away. Infrastructure development has long been constrained by logistical challenges, financial costs, and environmental concerns; attempts failed in the 1970s and 1990s. As of 2024, there is no active plan to build a road through the Gap, although there is discussion of reestablishing a ferry service and building a rail link.
Consequently, travel within and across the Darién Gap is often conducted with small boats or traditional watercraft such as pirogues. Otherwise, hiking is the only remaining option, and it is strenuous and dangerous. Aside from natural threats such as deadly wildlife, tropical diseases, and frequent heavy rains and flash floods, law enforcement and medical support are nonexistent, resulting in rampant violent crime and causing otherwise minor injuries to become fatal.
Despite its perilous conditions, since the 2010s, the Darién Gap has become one of the heaviest migration routes in the world, with hundreds of thousands of migrants, primarily Haitians and Venezuelans, traversing north to the Mexico–United States border. In 2022, there were 250,000 crossings, compared to only 24,000 in 2019. In 2023, more than 520,000 passed through the gap, more than doubling the previous year's number of crossings.
Pan-American Highway
The Pan-American Highway is a system of roads measuring about 30,000 km (19,000 mi) in length that runs north–south through the entirety of North, Central and South America, with the sole exception of a 106 km (66 mi) stretch of marshland and mountains between Panama and Colombia known as the Darién Gap. On the South American side, the Highway terminates at Turbo, Colombia, near 8°6′N 76°40′W. On the Panamanian side, the road terminus, for many years in Chepo, Panama Province, is since 2010 in the town of Yaviza at 8°9′N 77°41′W.
Many people, including local indigenous populations, groups and governments are opposed to completing the Darién portion of the highway. Reasons for opposition include protecting the rainforest, containing the spread of tropical diseases, protecting the livelihood of indigenous peoples in the area, preventing drug trafficking and its associated violence, and preventing foot-and-mouth disease from entering North America. The extension of the highway as far as Yaviza resulted in severe deforestation alongside the highway route within a decade.
Efforts were made for decades to fill this sole gap in the Pan-American Highway. Planning began in 1971 with the help of American funding, but was halted in 1974 after concerns were raised by environmentalists. US support was further blocked by the US Department of Agriculture in 1978, from its desire to stop the spread of foot-and-mouth disease. Another effort to build the road began in 1992, but, by 1994 a United Nations agency reported that the road, and the subsequent development, would cause extensive environmental damage. Cited reasons include evidence that the Darién Gap has prevented the spread of diseased cattle into Central and North America, which have not seen foot-and-mouth disease since 1954, and, since at least the 1970s, this has been a substantial factor in preventing a road link through the Darién Gap. The Embera-Wounaan and Guna are among five tribes, comprising 8,000 people, who have expressed concern that the road would bring about the potential erosion of their cultures by destroying their food sources.
An alternative to the Darién Gap highway would be a river ferry service between Turbo or Necoclí, Colombia and one of several sites along Panama's Caribbean coast. Ferry services such as Crucero Express and Ferry Xpress operated to link the gap, but closed because the service was not profitable. As of 2023, nothing has come of this idea.
Another idea is to use a combination of bridges and tunnels to avoid the environmentally sensitive regions.
Migrants traveling northward
Venezuelan migrants being processed in Ecuador in preparation to make the long journey north to New York City, including crossing the Darién Gap
While the Darién Gap has long been considered essentially impassable, in the 21st century thousands of migrants, primarily Haitian during the 2010s and Venezuelan during the 2020s, crossed the Darién Gap to reach the United States. By 2021 the annual number of crossings was more than 130,000; it rose to 520,000 in 2023 before dropping to 300,000 in 2024. The trek, which once took about a week, has become more organized and now takes roughly two and a half days. Of the 334,000 migrants who crossed over the first eight months of 2023, 60% were Venezuelan, motivating the Biden administration to provide foreign assistance to help Panama deport migrants.
The hike, which involves crossing rivers that flood frequently, is unpleasant, demanding, and dangerous, with robbery and assault common, and there are numerous fatalities. In 2024 there were 55 known deaths, and probably more; 180 unaccompanied minors were left in the care of child-welfare institutions, some because their relatives had died or become lost along the way, others because they were travelling alone.
By 2013, the coastal route on the east side of the Darién Isthmus became relatively safe, by taking a motorboat across the Gulf of Uraba from Turbo to Capurganá and then hopping the coast to Sapzurro and hiking from there to La Miel, Panama. All inland routes through the Darién remain highly dangerous. In June 2017, CBS journalist Adam Yamaguchi filmed smugglers leading refugees on a nine-day journey from Colombia to Panama through the Darién.
People from Africa, South Asia, the Middle East, the Caribbean, and China have been known to cross the Darién Gap as a method of migrating to the United States. This route may entail flying to Ecuador to take advantage of its liberal visa policy, and attempting to cross the gap on foot. Journalist Jason Motlagh was interviewed by Sacha Pfeiffer on NPR's nationally syndicated radio show On Point in 2016 concerning his work following migrants through the Darién Gap. Journalists Nadja Drost and Bruno Federico were interviewed by Nick Schifrin about their work following migrants through the Darién Gap in mid-2019, and the effects of the COVID-19 pandemic a year later, as part of a series on migration to the United States for PBS NewsHour.
In 2023, people fleeing China travelled to Ecuador, then to Necoclí in Colombia, with the intention of crossing the Darién Gap on foot. The number of Chinese people crossing the Darién Gap increased with each passing month in 2023.
In August 2024, journalist Caitlin Dickinson reported on immigration through the Darién Gap for The Atlantic.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2285) Fashion Design
Gist
Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends and has varied over time and place.
Summary
Fashion design is the art and practice of creating clothing and accessories. It involves the application of design, aesthetics, and natural beauty to garments and their embellishments. Fashion designers work in a variety of ways, including designing ready-to-wear clothing, haute couture collections, and custom pieces for individual clients. The process of fashion design entails several stages, such as concept creation, sketching, fabric selection, pattern making, garment construction, and final presentation. Fashion design not only focuses on the physical creation of garments but also encompasses the cultural and social influences that shape trends and styles.
Importance of Fashion Design in Society
Fashion design holds a significant place in society due to its influence on culture, identity, and economy. It serves as a form of self-expression, allowing individuals to communicate their personality, mood, and social status through their clothing choices. Fashion reflects societal changes, capturing the spirit of the times and often driving social movements.
Economically, the fashion industry is a major global player, generating employment and contributing to economic growth. It includes a wide range of professions, from designers and models to marketers and retailers, impacting various sectors.
Moreover, fashion design plays a role in innovation and sustainability. Designers are increasingly focusing on sustainable practices, such as using eco-friendly materials and ethical production methods, to address environmental and ethical concerns. Fashion also fosters creativity and artistic expression, pushing the boundaries of what is possible and continually evolving to meet the needs and desires of society.
Details
Fashion design is the art of applying design, aesthetics, clothing construction and natural beauty to clothing and its accessories. It is influenced by culture and different trends and has varied over time and place. "A fashion designer creates clothing, including dresses, suits, pants, and skirts, and accessories like shoes and handbags, for consumers. He or she can specialize in clothing, accessory, or jewelry design, or may work in more than one of these areas."
Fashion designers
Fashion designers work in a variety of ways when designing their pieces and accessories such as rings, bracelets, necklaces and earrings. Due to the time required to put a garment out on the market, designers must anticipate changes to consumer desires. Fashion designers are responsible for creating looks for individual garments, involving shape, color, fabric, trimming, and more.
Fashion designers attempt to design clothes that are functional as well as aesthetically pleasing. They consider who is likely to wear a garment and the situations in which it will be worn, and they work with a wide range of materials, colors, patterns, and styles. Though most clothing worn for everyday wear falls within a narrow range of conventional styles, unusual garments are usually sought for special occasions such as evening wear or party dresses.
Some clothes are made specifically for an individual, as in the case of haute couture or bespoke tailoring. Today, most clothing is designed for the mass market, especially casual and everyday wear, which are commonly known as ready to wear or fast fashion.
Structure
There are different lines of work for designers in the fashion industry. Fashion designers who work full-time for a fashion house, as 'in-house designers', own the designs and may either work alone or as part of a design team. Freelance designers who work for themselves sell their designs to fashion houses, directly to shops, or to clothing manufacturers. Quite a few fashion designers choose to set up their own labels, which gives them full control over their designs. Others are self-employed and design for individual clients. Other high-end fashion designers cater to specialty stores or high-end fashion department stores. These designers create original garments, as well as those that follow established fashion trends. Most fashion designers, however, work for apparel manufacturers, creating designs of men's, women's, and children's fashions for the mass market. Large designer brands that have a 'name' as their brand, such as Abercrombie & Fitch, Justice, or Juicy, are likely to be designed by a team of individual designers under the direction of a design director.
Designing a garment
Garment design includes components of "color, texture, space, lines, pattern, silhouette, shape, proportion, balance, emphasis, rhythm, and harmony". All of these elements come together to design a garment by creating visual interest for consumers.
Fashion designers work in various ways: some start with a vision in their head and later move to drawing it on paper or on a computer, while others go directly into draping fabric onto a dress form, also known as a mannequin. The design process is unique to each designer, and the steps involved vary from one designer to the next. Designing a garment starts with patternmaking. The process begins with creating a sloper, or base pattern. The sloper will fit the size of the model a designer is working with, or a base can be made by using standard size charting.
Three major manipulations within patternmaking include dart manipulation, contouring, and added fullness. Dart manipulation allows for a dart to be moved on a garment in various places but does not change the overall fit of the garment. Contouring allows for areas of a garment to fit closer to areas of the torso such as the bust or shoulders. Added fullness increases the length or width of a pattern to change the frame as well as fit of the garment. The fullness can be added on one side, unequal, or equally to the pattern.
A designer may choose to work with certain apps that help connect all their ideas and expand their thoughts into a cohesive design. When a designer is completely satisfied with the fit of the toile (or muslin), they will consult a professional pattern maker, who then creates the finished, working version of the pattern out of paper or using a computer program. Finally, a sample garment is made up and tested on a model to make sure it is an operational outfit. Fashion design is expressive; designers create art that may be functional or non-functional.
Technology within fashion
Technology is "the study and knowledge of the practical, especially industrial, use of scientific discoveries". Technology within fashion has broadened the industry and allowed for faster production processes.
Over the years, there has been an increase in the use of technology within fashion design as it offers new platforms for creativity. Technology is constantly changing, and there have been many innovations within the industry. 3D printing allows for a wider range of personalized products and greater originality. Iris van Herpen, a Dutch designer, showcased the incorporation of 3D printing when her Crystallization collection used the technique for the first time on a runway. The innovation has reshaped the fashion industry and opened a new area of creativity.
Apps and software have increasingly changed how designers can use technology to create. Adobe Creative Cloud, specifically Photoshop and Illustrator, is a new means of communication for designers and allows ideas to flow. Designers are provided with a space to also create more professional and industry standard specifications such as technical flats and tech packs.
Software such as Browzwear, Clo3D, and Optitex aids designers in the product development stage. Virtual reality has given designers a new way to prototype clothing and to see designs before they are physically made. This eliminates the need for a live model and fittings, which shortens the production process. 3D modeling within software allows for initial sampling and development stages for partnerships with suppliers before the garments are produced. Mock-ups of designs in 3D modeling allow problems to be solved before a final sample is made and sent to a manufacturer.
Technology can also be used and aid within the material of a garment. Material innovation creates a new way for fibers to be re-imagined or for new materials to be constructed. This overall aids in functional and aesthetic purposes for the designer. The material technology has been used with brands such as Werewool and Bananatex. These brands innovate the way designers can construct their garments and provide new materials to be used.
Types of fashion
Garments produced by clothing manufacturers fall into three main categories, although these may be split up into additional, different types.
Haute couture
Until the 1950s, fashion clothing was predominately designed and manufactured on a made-to-measure or haute couture basis (French for high-sewing), with each garment being created for a specific client. A couture garment is made to order for an individual customer, and is usually made from high-quality, expensive fabric, sewn with extreme attention to detail and finish, often using time-consuming, hand-executed techniques. Look and fit take priority over the cost of materials and the time it takes to make. Due to the high cost of each garment, haute couture makes little direct profit for the fashion houses, but is important for prestige and publicity.
Ready-to-wear
Ready-to-wear, or prêt-à-porter, clothes are a cross between haute couture and mass market. They are not made for individual customers, but great care is taken in the choice and cut of the fabric. Clothes are made in small quantities to guarantee exclusivity, so they are rather expensive. Ready-to-wear collections are usually presented by fashion houses each season during a period known as Fashion Week. This takes place on a citywide basis and occurs twice a year. The main seasons of Fashion Week include spring/summer, fall/winter, resort, swim, and bridal.
Half-way garments are an alternative to ready-to-wear, "off-the-peg", or prêt-à-porter fashion. Half-way garments are intentionally unfinished pieces of clothing that encourage co-design between the "primary designer" of the garment, and what would usually be considered, the passive "consumer". This differs from ready-to-wear fashion, as the consumer is able to participate in the process of making and co-designing their clothing. During the Make{able} workshop, Hirscher and Niinimaki found that personal involvement in the garment-making process created a meaningful "narrative" for the user, which established a person-product attachment and increased the sentimental value of the final product.
Otto von Busch also explores half-way garments and fashion co-design in his thesis, "Fashion-able, Hacktivism and engaged Fashion Design".
Mass market
Currently, the fashion industry relies more on mass-market sales. The mass market caters for a wide range of customers, producing ready-to-wear garments using trends set by the famous names in fashion. They often wait around a season to make sure a style is going to catch on before producing their versions of the original look. To save money and time, they use cheaper fabrics and simpler production techniques which can easily be done by machines. The end product can, therefore, be sold much more cheaply.
There is a type of design called "kitsch", which originates from the German word kitschig, meaning "trashy" or "not aesthetically pleasing". Kitsch can also refer to "wearing or displaying something that is therefore no longer in fashion".
Income
The median annual wage for salaried fashion designers was $79,290 in May 2023, approximately $38.12 per hour. The middle 50 percent earned an average of $76,700. The lowest 10 percent earned $37,090 and the highest 10 percent earned $160,850. The industry with the highest employment of fashion designers is Apparel, Piece Goods, and Notions Merchant Wholesalers, with roughly 7,820 designers, about 5.4 percent of that industry's employment. The lowest is Apparel Knitting Mills, where fashion designers make up about 0.46 percent of the industry's employment, roughly 30 workers in the specialty. In 2016, 23,800 people were counted as fashion designers in the United States.
Geographically, the state with the largest employment of fashion designers is New York, with about 7,930 employed. New York is considered a hub for fashion designers because a large percentage of luxury designers and brands are based there.
Fashion industry
Fashion today is a global industry, and most major countries have a fashion industry. Seven countries have established an international reputation in fashion: the United States, France, Italy, United Kingdom, Japan, Germany and Belgium. The "big four" fashion capitals of the fashion industry are New York City, Paris, Milan, and London.
Fashion design terms
* A fashion designer conceives garment combinations of line, proportion, color, and texture. While sewing and pattern-making skills are beneficial, they are not a pre-requisite of successful fashion design. Most fashion designers are formally trained or apprenticed.
* A technical designer works with the design team and the factories overseas to ensure correct garment construction, appropriate fabric choices and a good fit. The technical designer fits the garment samples on a fit model, and decides which fit and construction changes to make before mass-producing the garment.
* A pattern maker (also referred to as pattern master or pattern cutter) drafts the shapes and sizes of a garment's pieces. This may be done manually with paper and measuring tools or by using a CAD computer software program. Another method is to drape fabric directly onto a dress form. The resulting pattern pieces can be constructed to produce the intended design of the garment and required size. Formal training is usually required for working as a pattern maker.
* A tailor makes custom-designed garments to the client's measure; especially suits (coat and trousers, jacket and skirt, et cetera). Tailors usually undergo an apprenticeship or other formal training.
* A textile designer designs fabric weaves and prints for clothes and furnishings. Most textile designers are formally trained as apprentices and in school.
* A stylist co-ordinates the clothes, jewelry, and accessories used in fashion photography and catwalk presentations. A stylist may also work with an individual client to design a coordinated wardrobe of garments. Many stylists are trained in fashion design, the history of fashion, and historical costume, and have a high level of expertise in the current fashion market and future market trends. However, some simply have a strong aesthetic sense for pulling great looks together.
* A fashion buyer selects and buys the mix of clothing available in retail shops, department stores, and chain stores. Most fashion buyers are trained in business and/or fashion studies.
* A seamstress sews ready-to-wear or mass-produced clothing by hand or with a sewing machine, either in a garment shop or as a sewing machine operator in a factory. She (or he) may not have the skills to make (design and cut) the garments, or to fit them on a model.
* A dressmaker specializes in custom-made women's clothes: day, cocktail, and evening dresses, business clothes and suits, trousseaus, sports clothes, and lingerie.
* A fashion forecaster predicts what colours, styles and shapes will be popular ("on-trend") before the garments are on sale in stores.
* A model wears and displays clothes at fashion shows and in photographs.
* A fit model aids the fashion designer by wearing and commenting on the fit of clothes during their design and pre-manufacture. Fit models need to be a particular size for this purpose.
* A fashion journalist writes fashion articles describing the garments presented or fashion trends, for magazines or newspapers.
* A fashion photographer produces photographs about garments and other fashion items along with models and stylists for magazines or advertising agencies.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
2286) Computer Science
Gist
Computer Science is the study of computers and computational systems. Unlike electrical and computer engineers, computer scientists deal mostly with software and software systems; this includes their theory, design, development, and application.
Summary
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software).
Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
Details
Computer science is the study of computers and computing, including their theoretical and algorithmic foundations, hardware and software, and their uses for processing information. The discipline of computer science includes the study of algorithms and data structures, computer and network design, modeling data and information processes, and artificial intelligence. Computer science draws some of its foundations from mathematics and engineering and therefore incorporates techniques from areas such as queueing theory, probability and statistics, and electronic circuit design. Computer science also makes heavy use of hypothesis testing and experimentation during the conceptualization, design, measurement, and refinement of new algorithms, information structures, and computer architectures.
Computer science is considered as part of a family of five separate yet interrelated disciplines: computer engineering, computer science, information systems, information technology, and software engineering. This family has come to be known collectively as the discipline of computing. These five disciplines are interrelated in the sense that computing is their object of study, but they are separate since each has its own research perspective and curricular focus. (Since 1991 the Association for Computing Machinery [ACM], the IEEE Computer Society [IEEE-CS], and the Association for Information Systems [AIS] have collaborated to develop and update the taxonomy of these five interrelated disciplines and the guidelines that educational institutions worldwide use for their undergraduate, graduate, and research programs.)
The major subfields of computer science include the traditional study of computer architecture, programming languages, and software development. However, they also include computational science (the use of algorithmic techniques for modeling scientific data), graphics and visualization, human-computer interaction, databases and information systems, networks, and the social and professional issues that are unique to the practice of computer science. As may be evident, some of these subfields overlap in their activities with other modern fields, such as bioinformatics and computational chemistry. These overlaps are the consequence of a tendency among computer scientists to recognize and act upon their field’s many interdisciplinary connections.
Development of computer science
Computer science emerged as an independent discipline in the early 1960s, although the electronic digital computer that is the object of its study was invented some two decades earlier. The roots of computer science lie primarily in the related fields of mathematics, electrical engineering, physics, and management information systems.
Mathematics is the source of two key concepts in the development of the computer—the idea that all information can be represented as sequences of zeros and ones and the abstract notion of a “stored program.” In the binary number system, numbers are represented by a sequence of the binary digits 0 and 1 in the same way that numbers in the familiar decimal system are represented using the digits 0 through 9. The relative ease with which two states (e.g., high and low voltage) can be realized in electrical and electronic devices led naturally to the binary digit, or bit, becoming the basic unit of data storage and transmission in a computer system.
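To make the binary idea concrete, here is a minimal Python sketch (an illustration added here, not part of the original text) that converts a non-negative decimal integer into its sequence of zeros and ones by repeated division by two; the function name is purely illustrative.

    def to_binary(n):
        # Return the bits of a non-negative integer, most significant bit first.
        if n == 0:
            return "0"
        bits = []
        while n > 0:
            bits.append(str(n % 2))   # the remainder is the next (least significant) bit
            n //= 2
        return "".join(reversed(bits))

    print(to_binary(13))   # prints 1101, i.e. 8 + 4 + 0 + 1 = 13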
Electrical engineering provides the basics of circuit design—namely, the idea that electrical impulses input to a circuit can be combined using Boolean algebra to produce arbitrary outputs. (The Boolean algebra developed in the 19th century supplied a formalism for designing a circuit with binary input values of zeros and ones [false or true, respectively, in the terminology of logic] to yield any desired combination of zeros and ones as output.) The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for the storage and transmission of information, resulted from advances in electrical engineering and physics.
Management information systems, originally called data processing systems, provided early ideas from which various computer science concepts such as sorting, searching, databases, information retrieval, and graphical user interfaces evolved. Large corporations housed computers that stored information that was central to the activities of running a business—payroll, accounting, inventory management, production control, shipping, and receiving.
Theoretical work on computability, which began in the 1930s, provided the needed extension of these advances to the design of whole machines; a milestone was the 1936 specification of the Turing machine (a theoretical computational model that carries out instructions represented as a series of zeros and ones) by the British mathematician Alan Turing and his proof of the model’s computational power. Another breakthrough was the concept of the stored-program computer, usually credited to Hungarian American mathematician John von Neumann. These are the origins of the computer science field that later became known as architecture and organization.
In the 1950s, most computer users worked either in scientific research labs or in large corporations. The former group used computers to help them make complex mathematical calculations (e.g., missile trajectories), while the latter group used computers to manage large amounts of corporate data (e.g., payrolls and inventories). Both groups quickly learned that writing programs in the machine language of zeros and ones was not practical or reliable. This discovery led to the development of assembly language in the early 1950s, which allows programmers to use symbols for instructions (e.g., ADD for addition) and variables (e.g., X). Another program, known as an assembler, translated these symbolic programs into an equivalent binary program whose steps the computer could carry out, or “execute.”
Other system software elements known as linking loaders were developed to combine pieces of assembled code and load them into the computer’s memory, where they could be executed. The concept of linking separate pieces of code was important, since it allowed “libraries” of programs for carrying out common tasks to be reused. This was a first step in the development of the computer science field called software engineering.
Later in the 1950s, assembly language was found to be so cumbersome that the development of high-level languages (closer to natural languages) began to support easier, faster programming. FORTRAN emerged as the main high-level language for scientific programming, while COBOL became the main language for business programming. These languages carried with them the need for different software, called compilers, that translate high-level language programs into machine code. As programming languages became more powerful and abstract, building compilers that create high-quality machine code and that are efficient in terms of execution speed and storage consumption became a challenging computer science problem. The design and implementation of high-level languages is at the heart of the computer science field called programming languages.
Increasing use of computers in the early 1960s provided the impetus for the development of the first operating systems, which consisted of system-resident software that automatically handled input and output and the execution of programs called “jobs.” The demand for better computational techniques led to a resurgence of interest in numerical methods and their analysis, an activity that expanded so widely that it became known as computational science.
The 1970s and ’80s saw the emergence of powerful computer graphics devices, both for scientific modeling and other visual activities. (Computerized graphical devices were introduced in the early 1950s with the display of crude images on paper plots and cathode-ray tube [CRT] screens.) Expensive hardware and the limited availability of software kept the field from growing until the early 1980s, when the computer memory required for bitmap graphics (in which an image is made up of small rectangular pixels) became more affordable. Bitmap technology, together with high-resolution display screens and the development of graphics standards that make software less machine-dependent, has led to the explosive growth of the field. Support for all these activities evolved into the field of computer science known as graphics and visual computing.
Closely related to this field is the design and analysis of systems that interact directly with users who are carrying out various computational tasks. These systems came into wide use during the 1980s and ’90s, when line-edited interactions with users were replaced by graphical user interfaces (GUIs). GUI design, which was pioneered by Xerox and was later picked up by Apple (Macintosh) and finally by Microsoft (Windows), is important because it constitutes what people see and do when they interact with a computing device. The design of appropriate user interfaces for all types of users has evolved into the computer science field known as human-computer interaction (HCI).
The field of computer architecture and organization has also evolved dramatically since the first stored-program computers were developed in the 1950s. So-called time-sharing systems emerged in the 1960s to allow several users to run programs at the same time from different terminals that were hard-wired to the computer. The 1970s saw the development of the first wide-area computer networks (WANs) and protocols for transferring information at high speeds between computers separated by large distances. As these activities evolved, they coalesced into the computer science field called networking and communications. A major accomplishment of this field was the development of the Internet.
The idea that instructions, as well as data, could be stored in a computer’s memory was critical to fundamental discoveries about the theoretical behaviour of algorithms. That is, questions such as, “What can/cannot be computed?” have been formally addressed using these abstract ideas. These discoveries were the origin of the computer science field known as algorithms and complexity. A key part of this field is the study and application of data structures that are appropriate to different applications. Data structures, along with the development of optimal algorithms for inserting, deleting, and locating data in such structures, are a major concern of computer scientists because they are so heavily used in computer software, most notably in compilers, operating systems, file systems, and search engines.
In the 1960s the invention of magnetic disk storage provided rapid access to data located at an arbitrary place on the disk. This invention led not only to more cleverly designed file systems but also to the development of database and information retrieval systems, which later became essential for storing, retrieving, and transmitting large amounts and wide varieties of data across the Internet. This field of computer science is known as information management.
Another long-term goal of computer science research is the creation of computing machines and robotic devices that can carry out tasks that are typically thought of as requiring human intelligence. Such tasks include moving, seeing, hearing, speaking, understanding natural language, thinking, and even exhibiting human emotions. The computer science field of intelligent systems, originally known as artificial intelligence (AI), actually predates the first electronic computers in the 1940s, although the term artificial intelligence was not coined until 1956.
Three developments in computing in the early part of the 21st century—mobile computing, client-server computing, and computer hacking—contributed to the emergence of three new fields in computer science: platform-based development, parallel and distributed computing, and security and information assurance. Platform-based development is the study of the special needs of mobile devices, their operating systems, and their applications. Parallel and distributed computing concerns the development of architectures and programming languages that support the development of algorithms whose components can run simultaneously and asynchronously (rather than sequentially), in order to make better use of time and space. Security and information assurance deals with the design of computing systems and software that protects the integrity and security of data, as well as the privacy of individuals who are characterized by that data.
Finally, a particular concern of computer science throughout its history is the unique societal impact that accompanies computer science research and technological advancements. With the emergence of the Internet in the 1980s, for example, software developers needed to address important issues related to information security, personal privacy, and system reliability. In addition, the question of whether computer software constitutes intellectual property and the related question “Who owns it?” gave rise to a whole new legal area of licensing and licensing standards that applied to software and related artifacts. These concerns and others form the basis of social and professional issues of computer science, and they appear in almost all the other fields identified above.
So, to summarize, the discipline of computer science has evolved into the following 15 distinct fields:
* Algorithms and complexity
* Architecture and organization
* Computational science
* Graphics and visual computing
* Human-computer interaction
* Information management
* Intelligent systems
* Networking and communication
* Operating systems
* Parallel and distributed computing
* Platform-based development
* Programming languages
* Security and information assurance
* Software engineering
* Social and professional issues
Computer science continues to have strong mathematical and engineering roots. Computer science bachelor’s, master’s, and doctoral degree programs are routinely offered by postsecondary academic institutions, and these programs require students to complete appropriate mathematics and engineering courses, depending on their area of focus. For example, all undergraduate computer science majors must study discrete mathematics (logic, combinatorics, and elementary graph theory). Many programs also require students to complete courses in calculus, statistics, numerical analysis, physics, and principles of engineering early in their studies.
Algorithms and complexity
An algorithm is a specific procedure for solving a well-defined computational problem. The development and analysis of algorithms is fundamental to all aspects of computer science: artificial intelligence, databases, graphics, networking, operating systems, security, and so on. Algorithm development is more than just programming. It requires an understanding of the alternatives available for solving a computational problem, including the hardware, networking, programming language, and performance constraints that accompany any particular solution. It also requires understanding what it means for an algorithm to be “correct” in the sense that it fully and efficiently solves the problem at hand.
An accompanying notion is the design of a particular data structure that enables an algorithm to run efficiently. The importance of data structures stems from the fact that the main memory of a computer (where the data is stored) is linear, consisting of a sequence of memory cells that are serially numbered 0, 1, 2,…. Thus, the simplest data structure is a linear array, in which adjacent elements are numbered with consecutive integer “indexes” and an element’s value is accessed by its unique index. An array can be used, for example, to store a list of names, and efficient methods are needed to efficiently search for and retrieve a particular name from the array. For example, sorting the list into alphabetical order permits a so-called binary search technique to be used, in which the remainder of the list to be searched at each step is cut in half. This search technique is similar to searching a telephone book for a particular name. Knowing that the book is in alphabetical order allows one to turn quickly to a page that is close to the page containing the desired name. Many algorithms have been developed for sorting and searching lists of data efficiently.
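As a rough sketch of the binary search technique described above (assuming, as in the example, that the list of names is already in alphabetical order), the following Python fragment halves the remaining search range at each step; the names used are made up for illustration.

    def binary_search(sorted_names, target):
        # Return the index of target in sorted_names, or -1 if it is absent.
        low, high = 0, len(sorted_names) - 1
        while low <= high:
            mid = (low + high) // 2          # look at the middle of the remaining range
            if sorted_names[mid] == target:
                return mid
            elif sorted_names[mid] < target:
                low = mid + 1                # discard the lower half
            else:
                high = mid - 1               # discard the upper half
        return -1

    names = ["Ada", "Alan", "Edsger", "Grace", "John"]
    print(binary_search(names, "Grace"))     # prints 3

Because the range is halved at every step, a sorted list of a million names needs only about twenty comparisons.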
Although data items are stored consecutively in memory, they may be linked together by pointers (essentially, memory addresses stored with an item to indicate where the next item or items in the structure are found) so that the data can be organized in ways similar to those in which they will be accessed. The simplest such structure is called the linked list, in which noncontiguously stored items may be accessed in a pre-specified order by following the pointers from one item in the list to the next. The list may be circular, with the last item pointing to the first, or each element may have pointers in both directions to form a doubly linked list. Algorithms have been developed for efficiently manipulating such lists by searching for, inserting, and removing items.
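A minimal singly linked list along the lines just described might look like the following Python sketch; the class and method names are illustrative, not drawn from the text.

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None              # pointer to the next item, or None at the end

    class LinkedList:
        def __init__(self):
            self.head = None

        def insert_front(self, value):
            node = Node(value)
            node.next = self.head         # the new node points at the old first item
            self.head = node

        def find(self, value):
            current = self.head
            while current is not None:    # follow the pointers from item to item
                if current.value == value:
                    return current
                current = current.next
            return None

    lst = LinkedList()
    for v in (3, 2, 1):
        lst.insert_front(v)
    print(lst.find(2) is not None)        # prints True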
Pointers also provide the ability to implement more complex data structures. A graph, for example, is a set of nodes (items) and links (known as edges) that connect pairs of items. Such a graph might represent a set of cities and the highways joining them, the layout of circuit elements and connecting wires on a memory chip, or the configuration of persons interacting via a social network. Typical graph algorithms include graph traversal strategies, such as how to follow the links from node to node (perhaps searching for a node with a particular property) in a way that each node is visited only once. A related problem is the determination of the shortest path between two given nodes on an arbitrary graph. (See graph theory.) A problem of practical interest in network algorithms, for instance, is to determine how many “broken” links can be tolerated before communications begin to fail. Similarly, in very-large-scale integration (VLSI) chip design it is important to know whether the graph representing a circuit is planar, that is, whether it can be drawn in two dimensions without any links crossing (wires touching).
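One common way to represent such a graph is an adjacency list, and one standard traversal is breadth-first search, which visits each node at most once and, as a by-product, yields the shortest path (measured in edges) from a start node. The sketch below is illustrative; the node names are invented.

    from collections import deque

    # Each node maps to the list of nodes it is linked to (an undirected graph).
    graph = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def shortest_hops(graph, start, goal):
        # Breadth-first search: each node is placed on the queue at most once.
        distance = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return distance[node]
            for neighbour in graph[node]:
                if neighbour not in distance:
                    distance[neighbour] = distance[node] + 1
                    queue.append(neighbour)
        return None                       # the goal is not reachable

    print(shortest_hops(graph, "A", "E")) # prints 3 (A -> B -> D -> E)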
The (computational) complexity of an algorithm is a measure of the amount of computing resources (time and space) that a particular algorithm consumes when it runs. Computer scientists use mathematical measures of complexity that allow them to predict, before writing the code, how fast an algorithm will run and how much memory it will require. Such predictions are important guides for programmers implementing and selecting algorithms for real-world applications.
Computational complexity is a continuum, in that some algorithms require linear time (that is, the time required increases directly with the number of items or nodes in the list, graph, or network being processed), whereas others require quadratic or even exponential time to complete (that is, the time required increases with the number of items squared or with the exponential of that number). At the far end of this continuum lie the murky seas of intractable problems—those whose solutions cannot be efficiently implemented. For these problems, computer scientists seek to find heuristic algorithms that can almost solve the problem and run in a reasonable amount of time.
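A crude way to see this continuum is simply to count basic operations as the input size grows; the figures printed below are illustrative counts, not measured running times.

    # Operation counts for linear, quadratic, and exponential algorithms.
    for n in (10, 20, 40):
        print(f"n={n:>2}: linear={n}, quadratic={n ** 2}, exponential={2 ** n}")

    # Doubling n doubles the linear count, quadruples the quadratic count,
    # and squares the exponential count, which is why exponential-time
    # problems quickly become intractable.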
Further away still are those algorithmic problems that can be stated but are not solvable; that is, one can prove that no program can be written to solve the problem. A classic example of an unsolvable algorithmic problem is the halting problem, which states that no program can be written that can predict whether or not any other program halts after a finite number of steps. The unsolvability of the halting problem has immediate practical bearing on software development. For instance, it would be frivolous to try to develop a software tool that predicts whether another program being developed has an infinite loop in it (although having such a tool would be immensely beneficial).
Architecture and organization
Computer architecture deals with the design of computers, data storage devices, and networking components that store and run programs, transmit data, and drive interactions between computers, across networks, and with users. Computer architects use parallelism and various strategies for memory organization to design computing systems with very high performance. Computer architecture requires strong communication between computer scientists and computer engineers, since they both focus fundamentally on hardware design.
At its most fundamental level, a computer consists of a control unit, an arithmetic logic unit (ALU), a memory unit, and input/output (I/O) controllers. The ALU performs simple addition, subtraction, multiplication, division, and logic operations, such as OR and AND. The memory stores the program’s instructions and data. The control unit fetches data and instructions from memory and uses operations of the ALU to carry out those instructions using that data. (The control unit and ALU together are referred to as the central processing unit [CPU].) When an input or output instruction is encountered, the control unit transfers the data between the memory and the designated I/O controller. The operational speed of the CPU primarily determines the speed of the computer as a whole. All of these components—the control unit, the ALU, the memory, and the I/O controllers—are realized with transistor circuits.
Computers also have another level of memory called a cache, a small, extremely fast (compared with the main memory, or random access memory [RAM]) unit that can be used to store information that is urgently or frequently needed. Current research includes cache design and algorithms that can predict what data is likely to be needed next and preload it into the cache for improved performance.
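One widely used eviction policy, offered here only as an illustration of how a cache can decide what to keep, is "least recently used" (LRU): when the small, fast store is full, the item that has gone unused the longest is discarded. A minimal Python sketch:

    from collections import OrderedDict

    class LRUCache:
        # Keep at most `capacity` items, evicting the least recently used one.
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None                      # a miss: the slower main memory must be consulted
            self.items.move_to_end(key)          # mark the item as most recently used
            return self.items[key]

        def put(self, key, value):
            self.items[key] = value
            self.items.move_to_end(key)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)   # evict the least recently used item

    cache = LRUCache(2)
    cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
    print(list(cache.items))                     # prints ['a', 'c']; 'b' was evicted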
I/O controllers connect the computer to specific input devices (such as keyboards and touch screen displays) for feeding information to the memory, and output devices (such as printers and displays) for transmitting information from the memory to users. Additional I/O controllers connect the computer to a network via ports that provide the conduit through which data flows when the computer is connected to the Internet.
Linked to the I/O controllers are secondary storage devices, such as a disk drive, that are slower and have a larger capacity than main or cache memory. Disk drives are used for maintaining permanent data. They can be either permanently or temporarily attached to the computer in the form of a compact disc (CD), a digital video disc (DVD), or a memory stick (also called a flash drive).
The operation of a computer, once a program and some data have been loaded into RAM, takes place as follows. The first instruction is transferred from RAM into the control unit and interpreted by the hardware circuitry. For instance, suppose that the instruction is a string of bits that is the code for LOAD 10. This instruction loads the contents of memory location 10 into the ALU. The next instruction, say ADD 15, is fetched. The control unit then loads the contents of memory location 15 into the ALU and adds it to the number already there. Finally, the instruction STORE 20 would store that sum into location 20. At this level, the operation of a computer is not much different from that of a pocket calculator.
In general, programs are not just lengthy sequences of LOAD, STORE, and arithmetic operations. Most importantly, computer languages include conditional instructions—essentially, rules that say, “If memory location n satisfies condition a, do instruction number x next, otherwise do instruction y.” This allows the course of a program to be determined by the results of previous operations—a critically important ability.
Finally, programs typically contain sequences of instructions that are repeated a number of times until a predetermined condition becomes true. Such a sequence is called a loop. For example, a loop would be needed to compute the sum of the first n integers, where n is a value stored in a separate memory location. Computer architectures that can execute sequences of instructions, conditional instructions, and loops are called “Turing complete,” which means that they can carry out the execution of any algorithm that can be defined. Turing completeness is a fundamental and essential characteristic of any computer organization.
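To make the LOAD/ADD/STORE walk-through, the conditional instruction, and the loop concrete, here is a toy interpreter for a hypothetical instruction set (it does not correspond to any real machine) running a small program that sums the first n integers:

    def run(program, memory):
        # A toy stored-program machine: one accumulator, a memory array, and a program counter.
        acc, pc = 0, 0
        while True:
            op, *args = program[pc]
            pc += 1
            if op == "LOAD":
                acc = memory[args[0]]
            elif op == "ADD":
                acc += memory[args[0]]
            elif op == "STORE":
                memory[args[0]] = acc
            elif op == "JUMP_IF_POSITIVE":       # the conditional instruction
                if memory[args[0]] > 0:
                    pc = args[1]
            elif op == "HALT":
                return memory

    # memory[0] holds n, memory[1] the running sum, memory[2] the constant -1.
    memory = [5, 0, -1]
    program = [
        ("LOAD", 1), ("ADD", 0), ("STORE", 1),   # sum = sum + n
        ("LOAD", 0), ("ADD", 2), ("STORE", 0),   # n = n - 1
        ("JUMP_IF_POSITIVE", 0, 0),              # loop back while n > 0
        ("HALT",),
    ]
    print(run(program, memory)[1])               # prints 15, the sum 1 + 2 + 3 + 4 + 5

The conditional jump is what makes the loop possible, and it is loops plus conditionals that give such a machine its Turing-complete character.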
Logic design is the area of computer science that deals with the design of electronic circuits using the fundamental principles and properties of logic (see Boolean algebra) to carry out the operations of the control unit, the ALU, the I/O controllers, and other hardware. Each logical function (AND, OR, and NOT) is realized by a particular type of device called a gate. For example, the addition circuit of the ALU has inputs corresponding to all the bits of the two numbers to be added and outputs corresponding to the bits of the sum. The arrangement of wires and gates that link inputs to outputs is determined by the mathematical definition of addition. The design of the control unit provides the circuits that interpret instructions. Due to the need for efficiency, logic design must also optimize the circuitry to function with maximum speed and a minimum number of gates and circuits.
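As an illustration of the gate-level idea (a sketch of the principle, not an actual circuit design), the one-bit "full adder" below is built entirely from AND, OR, and NOT functions, with XOR composed from them; chaining four such adders adds two four-bit numbers.

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    def XOR(a, b):
        # a XOR b == (a AND NOT b) OR (NOT a AND b)
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def full_adder(a, b, carry_in):
        # Add three input bits; return (sum_bit, carry_out).
        sum_bit = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return sum_bit, carry_out

    # Add the 4-bit numbers 0110 (6) and 0111 (7), least significant bit first.
    a_bits, b_bits = [0, 1, 1, 0], [1, 1, 1, 0]
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    print(result, carry)   # prints [1, 0, 1, 1] 0, i.e. 1101 = 13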
An important area related to architecture is the design of microprocessors, which are complete CPUs—control unit, ALU, and memory—on a single integrated circuit chip. Additional memory and I/O control circuitry are linked to this chip to form a complete computer. These thumbnail-sized devices contain millions of transistors that implement the processing and memory units of modern computers.
VLSI microprocessor design occurs in a number of stages, which include creating the initial functional or behavioral specification, encoding this specification into a hardware description language, and breaking down the design into modules and generating sizes and shapes for the eventual chip components. It also involves chip planning, which includes building a “floor plan” to indicate where on the chip each component should be placed and connected to other components. Computer scientists are also involved in creating the computer-aided design (CAD) tools that support engineers in the various stages of chip design and in developing the necessary theoretical results, such as how to efficiently design a floor plan with near-minimal area that satisfies the given constraints.
Advances in integrated circuit technology have been incredible. For example, in 1971 the first microprocessor chip (Intel Corporation’s 4004) had only 2,300 transistors, in 1993 Intel’s Pentium chip had more than 3 million transistors, and by 2000 the number of transistors on such a chip was about 50 million. The Power7 chip introduced in 2010 by IBM contained approximately 1 billion transistors. The phenomenon of the number of transistors in an integrated circuit doubling about every two years is widely known as Moore’s law.
Fault tolerance is the ability of a computer to continue operation when one or more of its components fails. To ensure fault tolerance, key components are often replicated so that the backup component can take over if needed. Such applications as aircraft control and manufacturing process control run on systems with backup processors ready to take over if the main processor fails, and the backup systems often run in parallel so the transition is smooth. If the systems are critical in that their failure would be potentially disastrous (as in aircraft control), incompatible outcomes collected from replicated processes running in parallel on separate machines are resolved by a voting mechanism. Computer scientists are involved in the analysis of such replicated systems, providing theoretical approaches to estimating the reliability achieved by a given configuration and processor parameters, such as average time between failures and average time required to repair the processor. Fault tolerance is also a desirable feature in distributed systems and networks. For example, an advantage of a distributed database is that data replicated on different network hosts can provide a natural backup mechanism when one host fails.
Computational science
Computational science applies computer simulation, scientific visualization, mathematical modeling, algorithms, data structures, networking, database design, symbolic computation, and high-performance computing to help advance the goals of various disciplines. These disciplines include biology, chemistry, fluid dynamics, archaeology, finance, sociology, and forensics. Computational science has evolved rapidly, especially because of the dramatic growth in the volume of data transmitted from scientific instruments. This phenomenon has been called the “big data” problem.
The mathematical methods needed for computational science require the transformation of equations and functions from the continuous to the discrete. For example, the computer integration of a function over an interval is accomplished not by applying integral calculus but rather by approximating the area under the function graph as a sum of the areas obtained from evaluating the function at discrete points. Similarly, the solution of a differential equation is obtained as a sequence of discrete points determined by approximating the true solution curve by a sequence of tangential line segments. When discretized in this way, many problems can be recast as an equation involving a matrix (a rectangular array of numbers) solvable using linear algebra. Numerical analysis is the study of such computational methods. Several factors must be considered when applying numerical methods: (1) the conditions under which the method yields a solution, (2) the accuracy of the solution, (3) whether the solution process is stable (i.e., does not exhibit error growth), and (4) the computational complexity (in the sense described above) of obtaining a solution of the desired accuracy.
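A minimal sketch of the discretization idea: the area under a curve is approximated by summing thin rectangles, and a differential equation dy/dx = f(x, y) is followed along short tangent segments (Euler's method). The functions and step counts below are chosen purely for illustration.

    def integrate(f, a, b, steps=1000):
        # Approximate the area under f between a and b with thin rectangles.
        width = (b - a) / steps
        return sum(f(a + (i + 0.5) * width) * width for i in range(steps))

    def euler(f, x0, y0, x_end, steps=1000):
        # Approximate the solution of dy/dx = f(x, y) by tangent line segments.
        h = (x_end - x0) / steps
        x, y = x0, y0
        for _ in range(steps):
            y += h * f(x, y)       # follow the tangent for one small step
            x += h
        return y

    print(integrate(lambda x: x ** 2, 0.0, 1.0))   # close to 1/3
    print(euler(lambda x, y: y, 0.0, 1.0, 1.0))    # close to e, about 2.718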
The requirements of big-data scientific problems, including the solution of ever larger systems of equations, engage the use of large and powerful arrays of processors (called multiprocessors or supercomputers) that allow many calculations to proceed in parallel by assigning them to separate processing elements. These activities have sparked much interest in parallel computer architecture and algorithms that can be carried out efficiently on such machines.
Graphics and visual computing
Graphics and visual computing is the field that deals with the display and control of images on a computer screen. This field encompasses the efficient implementation of four interrelated computational tasks: rendering, modeling, animation, and visualization. Graphics techniques incorporate principles of linear algebra, numerical integration, computational geometry, special-purpose hardware, file formats, and graphical user interfaces (GUIs) to accomplish these complex tasks.
Applications of graphics include CAD, fine arts, medical imaging, scientific data visualization, and video games. CAD systems allow the computer to be used for designing objects ranging from automobile parts to bridges to computer chips by providing an interactive drawing tool and an engineering interface to simulation and analysis tools. Fine arts applications allow artists to use the computer screen as a medium to create images, cinematographic special effects, animated cartoons, and television commercials. Medical imaging applications involve the visualization of data obtained from technologies such as X-rays and magnetic resonance imaging (MRIs) to assist doctors in diagnosing medical conditions. Scientific visualization uses massive amounts of data to define simulations of scientific phenomena, such as ocean modeling, to produce pictures that provide more insight into the phenomena than would tables of numbers. Graphics also provide realistic visualizations for video gaming, flight simulation, and other representations of reality or fantasy. The term virtual reality has been coined to refer to any interaction with a computer-simulated virtual world.
A challenge for computer graphics is the development of efficient algorithms that manipulate the myriad of lines, triangles, and polygons that make up a computer image. In order for realistic on-screen images to be presented, each object must be rendered as a set of planar units. Edges must be smoothed and textured so that their underlying construction from polygons is not obvious to the naked eye. In many applications, still pictures are inadequate, and rapid display of real-time images is required. Both extremely efficient algorithms and state-of-the-art hardware are needed to accomplish real-time animation.
Human-computer interaction
Human-computer interaction (HCI) is concerned with designing effective interaction between users and computers and the construction of interfaces that support this interaction. HCI occurs at an interface that includes both software and hardware. User interface design impacts the life cycle of software, so it should occur early in the design process. Because user interfaces must accommodate a variety of user styles and capabilities, HCI research draws on several disciplines including psychology, sociology, anthropology, and engineering. In the 1960s, user interfaces consisted of computer consoles that allowed an operator directly to type commands that could be executed immediately or at some future time. With the advent of more user-friendly personal computers in the 1980s, user interfaces became more sophisticated, so that the user could “point and click” to send a command to the operating system.
Thus, the field of HCI emerged to model, develop, and measure the effectiveness of various types of interfaces between a computer application and the person accessing its services. GUIs enable users to communicate with the computer by such simple means as pointing to an icon with a mouse or touching it with a stylus or forefinger. This technology also supports windowing environments on a computer screen, which allow users to work with different applications simultaneously, one in each window.
Information management
Information management (IM) is primarily concerned with the capture, digitization, representation, organization, transformation, and presentation of information. Because a computer’s main memory provides only temporary storage, computers are equipped with auxiliary disk storage devices that permanently store data. These devices are characterized by having much higher capacity than main memory but slower read/write (access) speed. Data stored on a disk must be read into main memory before it can be processed. A major goal of IM systems, therefore, is to develop efficient algorithms to store and retrieve specific data for processing.
IM systems comprise databases and algorithms for the efficient storage, retrieval, updating, and deleting of specific items in the database. The underlying structure of a database is a set of files residing permanently on a disk storage device. Each file can be further broken down into a series of records, each of which contains individual data items, or fields. Each field gives the value of some property (or attribute) of the entity represented by a record. For example, a personnel file may contain a series of records, one for each individual in the organization, and each record would contain fields that contain that person’s name, address, phone number, e-mail address, and so forth.
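The hypothetical sketch below models this structure in Python: a file is a series of records, each record is a set of named fields, and reading a field means selecting one attribute of one record. The field names and sample values are invented for illustration.

    # Sketch of the file / record / field hierarchy described above.
    from dataclasses import dataclass

    @dataclass
    class PersonnelRecord:          # one record in the personnel file
        name: str                   # each attribute below is a field
        address: str
        phone: str
        email: str

    personnel_file = [              # the file: a series of records
        PersonnelRecord("Ada Lovelace", "12 Example St", "555-0100", "ada@example.com"),
        PersonnelRecord("Alan Turing", "34 Sample Rd", "555-0101", "alan@example.com"),
    ]
    print(personnel_file[0].email)  # read one field of one record
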
Many file systems are sequential, meaning that successive records are processed in the order in which they are stored, starting from the beginning and proceeding to the end. This file structure was particularly popular in the early days of computing, when files were stored on reels of magnetic tape and these reels could be processed only in a sequential manner. Sequential files are generally stored in some sorted order (e.g., alphabetic) for printing of reports (e.g., a telephone directory) and for efficient processing of batches of transactions. Banking transactions (deposits and withdrawals), for instance, might be sorted in the same order as the accounts file, so that as each transaction is read the system need only scan ahead to find the accounts record to which it applies.
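A minimal sketch of this batch style, with invented account numbers: because both the master file and the day's transactions are sorted on the same key, one forward scan applies every transaction without ever backing up.

    # Sequential batch processing: master file and transaction batch are both sorted
    # by account number, so a single forward pass suffices (illustrative data).
    accounts = [(101, 500.0), (102, 75.0), (105, 1200.0)]          # sorted master file
    transactions = [(101, -50.0), (105, 200.0), (105, -25.0)]      # sorted batch

    i = 0
    updated = []
    for acct_no, balance in accounts:
        # Apply every transaction for this account, scanning ahead only.
        while i < len(transactions) and transactions[i][0] == acct_no:
            balance += transactions[i][1]
            i += 1
        updated.append((acct_no, balance))

    print(updated)   # [(101, 450.0), (102, 75.0), (105, 1375.0)]
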
With modern storage systems, it is possible to access any data record in a random fashion. To facilitate efficient random access, the data records in a file are stored with indexes called keys. An index of a file is much like an index of a book; it contains a key for each record in the file along with the location where the record is stored. Since indexes might be long, they are usually structured in some hierarchical fashion so that they can be navigated efficiently. The top level of an index, for example, might contain locations of (point to) indexes to items beginning with the letters A, B, etc. The A index itself may contain not locations of data items but pointers to indexes of items beginning with the letters Ab, Ac, and so on. Locating the index for the desired record by traversing a treelike structure is quite efficient.
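The sketch below imitates a two-level index with ordinary Python dictionaries; the keys and record locations are invented. Looking up a record means one hop through the top-level index and one hop through the appropriate sub-index.

    # Sketch of hierarchical index lookup (locations are hypothetical disk addresses).
    top_level = {
        "A": {"Abbott": 4021, "Acker": 4022},   # key -> record location
        "B": {"Baker": 5107, "Brown": 5110},
    }

    def locate(key):
        sub_index = top_level[key[0]]           # first hop: the A, B, ... index
        return sub_index[key]                   # second hop: the record's location

    print(locate("Baker"))                      # 5107
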
Many applications require access to many independent files containing related and even overlapping data. Their information management activities frequently require data from several files to be linked, and hence the need for a database model emerges. Historically, three different types of database models have been developed to support the linkage of records of different types: (1) the hierarchical model, in which record types are linked in a treelike structure (e.g., employee records might be grouped under records describing the departments in which employees work), (2) the network model, in which arbitrary linkages of record types may be created (e.g., employee records might be linked on one hand to employees’ departments and on the other hand to their supervisors—that is, other employees), and (3) the relational model, in which all data are represented in simple tabular form.
In the relational model, each individual entry is described by the set of its attribute values (called a relation), stored in one row of the table. This linkage of n attribute values to provide a meaningful description of a real-world entity or a relationship among such entities forms a mathematical n-tuple. The relational model also supports queries (requests for information) that involve several tables by providing automatic linkage across tables by means of a “join” operation that combines records with identical values of common attributes. Payroll data, for example, can be stored in one table and personnel benefits data in another; complete information on an employee could be obtained by joining the two tables using the employee’s unique identification number as a common attribute.
To support database processing, a software artifact known as a database management system (DBMS) is required to manage the data and provide the user with commands to retrieve information from the database. For example, a widely used DBMS that supports the relational model is MySQL.
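As an illustration of tables and the join operation, the sketch below uses the sqlite3 module bundled with Python as a stand-in relational DBMS (not MySQL); the table and column names are invented. The query combines payroll and benefits rows that share the employee identification number.

    # Sketch of the relational model and a join, using sqlite3 as an illustrative DBMS.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE payroll  (emp_id INTEGER, name TEXT, salary REAL);
        CREATE TABLE benefits (emp_id INTEGER, plan TEXT);
        INSERT INTO payroll  VALUES (1, 'Ada Lovelace', 90000), (2, 'Alan Turing', 85000);
        INSERT INTO benefits VALUES (1, 'PPO'), (2, 'HMO');
    """)

    # The join combines rows that share the common attribute emp_id.
    for row in conn.execute("""
            SELECT payroll.name, payroll.salary, benefits.plan
            FROM payroll JOIN benefits ON payroll.emp_id = benefits.emp_id"""):
        print(row)
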
Another development in database technology is to incorporate the object concept. In object-oriented databases, all data are objects. Objects may be linked together by an “is-part-of” relationship to represent larger, composite objects. Data describing a truck, for instance, may be stored as a composite of a particular engine, chassis, drive train, and so forth. Classes of objects may form a hierarchy in which individual objects may inherit properties from objects farther up in the hierarchy. For example, objects of the class “motorized vehicle” all have an engine; members of the subclasses “truck” or “airplane” will then also have an engine.
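A rough sketch of these two relationships in ordinary Python classes (an object-oriented database would add persistence, which is omitted here); the class and attribute names are invented.

    # "Is-part-of" composition and inheritance of properties down a class hierarchy.
    class Engine:
        def __init__(self, horsepower):
            self.horsepower = horsepower

    class MotorizedVehicle:            # every motorized vehicle has an engine
        def __init__(self, engine):
            self.engine = engine       # the engine object is part of the vehicle object

    class Truck(MotorizedVehicle):     # inherits the engine property
        def __init__(self, engine, payload_kg):
            super().__init__(engine)
            self.payload_kg = payload_kg

    t = Truck(Engine(300), payload_kg=5000)
    print(t.engine.horsepower)         # 300, a property inherited via MotorizedVehicle
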
NoSQL, or non-relational, databases have also emerged. These databases differ from classic relational databases in that they do not require fixed tables. Many of them are document-oriented databases, in which voice, music, images, and video clips are stored along with traditional textual information. An important subset of NoSQL databases is the XML databases, which are widely used in the development of Android smartphone and tablet applications.
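A hypothetical document-oriented record might look like the following: a single nested document that mixes ordinary text fields with references to media, rather than a fixed row in a table. The field names and file names are invented.

    # Sketch of a document-oriented record (documents are typically stored as JSON, BSON, or XML).
    import json

    document = {
        "title": "Field recording, 14 May",
        "notes": "Interview transcript plus supporting media.",
        "media": [
            {"kind": "audio", "file": "interview.ogg", "seconds": 310},
            {"kind": "image", "file": "site-photo.png"},
        ],
        "tags": ["interview", "archive"],
    }
    print(json.dumps(document, indent=2))
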
Data integrity refers to designing a DBMS that ensures the correctness and stability of its data across all applications that access the system. When a database is designed, integrity checking is enabled by specifying the data type of each column in the table. For example, if an identification number is specified to be nine digits, the DBMS will reject an update attempting to assign a value with more or fewer digits or one including an alphabetic character. Another type of integrity, known as referential integrity, requires that each entity referenced by some other entity must itself exist in the database. For example, if an airline reservation is requested for a particular flight number, then the flight referenced by that number must actually exist.
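Both kinds of integrity checking can be sketched with sqlite3 (used here only as an illustrative DBMS; the table names are invented): a CHECK constraint rejects an identification number of the wrong length, and a FOREIGN KEY constraint rejects a reservation that references a flight that does not exist.

    # Sketch of column-type/format integrity and referential integrity.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")          # SQLite enforces FKs only when asked
    conn.executescript("""
        CREATE TABLE flights (flight_no TEXT PRIMARY KEY);
        CREATE TABLE reservations (
            passenger_id TEXT CHECK (length(passenger_id) = 9),
            flight_no TEXT REFERENCES flights(flight_no)
        );
        INSERT INTO flights VALUES ('UA100');
    """)

    conn.execute("INSERT INTO reservations VALUES ('123456789', 'UA100')")      # accepted
    try:
        conn.execute("INSERT INTO reservations VALUES ('12345', 'UA100')")      # wrong length
    except sqlite3.IntegrityError as err:
        print("rejected:", err)
    try:
        conn.execute("INSERT INTO reservations VALUES ('987654321', 'ZZ999')")  # no such flight
    except sqlite3.IntegrityError as err:
        print("rejected:", err)
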
Access to a database by multiple simultaneous users requires that the DBMS include a concurrency control mechanism (called locking) to maintain integrity whenever two different users attempt to access the same data at the same time. For example, two travel agents may try to book the last seat on a plane at more or less the same time. Without concurrency control, both may think they have succeeded, though only one booking is actually entered into the database.
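The "last seat" race can be sketched with two Python threads and a lock: without the lock, both agents could pass the seat-availability test before either decrements it. This is purely illustrative of locking, not of how a production DBMS implements concurrency control.

    # Sketch of concurrency control: the lock makes test-and-book indivisible.
    import threading

    seats_left = 1
    lock = threading.Lock()
    results = []

    def book(agent):
        global seats_left
        with lock:                       # only one agent at a time may run this block
            if seats_left > 0:
                seats_left -= 1
                results.append(f"{agent}: booked")
            else:
                results.append(f"{agent}: sold out")

    threads = [threading.Thread(target=book, args=(name,)) for name in ("agent A", "agent B")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)                       # exactly one agent gets the seat
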
A key concept in studying concurrency control and the maintenance of data integrity is the transaction, defined as an indivisible operation that transforms the database from one state into another. To illustrate, consider an electronic transfer of funds of $5 from bank account A to account B. The operation that deducts $5 from account A leaves the database without integrity since the total over all accounts is $5 short. Similarly, the operation that adds $5 to account B in itself makes the total $5 too much. Combining these two operations into a single transaction, however, maintains data integrity. The key here is to ensure that only complete transactions are applied to the data and that multiple concurrent transactions are executed under locking so that their combined effect is the same as if they had been run one after another (serially). A transaction-oriented control mechanism for database access becomes difficult in the case of a long transaction, for example, when several engineers are working, perhaps over the course of several days, on a product design that may not exhibit data integrity until the project is complete.
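A minimal sketch of the transfer as a single transaction, using sqlite3 and invented balances: both updates are committed together, and any failure rolls both back, so the database never exposes a state in which the $5 has left account A but has not yet arrived in account B.

    # Sketch of an indivisible transaction: commit both updates or neither.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100.0), ("B", 40.0)])
    conn.commit()

    try:
        conn.execute("UPDATE accounts SET balance = balance - 5 WHERE name = 'A'")
        conn.execute("UPDATE accounts SET balance = balance + 5 WHERE name = 'B'")
        conn.commit()        # both updates become visible together
    except sqlite3.Error:
        conn.rollback()      # neither update survives, so the totals stay consistent

    print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
    # [('A', 95.0), ('B', 45.0)]
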
As mentioned previously, a database may be distributed in that its data can be spread among different host computers on a network. If the distributed data contains duplicates, the concurrency control problem is more complex. Distributed databases must have a distributed DBMS to provide overall control of queries and updates in a manner that does not require that the user know the location of the data. A closely related concept is interoperability, meaning the ability of the user of one member of a group of disparate systems (all having the same functionality) to work with any of the systems of the group with equal ease and via the same interface.
Additional Information:
What is Computer Science?
Computing is part of everything we do. Computing drives innovation in engineering, business, entertainment, education, and the sciences—and it provides solutions to complex, challenging problems of all kinds.
Computer science is the study of computers and computational systems. It is a broad field that includes everything from the algorithms that make up software to how software interacts with hardware to how well software is developed and designed. Computer scientists use mathematical algorithms, coding procedures, and expert programming skills to study computer processes and develop new software and systems.
How is Computer Science Different from IT?
Computer science focuses on the development and testing of software and software systems. It involves working with mathematical models, data analysis and security, algorithms, and computational theory. Computer scientists define the computational principles that are the basis of all software.
Information technology (IT) focuses on the development, implementation, support, and management of computers and information systems. IT involves working both with hardware (CPUs, RAM, hard disks) and software (operating systems, web browsers, mobile applications). IT professionals make sure that computers, networks, and systems work well for all users.
What Careers does Computer Science Offer?
Computing jobs are among the highest paid today, and computer science professionals report high job satisfaction. Most computer scientists hold at least a bachelor's degree in computer science or a related field.
Principal areas of study and careers within computer science include artificial intelligence, computer systems and networks, security, database systems, human-computer interaction, vision and graphics, numerical analysis, programming languages, software engineering, bioinformatics, and theory of computing.
Some common job titles for computer scientists include:
* Computer Programmer
* Information Technology Specialist
* Data Scientist
* Web Optimization Specialist
* Database Administrator
* Systems Analyst
* Web Developer
* Quality Assurance Engineer
* Business Intelligence Analyst
* Systems Engineer
* Product Manager
* Software Engineer
* Hardware Engineer
* Front-End Developer
* Back-End Developer
* Full-Stack Developer
* Mobile Developer
* Network Administrator
* Chief Information Officer
* Security Analyst
* Video Game Developer
* Health Information Technician