Sustainable business

A sustainable business, or a green business, is an enterprise which has (or aims to have) a minimal negative impact or potentially a positive effect on the global or local environment, community, society, or economy—a business that attempts to meet the triple bottom line. Such businesses cluster under different groupings, and the whole is sometimes referred to as "green capitalism". Often, sustainable businesses have progressive environmental and human rights policies. In general, a business is described as green if it meets the following four criteria:
It incorporates principles of sustainability into each of its business decisions.
It supplies environmentally friendly products or services that replace demand for nongreen products and/or services.
It is greener than traditional competition.
It has made an enduring commitment to environmental principles in its business operations.
Terminology
This article is concerned with "green" or sustainable businesses, i.e. businesses which (aim to) have a minimal negative impact or potentially a positive effect on the global or local environment, and with business practices which can be adopted to support these objectives. Thus a sustainable business is one which participates in environmentally friendly or green activities to ensure that all of its processes, products, and manufacturing activities adequately address current environmental concerns while maintaining a profit. In other words, it is a business that "meets the needs of the present [world] without compromising the ability of future generations to meet their own needs". Sustainability also involves assessing how to design products that take advantage of the current environmental situation and how well a company's products perform with renewable resources.
The Brundtland Report emphasized that sustainability is a three-legged stool of people, planet, and profit. Sustainable businesses within the supply chain try to balance all three through the triple-bottom-line concept—using sustainable development and sustainable distribution to affect the environment, business growth, and society.
Succeeding in such an approach, where balancing stakeholder interests and finding collaborative solutions are key, requires a strategic approach. One philosophy that includes many different tools and methods is the concept of Sustainable Enterprise Excellence. Another is the adoption of the concept of responsible growth.
Sustainability is often confused with corporate social responsibility (CSR), though the two are not the same. Bansal and DesJardine (2014) state that the notion of 'time' distinguishes sustainability from CSR and other similar concepts. Whereas ethics, morality, and norms permeate CSR, sustainability only obliges businesses to make intertemporal trade-offs to safeguard intergenerational equity.
Short-termism is seen as the bane of sustainability. While CSR and sustainability are not the same, they are related to each other. For example, setting salaries, implementing new technologies, and retiring old plants all have an impact on the firm's stakeholders and the natural environment.
Green business has been seen as a possible mediator of economic-environmental relations, and if proliferated, could diversify the economy, even if it has a negligible effect on lowering atmospheric CO2 levels. The definition of "green jobs" is ambiguous, but it is generally agreed that these jobs, the result of green business, should be linked to "clean energy" and contribute to the reduction of greenhouse gases. These corporations can be seen not only as generators of "green energy", but also as producers of new materials that are the product of the technologies these firms developed and deployed.
Environmental sphere
A major initiative of sustainable businesses is to eliminate or decrease the environmental harm caused by the production and consumption of their goods (Becker, T. (2008). "The Business behind Green, Eliminating fear, uncertainty, and doubt." APICS magazine, vol. 18, no. 2). The impact of such human activities in terms of the amount of greenhouse gases produced can be measured in units of carbon dioxide and is referred to as the carbon footprint. The carbon footprint concept is derived from the ecological footprint analysis, which examines the ecological capacity required to support the consumption of products.
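As a minimal illustration of the carbon footprint idea, the sketch below multiplies activity data by emission factors to estimate a footprint in carbon dioxide equivalents. The activities, factor values, and quantities are hypothetical placeholders; real assessments rely on published, region-specific emission factors.

```python
# Minimal, illustrative carbon-footprint calculation: each activity quantity is
# multiplied by an emission factor to give kilograms of CO2 equivalent (kgCO2e).
# The factors and quantities below are hypothetical placeholders, not official
# coefficients; real assessments use published, region-specific factors.
from __future__ import annotations

EMISSION_FACTORS = {              # kgCO2e per unit of activity (assumed values)
    "electricity_kwh": 0.4,       # per kWh of grid electricity
    "natural_gas_kwh": 0.2,       # per kWh of natural gas burned
    "road_freight_tonne_km": 0.1, # per tonne-kilometre of road freight
}

def carbon_footprint(activity_data: dict[str, float]) -> float:
    """Return the total footprint in kgCO2e for the given activity quantities."""
    return sum(quantity * EMISSION_FACTORS[activity]
               for activity, quantity in activity_data.items())

if __name__ == "__main__":
    yearly_activity = {
        "electricity_kwh": 120_000,
        "natural_gas_kwh": 45_000,
        "road_freight_tonne_km": 30_000,
    }
    total_kg = carbon_footprint(yearly_activity)
    print(f"Estimated footprint: {total_kg / 1000:.1f} tonnes CO2e per year")
```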
Businesses can adopt a wide range of green initiatives: Tao et al. refer to a variety of "green" business practices including green strategy, green design, green production and green operation. One of the most common examples of a "green" business practice is the act of "going paperless", or sending electronic correspondence instead of paper when possible. On a higher level, examples of sustainable business practices include: refurbishing used products (e.g., tuning up lightly used commercial fitness equipment for resale); revising production processes to eliminate waste (such as using a more accurate template to cut out designs); and choosing nontoxic raw materials and processes. For example, Canadian farmers have found that hemp is a sustainable alternative to rapeseed in their traditional crop rotation; hemp grown for fiber or seed requires no pesticides or herbicides. Another example is upcycling clothes or textiles, in which businesses can upcycle products to maintain or increase their quality.
Sustainable business leaders also take into account the life cycle costs for the items they produce. Input costs must be considered regarding regulations, energy use, storage, and disposal. Designing for the environment (DFE) is also an element of sustainable business. This process enables users to consider the potential environmental impacts of a product and the process used to make that product.
The many possibilities for adopting green practices have led to considerable pressure being put upon companies from consumers, employees, government regulators, and other stakeholders. Some companies have resorted to "greenwashing" instead of making meaningful changes, merely marketing their products in ways that suggest green practices. For example, various producers in the bamboo fiber industry have been taken to court for advertising their products as "greener" than they are. In their book “Corporate Sustainability in International Comparison”, Schaltegger et al. (2014) analyze the current state of corporate sustainability management and corporate social responsibility across eleven countries. Their research is based on an extensive survey focusing on the companies’ intention to pursue sustainability management (i.e. motivation; issues), the integration of sustainability in the organization (i.e. connecting sustainability to the core business; involving corporate functions; using drivers of business cases for sustainability) and the actual implementation of sustainability management measures (i.e. stakeholder management; sustainability management tools and standards; measurements). An effective way for businesses to contribute towards waste reduction is to remanufacture products so that the materials used can have a longer lifespan.
Sustainable Businesses
The Harvard Business School business historian Geoffrey Jones traces the historical origins of green business back to pioneering start-ups in organic food and wind and solar energy before World War I. Among large corporations, Ford Motor Company occupies an odd role in the story of sustainability. Ironically, founder Henry Ford was a pioneer in the sustainable business realm, experimenting with plant-based fuels during the days of the Model T. Ford Motor Company also shipped the Model A truck in crates that then became the vehicle floorboards at the factory destination. This was a form of upcycling, retaining high quality in a closed-loop industrial cycle. Furthermore, the original auto body was made of a stronger-than-steel hemp composite. Today, of course, Fords aren't made of hemp, nor do they run on the most sensible fuel. Currently, Ford's claim to eco-friendly fame is the use of seat fabric made from 100% post-industrial materials and renewable soy foam seat bases. Ford executives recently appointed the company's first senior vice president of sustainability, environment, and safety engineering. This position is responsible for establishing a long-range sustainability strategy and environmental policy, developing the products and processes necessary to satisfy customers and society as a whole while working toward energy independence. It remains to be seen whether Ford will return to its founder's vision of a petroleum-free automobile, a vehicle powered by the remains of plant matter.
The automobile manufacturer Subaru has also made efforts to tackle sustainability. In 2008, a Subaru assembly plant in Lafayette, Indiana became the first automotive assembly plant to achieve zero-landfill status after implementing sustainable policies. The company successfully managed to implement a plan that increased refuse recycling to 99.8%. In 2012, the corporation increased the reuse of Styrofoam by 9%. And from 2008 to 2012, environmental incidents and accidents were reduced from 18 to 4.
Smaller companies such as Nature's Path, an organic cereal and snack-making business, have also made sustainability gains in the 21st century. CEO Arran Stephens and his associates have ensured that the quickly growing company's products are produced without toxic farm chemicals. Furthermore, employees are encouraged to find ways to reduce consumption. Sustainability is an essential part of corporate discussions. Another example comes from Salt Spring Coffee, a company created in 1996 as a certified organic, fair trade coffee producer. In recent years it has become carbon neutral, lowering emissions by reducing long-range trucking and using biodiesel in delivery trucks, upgrading to energy-efficient equipment, and purchasing carbon offsets. The company claims to offer the first carbon-neutral coffee sold in Canada. Salt Spring Coffee was recognized by the David Suzuki Foundation in the 2010 report Doing Business in a New Climate. A third example comes from Korea, where rice husks are used as nontoxic packaging for stereo components and other electronics. The same material is later recycled to make bricks.
Some companies in the textile industry have been moving towards more sustainable business practices. Specifically, the clothing company Patagonia has focused on reducing consumption and waste. The company limits its environmental impact by using only recycled and organic materials, repairing damaged clothes, and complying with strong environmental protection standards across its entire supply chain.
Some companies in the mining and specifically gold mining industries are attempting to move towards more sustainable practices, especially given that the industry is one of the most environmentally destructive. Regarding gold mining, Northwestern University scientists have, in the laboratory, discovered an inexpensive and environmentally sustainable method that uses simple cornstarch—instead of cyanide—to isolate gold from raw materials in a selective manner. Such a method can reduce the amount of cyanide released into the environment during gold extraction from raw ore, with one of the Northwestern University scientists, Sir Fraser Stoddart, stating that "the elimination of cyanide from the gold industry is of the utmost importance environmentally". Additionally, the retail jewelry industry is now trying to be more sustainable, with companies using green energy providers and recycling more, as well as avoiding the use of newly mined, so-called 'virgin' gold by refinishing existing pieces and reselling them. Furthermore, the customer may opt for Fairtrade Gold, which gives a better deal to small-scale and artisanal miners, and is an element of sustainable business. However, not everyone thinks that mining can be sustainable, and many believe that much more must be done; mining in general requires greater regional and international legislation and regulation, a valid point given the huge impact mining has on the planet and the huge number of products and goods that are made wholly or partly from mined materials.
In the luxury sector, in 2012, the group Kering developed the "Environmental Profit & Loss account" (EP&L) accounting method to track the progress of its sustainability goals, a strategy aligned with the UN Sustainable Development Goals. In 2019, at the request of French President Emmanuel Macron, François-Henri Pinault, Chairman and CEO of the luxury group Kering, presented the Fashion Pact at the G7 summit, an initiative signed by 32 fashion firms committing to concrete measures to reduce their environmental impact. By 2020, 60 firms had joined the Fashion Pact.
Fair Trade is a form of sustainable business and among the highest forms of CSR (Corporate Social Responsibility). Organizations that participate in Fair Trade typically adhere to the ten principles of the World Fair Trade Organization (WFTO). Moreover, Fair Trade promotes entrepreneurial development among communities in developing countries and it encourages communities to be responsible and accountable for their economic development via market engagement. Fair Trade is a form of marketing with a strong and direct social benefit beyond the economic supply chain.
Social sphere
Organizations that give back to the community, whether through employees volunteering their time or through charitable donations, are often considered socially sustainable. Organizations can also encourage education in their communities by training their employees and offering internships to younger members of the community. Practices such as these increase the education level and quality of life in the community.
For a business to be truly sustainable, it must sustain not only the necessary environmental resources, but also social resources—including employees, customers (the community), and its reputation.
A term that directly relates to the social aspect of sustainability is environmental justice. Sustainability and social justice are directly connected to one another, and treating them as separate, unrelated issues can lead to more problems for the environment and potentially for businesses.
Consumers and marketing
When consumers purchase goods or services, they may care what a company stands for, including social and environmental considerations that may not have seemed important in business in the past. Some consumers may demand more sustainable goods and services if they feel companies do not care about their impact on the environment. Because ecological awareness can be treated as a matter of personal taste rather than a necessity, companies may use it as a marketing angle to increase revenue. When marketing a product or service, it is important that a business follows through on its environmental claims; false advertising may lead to distrust among consumers and can ultimately ruin a company.
Greenwashing
With sustainability becoming more prevalent in the last decade, businesses need to be aware of laws and norms surrounding environmental claims and the potential legal implications. In the United States, the Federal Trade Commission (FTC) Green Guides are one rulebook for businesses on how to avoid potentially deceiving consumers with false advertising. This often becomes a problem when companies make vague or false environmental claims about a product or service they are selling; when this occurs, it is called "greenwashing". Greenwashing also refers to exaggerating the beneficial effects a product may have on the environment. When companies do not follow such guides, they may be subject to legal ramifications and harmed reputations. Sustainable businesses often invest in experienced legal practitioners who understand and can provide counsel on the FTC Green Guides and other such frameworks.
Organizations
The European Community's Restriction of Hazardous Substances Directive restricts the use of certain hazardous materials in the production of various electronic and electrical products. The Waste Electrical and Electronic Equipment (WEEE) Directive provides for the collection, recycling, and recovery of electrical goods. The World Business Council for Sustainable Development and the World Resources Institute are two organizations working together to set a standard for reporting on corporate carbon footprints. Since October 2013, all quoted companies in the UK have been legally required to report their annual greenhouse gas emissions in their directors' report, under the Companies Act 2006 (Strategic and Directors' Reports) Regulations 2013.
Lester Brown’s Plan B 2.0 and Hunter Lovins’s Natural Capitalism provide information on sustainability initiatives.
Corporate sustainability strategies
Corporate sustainability strategies can aim to take advantage of sustainable revenue opportunities, while protecting the value of the business against increasing energy costs, the costs of meeting regulatory requirements, changes in the way customers perceive brands and products, and the volatile price of resources.
Not all eco-strategies can be incorporated into a company's business immediately. The most widely practiced strategies include Innovation, Collaboration, Process Improvement, and Sustainability Reporting.
Innovation & Technology: This method focuses on a company's ability to change its products and services towards better environmental impacts, for example less waste production.
Collaboration: The formation of networks with similar or partner companies facilitates knowledge sharing and propels innovation.
Process Improvement: Continuous process surveying and improvement are essential to reducing negative impacts. Employee awareness of a company-wide sustainability plan further aids the integration of new and improved processes.
Sustainability Reporting: Periodic reporting of company performance in relation to goals encourages performance monitoring internally and transparency and accountability externally. The goals might then be incorporated into the corporate mission.
Greening the Supply Chain: Sustainable procurement is important for any sustainability strategy, as a company's environmental impact extends well beyond its own operations to the products and services it procures. The B Corporation certification model is a good example of one that encourages companies to focus on this.
Choosing the Right Leaders: Having CEOs who are informed about the opportunities of sustainability guides companies in taking the right steps towards being eco-friendly. As the world slowly transitions to sustainability, it is important for company leaders to prioritize it and act with a sense of urgency.
Companies should adopt a sound measurement and management system to collect data on their sustainability impacts and dependencies, as well as a regular forum for all stakeholders to discuss sustainability issues. The Sustainability Balanced Scorecard is a performance measurement and management system aimed at balancing financial and non-financial as well as short- and long-term measures. It explicitly integrates strategically relevant environmental, social, and ethical goals into the overall performance management system and supports strategic sustainability management.
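As a rough, hypothetical sketch of how a sustainability balanced scorecard can sit alongside ordinary performance data, the example below groups financial and non-financial indicators by perspective and compares actuals to targets. The perspectives, indicator names, and figures are invented for illustration and are not drawn from any published scorecard.

```python
# Hypothetical sketch of a sustainability balanced scorecard: each perspective
# mixes financial and non-financial indicators, and each indicator pairs a
# target with an actual value so progress can be reviewed side by side.
# All names and figures below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float
    actual: float

    def attainment(self) -> float:
        """Actual value as a fraction of the target (1.0 means target met)."""
        return self.actual / self.target if self.target else 0.0

scorecard = {
    "Financial": [Indicator("Operating margin (%)", 12.0, 10.5)],
    "Customer": [Indicator("Revenue share of eco-labelled products (%)", 30.0, 24.0)],
    "Internal processes": [Indicator("Production waste recycled (%)", 90.0, 86.0)],
    "Environment and society": [Indicator("Employees trained in sustainability (%)", 100.0, 72.0)],
}

for perspective, indicators in scorecard.items():
    for ind in indicators:
        print(f"{perspective:25s} {ind.name:45s} {ind.attainment():6.0%} of target")
```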
Noteworthy examples of sustainable business practices that are often part of corporate sustainability strategies can include: transitioning to renewable energy sources, implementing effective recycling programs, minimizing waste generation in industrial processes, developing eco-friendly product designs, prioritizing the adoption of sustainable packaging materials, fostering an ethical and responsible supply chain, partnering with charities, encouraging volunteerism, upholding equitable treatment of employees, and prioritizing their overall welfare, among numerous other initiatives.
Standards
Enormous economic and population growth worldwide in the second half of the twentieth century aggravated the factors that threaten health and the environment — including ozone depletion, climate change, resource depletion, fouling of natural resources, and extensive loss of biodiversity and habitat. In the past, the standard approaches to environmental problems generated by business and industry have been regulatory-driven "end-of-the-pipe" remediation efforts. In the 1990s, efforts by governments, NGOs, corporations, and investors began to grow to develop awareness and plans for voluntary standards and investment in sustainability by business.
One critical milestone was the establishment of the ISO 14000 standards, whose development came as a result of the Rio Summit on the Environment held in 1992. ISO 14001 is the cornerstone standard of the ISO 14000 series. It specifies a framework of control for an environmental management system against which an organization can be certified by a third party. The other ISO 14000 series standards are actually guidelines, many of them intended to help an organization achieve registration to ISO 14001. They include the following:
ISO 14004 provides guidance on the development and implementation of environmental management systems.
ISO 14010 provides general principles of environmental auditing (now superseded by ISO 19011)
ISO 14011 provides specific guidance on auditing an environmental management system (now superseded by ISO 19011)
ISO 14012 provides guidance on qualification criteria for environmental auditors and lead auditors (now superseded by ISO 19011)
ISO 14013/5 provides audit program review and assessment material.
ISO 14020+ covers labeling issues
ISO 14030+ provides guidance on performance targets and monitoring within an Environmental Management System
ISO 14040+ covers life cycle issues
There is now a wide range of sustainability accounting frameworks that organizations use to measure and disclose their sustainability impacts and dependencies. These have evolved since the 1990s to encompass metrics spanning a wide range of social, environmental, economic, and ethical issues.
Circular business models
While the initial focus of academic, industry, and policy activities was mainly on the development of re-X (recycling, remanufacturing, reuse, recovery, etc.), it soon became clear that technological capabilities increasingly exceed their actual implementation. For the transition towards a circular economy, different stakeholders have to work together. This shifted attention towards business model innovation as a key lever for 'circular' technology adoption.
Circular business models are business models that are closing, narrowing, slowing, intensifying, and dematerializing loops, to minimize the resource inputs into and the waste and emission leakage out of the organizational system. This comprises recycling measures (closing), efficiency improvements (narrowing), use phase extensions (slowing or extending), a more intense use phase (intensifying), and the substitution of product utility by service and software solutions (dematerializing).
Certification
Challenges and opportunities
Implementing sustainable business practices may have an effect on profits and a firm's financial 'bottom line'. However, at a time when environmental awareness is popular, green strategies are likely to be embraced by employees, consumers, and other stakeholders. Many organizations concerned about the environmental impact of their business are taking initiatives to invest in sustainable business practices. In fact, a positive correlation has been reported between environmental performance and economic performance. Businesses trying to implement sustainable practices need insight into balancing the social equity, economic prosperity, and environmental quality elements.
If an organization's current business model is inherently unsustainable, becoming truly sustainable requires a complete makeover of the business model (e.g. from selling cars to offering car sharing and other mobility services). This can present a major challenge due to the differences between the old and the new model and the respective skills, resources and infrastructure needed. A new business model can also offer major opportunities by entering or even creating new markets and reaching new customer groups. The main challenges to the implementation of sustainable business practices by businesses in developing countries include a lack of skilled personnel, technological challenges, socio-economic challenges, organizational challenges, and the lack of a proper policy framework. Skilled personnel play a crucial role in quality management, enhanced compliance with international quality standards, and the preventive and operational maintenance attitude necessary to ensure a sustainable business. In the absence of a skilled workforce, companies fail to implement a sustainable business model.
Another major challenge to the effective implementation of sustainable business is organizational. Organizational challenges to the implementation of sustainable business activities arise from the difficulties associated with the planning, implementation, and evaluation of sustainable business models. Addressing the organizational challenges of implementing sustainable business practices needs to begin by analyzing the whole value chain of the business rather than focusing solely on the company's internal operations. Another major challenge is the lack of an appropriate policy framework for sustainable business. Companies often comply with only the lowest economic, social, and environmental sustainability standards, when in fact true sustainability can be achieved only when a business looks beyond compliance and acts with an integrated strategy and purpose.
Companies leading the way in sustainable business practices can take advantage of sustainable revenue opportunities: according to the Department for Business, Innovation and Skills, the UK green economy was expected to grow by 4.9 to 5.5 percent a year by 2015, and the average internal rate of return on energy efficiency investments for large businesses is 48%. A 2013 survey suggests that demand for green products appears to be increasing: 27% of respondents said they are more likely to buy a sustainable product and/or service than they were five years earlier. Furthermore, sustainable business practices may attract talent and generate tax breaks.
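The 48% internal rate of return quoted above is a headline figure; the sketch below is a minimal, hypothetical illustration of how an internal rate of return is computed for an energy-efficiency investment. The cash flows and the bisection search are illustrative assumptions, not figures or methods taken from the cited sources.

```python
# Minimal sketch of how an internal rate of return (IRR) is computed: the IRR is
# the discount rate at which the net present value (NPV) of a cash-flow series is
# zero, found here with a simple bisection search. The cash flows are hypothetical
# figures for an energy-efficiency investment, not data from the cited sources.
from __future__ import annotations

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = -0.99, hi: float = 10.0) -> float:
    """Bisect on the discount rate until the NPV sign change is bracketed tightly."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0.0:
            hi = mid  # root lies between lo and mid
        else:
            lo = mid  # root lies between mid and hi
    return (lo + hi) / 2.0

if __name__ == "__main__":
    # Assumed project: 100 (thousand) upfront cost, then five years of 45/year savings
    flows = [-100.0, 45.0, 45.0, 45.0, 45.0, 45.0]
    print(f"Internal rate of return: {irr(flows):.1%}")
```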
See also
References
External links
Sustainable Business Ideas For Eco Conscious Entrepreneurs
David O'Brien Centre for Sustainable Enterprise, Concordia University, Montreal
Erb Institute for Global Sustainable Enterprise at the University of Michigan
Center for Sustainable Global Enterprise at Cornell University
Natural Resources Defense Council
Sustainable Business Models - On the New Economy
Magazine MN | Sustainable Business and Eco-innovations
Sustainability-focused consumer business reviews
Sustainability
Sustainable development
Typology

Typology is the study of various traits and types, or the systematic classification of the types of something according to their common characteristics. Typology is the act of finding, counting and classifying facts with the help of eyes, other senses and logic. Typology may refer to:
Typology (anthropology), human anatomical categorization based on morphological traits
Typology (archaeology), classification of artefacts according to their characteristics
Typology (linguistics), study and classification of languages according to their structural features
Morphological typology, a method of classifying languages
Typology (psychology), a model of personality types
Psychological typologies, classifications used by psychologists to describe the distinctions between people
Typology (statistics), a concept in statistics, research design and social sciences
Typology (theology), the Christian interpretation of some figures and events in the Old Testament as foreshadowing the New Testament
Typology (urban planning and architecture), the classification of characteristics common to buildings or urban spaces
Building typology, relating to buildings and architecture
Farm typology, farm classification by the USDA
Sociopolitical typology, four types, or levels, of a political organization
See also
The Bechers' photographic typologies
Blanchard's transsexualism typology, a controversial classification of trans women
Johnson's Typology, a classification of intimate partner violence (IPV)
Topology (disambiguation)
Type (disambiguation)
Typification, a process of creating standard (typical) social construction based on standard assumptions
Typology of Greek vase shapes, classification of Greek vases
Typography, the art and technique of arranging type to make written language legible, readable and appealing when displayed
Ideanomics

Ideanomics, Inc. is a global electric vehicle company focused on driving the adoption of electric commercial vehicles and associated sustainable energy consumption. It is made up of five subsidiaries: VIA Motors, Solectrac, Treeletrik, Wave, and US Hybrid.
The company provides turn-key vehicle, finance, leasing, and energy management services for commercial fleet operators. Its Ideanomics Mobility division has a strong 'Made in America' theme and boasts a market-validated, revenue-producing deployment of technologies and vehicles for high-growth commercial fleet segments such as last-mile and local delivery, wireless charging, hydrogen fuel cells, and agritech.
The company's vehicle division, Ideanomics Mobility, is headed by Robin Mackie, who serves as its chairman. The company is headquartered in New York City, United States.
History
The company was founded in 2004 by Shane McMahon, who currently also serves as its chairman. Alf Poor is the incumbent chief executive officer, having served since February 2019.
In October 2018, the company purchased a building on the former UConn campus in West Hartford for $5.2 million to move its operations to.
In June 2021, the company was included in the broad-market Russell 3000 Index. In the same month, it also acquired US Hybrid, a zero-emission vehicle manufacturer.
In August 2021, it acquired VIA Motors for US$630 million. It had previously acquired US Hybrid, a manufacturer of electric powertrain components and fuel cell engines; Solectrac, one of two electric tractor manufacturers in the United States; and Wave, a wireless charging company.
In March 2022, it acquired Italian electric motorcycle manufacturer Energica.
In July 2024, it was delisted from Nasdaq for failure to meet the minimum bid price and market value of publicly held shares required by Nasdaq Listing Rules 5550(a)(2) and 5550(b)(2).
In August 2024, WiTricity filed a patent infringement lawsuit against IDEX/WAVE.
Clients and major sales
In February 2021, AVTA (Antelope Valley Transit Authority) bought 19 electric vans from Ideanomics.
In December 2021, subsidiary US Hybrid received a $5.5 million order from Global Environmental Products (GEP) to electrify street sweepers.
References
External links
American companies established in 2004
2004 establishments in New York City
Electric vehicle manufacturers of the United States
Electric utility vehicles
Electric tractors
Companies listed on the Nasdaq
Empiricism

In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences.
Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience.
Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.
Empiricism, often used by natural scientists, holds that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method.
Etymology
The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived.
Background
A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry.
Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method".
History
Early empiricism
Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty.
The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism.
The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle.
Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses").
This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon."
Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE)
During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing into an elaborate theory by Avicenna (c. 980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur.
In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.
A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society.
During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind.
Renaissance Italy
In the late renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings."
Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Thomas Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi.
The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c. 1520 – 1591), father of Galileo and the inventor of monody, made use of the method in successfully solving musical problems, firstly, of tuning such as the relationship of pitch to string tension and mass in stringed instruments, and to volume of air in wind instruments; and secondly to composition, by his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. It is known that he was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Vincenzo, through his tuning research, found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed), and through this and other discoveries that demonstrated the fallibility of traditional authorities, a radically empirical attitude developed, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry.
British empiricism
British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views.
In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously attributed with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written.
There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes.
A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism.
Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote, for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition."
Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations.
Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past.
Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt.
Phenomenalism
Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences.
Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation".
Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D. W. Hamlyn:
Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered.
In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction.
The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation.
There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man).
Logical empiricism
Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach.
The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect, language that would be free of the ambiguities and deformations of natural language, which in their view gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle". Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.
In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists.
Pragmatism
In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking.
Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view.
Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception.
Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.
John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in the physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
See also
Endnotes
References
Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD.
Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, UK, 1936.
Aristotle, Posterior Analytics.
Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari
Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory", Sorites, no. 9, pp. 16–31.
Chisholm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–17.
Dewey, John (1906), Studies in Logical Theory.
Encyclopædia Britannica, "Empiricism", vol. 4, p. 480.
Hume, D., A Treatise of Human Nature, L.A. Selby-Bigge (ed.), Oxford University Press, London, UK, 1975.
Hume, David. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. Gutenberg press full-text
James, William (1911), The Meaning of Truth.
Keeton, Morris T. (1962), "Empiricism", pp. 89–90 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Leftow, Brian (ed., 2006), Aquinas: Summa Theologiae, Questions on God, pp. vii et seq.
Macmillan Encyclopedia of Philosophy (1969), "Development of Aristotle's Thought", vol. 1, pp. 153ff.
Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.
Macmillan Encyclopedia of Philosophy (1969), "Mathematics, Foundations of", vol. 5, pp. 188–89.
Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, pp. 192ff.
Macmillan Encyclopedia of Philosophy (1969), "Epistemological Discussion", subsections on "A Priori Knowledge" and "Axioms".
Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131.
Macmillan Encyclopedia of Philosophy (1969), "Thomas Aquinas", subsection on "Theory of Knowledge", vol. 8, pp. 106–07.
Marconi, Diego (2004), "Fenomenismo", in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy.
Markie, P. (2004), "Rationalism vs. Empiricism" in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Maxwell, Nicholas (1998), The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford.
Mill, J.S., "An Examination of Sir William Hamilton's Philosophy", in A.J. Ayer and Raymond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN.
Peirce, C.S., "Lectures on Pragmatism", Cambridge, Massachusetts, March 26 – May 17, 1903. Reprinted in part, Collected Papers, CP 5.14–212. Published in full with editor's introduction and commentary, Patricia Ann Turisi (ed.), Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard "Lectures on Pragmatism", State University of New York Press, Albany, NY, 1997. Reprinted, pp. 133–241, Peirce Edition Project (eds.), The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Indiana University Press, Bloomington, IN, 1998.
Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD.
Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, Massachusetts.
Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, Massachusetts.
Runes, D.D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
Solomon, Robert C., and Higgins, Kathleen M. (1996), A Short History of Philosophy, pp. 68–74.
Sorabji, Richard (1972), Aristotle on Memory.
Thornton, Stephen (1987), Berkeley's Theory of Reality, Eprint
Vanzo, Alberto (2014), "From Empirics to Empiricists", Intellectual History Review, 2014, Eprint available online.
Ward, Teddy (n.d.), "Empiricism", Eprint.
Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
External links
Empiricist Man
Outdoor recreation | Outdoor recreation or outdoor activity refers to recreation done outside, most commonly in natural settings. The activities that encompass outdoor recreation vary depending on the physical environment they are being carried out in. These activities can include fishing, hunting, backpacking, walking and horseback riding — and can be completed individually or collectively. Outdoor recreation is a broad concept that encompasses a varying range of activities and landscapes.
Outdoor recreation is typically pursued for purposes of physical exercise, general wellbeing, and spiritual renewal. While a wide variety of outdoor recreational activities can be classified as sports, they do not all demand that a participant be an athlete; participation, rather than rivalry, is at the fore in outdoor recreation, which does not necessarily involve the degree of competitiveness embodied in sporting matches or championships. Competition is generally less emphasized than in organized individual or team sports.
When the activity involves exceptional excitement, physical challenge, or risk, it is sometimes referred to as "adventure recreation" or "adventure training", rather than an extreme sport.
Other traditional examples of outdoor recreational activities include hiking, camping, mountaineering, cycling, dog walking, canoeing, caving, kayaking, rafting, rock climbing, running, sailing, skiing, sky diving and surfing. As new pursuits, often hybrids of prior ones, emerge, they gain their own identities, such as coasteering, canyoning, fastpacking, and plogging.
In many cities, recreational areas for various outdoor activities are created for the population. These include natural parks, parks, playgrounds, sports facilities but also areas with free sea access such as the beach area of Venice Beach in California, the Promenade des Anglais in Nice or the waterfront of Barcola in Trieste.
Purpose
Outdoor recreation involves any kind of activity within an outdoor environment. Outdoor recreation can include established sports, and individuals can participate without association with teams, competitions or clubs. Activities include backpacking, canoeing, canyoning, caving, climbing, hiking, hill walking, hunting, kayaking, and rafting. Broader groupings include water sports, snow sports, and horseback riding.
People engage in physical activity outdoors as a form of recreation. Various physical activities can be completed individually or communally. Sports that are mainly played indoors, or in formal settings such as fields, can also be moved to an outdoor setting for recreational and non-competitive purposes. Outdoor physical activities can help people learn new skills, test stamina and endurance, and participate in social activities.
Outdoor activities are also frequently used as a setting for education and team building.
List of activities
Abseiling
Adrenaline junkie
Adventure park
Adventure travel
Airsoft
All-terrain vehicle riding
Amusement park
Angling
Archery
Aviation
Backpacking
BASE jumping
Benchmarking (geolocating)
Birdwatching
Bungee jumping
Bushcraft
Camping
Canoeing
Canyoning
Caving
Clam digging
Cliff jumping
Coasteering
Cold-weather biking
Corn maze
Cross-country skiing
Cycling
Dog park
Driving
Extreme sport
Fitness trail
Fly fishing
Freerunning
Gardening
Geocaching
Gliding
Grilling
Hang gliding
Hiking
Horseback riding
Hot air ballooning
Hunting
Historical reenactment
Ice climbing
Ice fishing
Ice skating
Jetskiing
Kayaking
Kicksledding
Letterboxing
Metal detecting
Mountain biking
Mountain climbing
Mountaineering
Mushroom hunting
Nordic walking
Off-roading
Overlanding
Orienteering
Outdoor fitness
Outdoor gym
Paragliding
Parasailing
Parkour
Outdoor party
Photography
Picnic
Plogging
Paramotoring
Rafting
Rappelling
Rock climbing
Running
Safari park
Safari
Sandboarding
Scuba diving
Seatrekking
Sightseeing
Skateboarding
Skiing
Slacklining
Sport fishing
Skydiving
Shooting
Skyrunning
Sledding
Snorkeling
Snowboarding
Snowmobiling
Snowshoeing
Standup paddleboarding
Sunbathing
Surfing
Swimming
Tourism
Tree climbing
Trekking
Urban exploration
Water sports
Waterskiing
Windsurfing
Wingsuit flying
Winter swimming
Zip line
Examples
Trekking
Trekking can be understood as an extended walk and involves day hikes, overnight or extended hikes. An example of a day trek is hiking during the day and returning at night to a lodge for a hot meal and a comfortable bed. Physical preparation for trekking includes cycling, swimming, jogging and long walks. Trekking requires experience with basic survival skills, first aid, and orienteering when going for extended hikes or staying out overnight.
Mountain biking
The activity of mountain biking involves steering a mountain cycle over rocky tracks and around boulder-strewn paths. Mountain bikes or ATBs (all-terrain bikes) feature a rugged frame and fork. Their frames are often built of aluminum so they are lightweight and stiff, making them efficient to ride.
Many styles of mountain biking are practiced, including all mountain, downhill, trials, dirt jumping, trail riding, and cross country. The latter two are the most common.
Balance, core strength, and endurance are all physical traits that are required to go mountain biking. Riders also need bike-handling skills and the ability to make basic repairs to their bikes. More advanced mountain biking involves technical descents such as downhilling and freeriding.
Canyoning
Canyoning is an activity which involves climbing, descending, jumping and trekking through canyons. The sport originates from caving and involves both caving and climbing techniques. Canyoning often includes descents that involve rope work, down-climbing, or jumps that are technical in nature. Canyoning is frequently done in remote and rugged settings and often requires navigational, route-finding and other wilderness skills.
Education
Outdoor education in the United States
Education is also a popular focus of outdoor activity. University outdoor recreation programs are becoming more popular in the United States. Studies have shown that such programs can benefit students' well-being and reduce stress by calming and soothing the mind. Universities in the United States often offer indoor rock climbing walls, equipment rental, ropes courses and trip programming. A few universities offer degrees in adventure recreation, which aim to teach graduates how to run businesses in the field of adventure recreation.
Outdoor education in the United Kingdom
In the UK, the House of Commons Education and Skills Committee supports outdoor education. The committee encourages fieldwork projects since they help develop ‘soft’ skills and social skills, particularly in hard-to-reach children. These activities can also take place on school trips, on visits in the local community or even on the school grounds.
Outdoor enthusiast
Outdoor enthusiast and outdoorsy are terms for a person who enjoys outdoor recreation. The terms outdoorsman, sportsman, woodsman, or bushman have also been used to describe someone with an affinity for the outdoors.
Some famous outdoor enthusiasts include U.S. president Teddy Roosevelt, Robert Baden-Powell, Ernest Hemingway, Ray Mears, Bear Grylls, Doug Peacock, Richard Wiese, Kenneth "Speedy" Raulerson, Earl Shaffer, Jo Gjende, Saxton Pope, Randy Stoltmann, Christopher Camuto, Eva Shockey, Jim Shockey, Henry Pittock, Eddie Bauer, Gaylord DuBois, Euell Gibbons, Clay Perry, Arthur Hasketh Groom, Les Hiddins, Bill Jordan, and Corey Ford. Some pioneering female outdoor enthusiasts include Mary Seacole, Isabella Bird, Emma Rowena Gatewood, Claire Marie Hodges, Mina Benson Hubbard, Beryl Markham, Freya Stark, Margaret Murie, Celia Hunter, Rachel Carson, Terry Tempest Williams, Marjory Stoneman Douglas, Ruth Dyar Mendenhall, and Arlene Blum.
Sparsely populated areas with mountains, lakes, rivers, scenic views, and rugged terrain are popular with outdoor enthusiasts. In the United States, state parks and national parks offer campgrounds and opportunities for such recreation. In the UK, all of rural Scotland and all those areas of England and Wales designated as "right to roam" areas are available for outdoor enthusiasts on foot. Some areas are also open to mountain bikers and to horse riders.
Outdoor recreation and cuisine
Culinary techniques and foods popular with outdoor enthusiasts include Dutch ovens, grilling, cooking over "open fires" (often with rock fire rings), fish fries, granola, and trail mix (sometimes referred to as GORP for "good old raisins and peanuts").
International and National Outdoor Recreation Days
Nationally and internationally, a number of days have been designated for the outdoors.
Canadian Rivers Day
Clean Up Australia Day
National Cleanup Day
National Public Lands Day
National Trails Day
World Oceans Day
Global Running Day
See also
Adventure travel
Hazards of outdoor recreation
Health effects of sunlight exposure
Notes
References | 0.768596 | 0.99437 | 0.764269 |
Minimum viable population (MVP) is a lower bound on the population of a species, such that it can survive in the wild. This term is commonly used in the fields of biology, ecology, and conservation biology. MVP refers to the smallest possible size at which a biological population can exist without facing extinction from natural disasters or demographic, environmental, or genetic stochasticity. The term "population" is defined as a group of interbreeding individuals in a similar geographic area that undergo negligible gene flow with other groups of the species. Typically, MVP is used to refer to a wild population, but it can also be used for ex situ conservation (zoo populations).
Estimation
There is no unique definition of what constitutes a sufficient population for the continuation of a species, because whether a species survives will depend to some extent on random events. Thus, any calculation of a minimum viable population (MVP) will depend on the population projection model used. A set of random (stochastic) projections might be used to estimate the initial population size needed (based on the assumptions in the model) for there to be (for example) a 95% or 99% probability of survival 1,000 years into the future. Some models use generations as a unit of time rather than years in order to maintain consistency between taxa. These projections (population viability analyses, or PVA) use computer simulations to model populations using demographic and environmental information to project future population dynamics. The probability assigned to a PVA is arrived at after repeating the environmental simulation thousands of times.
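To make the estimation procedure concrete, the following is a minimal sketch in Python of the kind of Monte Carlo projection a population viability analysis might run. All parameters (mean growth rate, environmental variance, carrying capacity, quasi-extinction threshold, horizon, and number of runs) are illustrative assumptions rather than values for any real species, and actual PVA software models demography in far more detail.

```python
import random

def simulate_population(n0, years=1000, mean_growth=1.02, env_sd=0.15,
                        carrying_capacity=5000, quasi_extinction=2):
    """Project one trajectory with environmental and demographic noise;
    return True if the population survives the whole horizon."""
    n = n0
    for _ in range(years):
        # Environmental stochasticity: good and bad years shift the growth rate.
        growth = random.gauss(mean_growth, env_sd)
        expected = max(n * growth, 0.0)
        # Demographic stochasticity: random variation around the expected size,
        # which is proportionally larger when the population is small.
        n = min(int(round(random.gauss(expected, expected ** 0.5))), carrying_capacity)
        if n < quasi_extinction:
            return False
    return True

def survival_probability(n0, runs=1000):
    """Fraction of repeated projections in which the population persists."""
    return sum(simulate_population(n0) for _ in range(runs)) / runs

# Scan starting sizes for the smallest one giving roughly 95% survival.
for start in (25, 50, 100, 250, 500, 1000):
    print(f"N0 = {start:4d}: P(survive 1,000 years) = {survival_probability(start):.2f}")
```

Under these toy assumptions the survival probability rises with the starting size, and the smallest size clearing the chosen threshold (here 95%) would be read off as the MVP estimate.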
Extinction
Small populations are at a greater risk of extinction than larger populations due to small populations having less capacity to recover from adverse stochastic (i.e. random) events. Such events may be divided into four sources:
Demographic stochasticity
Demographic stochasticity is often only a driving force toward extinction in populations with fewer than 50 individuals. Random events influence the fecundity and survival of individuals in a population, and in larger populations, these events tend to stabilize toward a steady growth rate. However, in small populations there is much more relative variance, which can in turn cause extinction.
Environmental stochasticity
Small, random changes in the abiotic and biotic components of the ecosystem that a population inhabits fall under environmental stochasticity. Examples are changes in climate over time and the arrival of another species that competes for resources. Unlike demographic and genetic stochasticity, environmental stochasticity tends to affect populations of all sizes.
Natural catastrophes
An extension of environmental stochasticity, natural disasters are random, large scale events such as blizzards, droughts, storms, or fires that directly reduce a population within a short period of time. Natural catastrophes are the hardest events to predict, and MVP models often have difficulty factoring them in.
Genetic stochasticity
Small populations are vulnerable to genetic stochasticity, the random change in allele frequencies over time, also known as genetic drift. Genetic drift can cause alleles to disappear from a population, and this lowers genetic diversity. In small populations, low genetic diversity can increase rates of inbreeding, which can result in inbreeding depression, in which a population made up of genetically similar individuals loses fitness. Inbreeding in a population reduces fitness by causing deleterious recessive alleles to become more common in the population, and also by reducing adaptive potential. The so-called "50/500 rule", where a population needs 50 individuals to prevent inbreeding depression, and 500 individuals to guard against genetic drift at-large, is an oft-used benchmark for an MVP, but a recent study suggests that this guideline is not applicable across a wide diversity of taxa.
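The effect of genetic drift on diversity can be illustrated in the same spirit. The sketch below is a hedged example assuming a simple Wright–Fisher model of a single locus with two alleles; it shows how expected heterozygosity erodes much faster in small populations. The population sizes, generation count, and number of replicates are arbitrary illustrative choices.

```python
import numpy as np

def drift_heterozygosity(pop_size, generations=100, p0=0.5, rng=None):
    """Wright-Fisher drift at one biallelic locus: each generation the allele
    count among the 2N gene copies is a binomial draw at the old frequency."""
    rng = rng if rng is not None else np.random.default_rng()
    copies = 2 * pop_size
    p = p0
    for _ in range(generations):
        p = rng.binomial(copies, p) / copies
    return 2.0 * p * (1.0 - p)  # expected heterozygosity, 2p(1-p)

rng = np.random.default_rng(seed=0)
for n in (25, 250, 2500):
    mean_h = np.mean([drift_heterozygosity(n, rng=rng) for _ in range(500)])
    # Theory predicts H_t = H_0 * (1 - 1/(2N))**t, so small N loses diversity fastest.
    print(f"N = {n:4d}: mean heterozygosity after 100 generations = {mean_h:.3f}")
```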
Application
MVP does not take external intervention into account. Thus, it is useful for conservation managers and environmentalists; a population may be increased above the MVP using a captive breeding program or by bringing other members of the species in from other reserves.
There is naturally some debate on the accuracy of PVAs, since a wide variety of assumptions are generally required for forecasting; however, the important consideration is not absolute accuracy but the promulgation of the concept that each species indeed has an MVP, which at least can be approximated for the sake of conservation biology and Biodiversity Action Plans.
There is a marked trend for insularity, surviving genetic bottlenecks, and r-strategy to allow far lower MVPs than average. Conversely, taxa easily affected by inbreeding depression – having high MVPs – are often decidedly K-strategists, with low population densities occurring over a wide range. An MVP of 500 to 1,000 has often been given as an average for terrestrial vertebrates when inbreeding or genetic variability is ignored. When inbreeding effects are included, estimates of MVP for many species are in the thousands. Based on a meta-analysis of reported values in the literature for many species, Traill et al. reported concerning vertebrates "a cross-species frequency distribution of MVP with a median of 4169 individuals (95% CI = 3577–5129)."
See also
Effective population size
Inbreeding depression
Human population
Metapopulation
Rescue effect
References
Ecological metrics
Biostatistics
Environmental terminology
Habitat | 0.770895 | 0.99139 | 0.764257 |
Micronutrient | Micronutrients are essential dietary elements required by organisms in varying quantities to regulate physiological functions of cells and organs. Micronutrients support the health of organisms throughout life.
In varying amounts supplied through the diet, micronutrients include such compounds as vitamins and dietary minerals. For human nutrition, micronutrient requirements are in amounts generally less than 100 milligrams per day, whereas macronutrients are required in gram quantities daily. A multiple micronutrient powder of at least iron, zinc, and vitamin A was added to the World Health Organization's List of Essential Medicines in 2019. Deficiencies in micronutrient intake commonly result in malnutrition.
Inadequate micronutrient intake
Inadequate intake of essential nutrients predisposes humans to various chronic diseases, with some 50% of American adults having one or more preventable diseases. In the United States, foods poor in micronutrient content and high in food energy make up some 27% of daily calorie intake. One US national survey (National Health and Nutrition Examination Survey 2003–2006) found that persons with high sugar intake consumed fewer micronutrients, especially vitamins A, C, and E, and magnesium.
A 1994 report by the World Bank estimated that micronutrient malnutrition costs developing economies at least 5 percent of gross domestic product. The Asian Development Bank has summarized the benefits of eliminating micronutrient deficiencies as follows:
Along with a growing understanding of the extent and impact of micronutrient malnutrition, several interventions have demonstrated the feasibility and benefits of correction and prevention. Distributing inexpensive capsules, diversifying to include more micronutrient-rich foods, or fortifying commonly consumed foods can make an enormous difference. Correcting iodine, vitamin A, and iron deficiencies can improve the population-wide intelligence quotient by 10–15 points, reduce maternal deaths by one-fourth, decrease infant and child mortality by 40 percent, and increase people's work capacity by almost half. The elimination of these deficiencies will reduce health care and education costs, improve work capacity and productivity, and accelerate equitable economic growth and national development. Improved nutrition is essential to sustain economic growth. Micronutrient deficiency elimination is as cost-effective as the best public health interventions and fortification is the most cost-effective strategy.
Salt iodization
Salt iodization is a major strategy for addressing iodine deficiency, which is a major cause of mental health problems. In 1990, less than 20 percent of households in developing countries were consuming iodized salt. By 1994, international partnerships had formed in a global campaign for Universal Salt Iodization. By 2008, it was estimated that 72 percent of households in developing countries were consuming iodized salt, and the number of countries in which iodine deficiency disorders were a public health concern reduced by more than half from 110 to 47 countries.
Vitamin A supplementation
Vitamin A deficiency is a major factor in causing blindness worldwide, particularly among children. Global vitamin A supplementation efforts have targeted 103 priority countries. In 1999, 16 percent of children in these countries received two annual doses of vitamin A. By 2007, the rate increased to 62 percent.
Fortification of staple foods with vitamin A has uncertain benefits on reducing the risk of subclinical vitamin A deficiency.
Zinc
Fortification of staple foods may improve serum zinc levels in the population. Other effects, such as on zinc deficiency, children's growth, cognition, adults' work capacity, or blood indicators, are unknown. Experiments show that soil and foliar application of zinc fertilizer can effectively reduce the phytate-to-zinc ratio in grain. People who eat bread prepared from zinc-enriched wheat show a significant increase in serum zinc, suggesting that the zinc fertilizer strategy is a promising approach to addressing zinc deficiency in humans.
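The phytate-to-zinc ratio mentioned above is normally expressed as a molar ratio, since phytate binds zinc in the gut and reduces how much of it can be absorbed. The snippet below is an illustrative calculation only: the molar masses are standard values, but the grain composition figures are hypothetical examples rather than measurements, and any interpretive cut-off should be checked against the nutrition literature.

```python
PHYTATE_MOLAR_MASS = 660.04  # g/mol, phytic acid (C6H18O24P6)
ZINC_MOLAR_MASS = 65.38      # g/mol

def phytate_zinc_molar_ratio(phytate_mg, zinc_mg):
    """Molar ratio of phytate to zinc in the same portion of food; higher
    ratios indicate that less of the zinc is likely to be absorbed."""
    return (phytate_mg / PHYTATE_MOLAR_MASS) / (zinc_mg / ZINC_MOLAR_MASS)

# Hypothetical per-100 g figures for flour milled from unfertilized wheat
# versus wheat grown with zinc fertilizer (illustrative numbers only).
print(phytate_zinc_molar_ratio(phytate_mg=800, zinc_mg=2.0))  # higher ratio
print(phytate_zinc_molar_ratio(phytate_mg=800, zinc_mg=3.5))  # lower ratio
```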
Plants
Plants tend not to use vitamins, although minerals are required.
Some seven trace elements are essential to plant growth, though they are needed only in very small quantities.
Boron is believed to be involved in carbohydrate transport in plants; it also assists in metabolic regulation. Boron deficiency will often result in bud dieback.
Chloride is necessary for osmosis and ionic balance; it also plays a role in photosynthesis.
Copper, iron, manganese, molybdenum, and zinc are cofactors essential for the functioning of many enzymes. For plants, deficiency in these elements often results in inefficient production of chlorophyll, manifested in chlorosis.
See also
List of micronutrients
Human nutrition
Macronutrient (ecology)
Dietary mineral (redirects to Mineral (nutrient))
Silicon § Human nutrition
Manganese deficiency (medicine)
References
External links
Micronutrient Information Center, Oregon State University
Dialectics of Nature | Dialectics of Nature is an unfinished 1883 work by Friedrich Engels that applies Marxist ideas – particularly those of dialectical materialism – to nature.
History and contents
Engels wrote most of the manuscript between 1872 and 1882; it is a mélange of notes in German, French and English on the contemporary development of science and technology, and it was not published in his lifetime. Eduard Bernstein later passed the manuscripts to Albert Einstein, who thought the science confused (particularly the mathematics and physics) but the overall work worthy of a broader readership. In 1925, the Marx–Engels–Lenin Institute in Moscow published the manuscripts in a bilingual German/Russian edition.
The biologist J. B. S. Haldane wrote a preface for the work in 1939, "Hence it is often hard to follow if one does not know the history of the scientific practice of that time. The idea of what is now called the conservation of energy was beginning to permeate physics, astronomy, chemistry, geoscience, and biology, but it was still very incompletely realised, and still more incompletely applied. Words such as 'force', 'motion', and 'vis viva' were used where we should now speak of energy".
Some then controversial topics of Engels' day, pertaining to incomplete or faulty theories, are now settled, making some of Engels' essays dated. "Their interest lies not so much in their detailed criticism of theories, but in showing how Engels grappled with intellectual problems".
One "law" proposed in the Dialectics of Nature is the "law of the transformation of quantity into quality and vice versa". Probably the most commonly cited example of this is the change of water from a liquid to a gas, by increasing its temperature (although Engels also describes other examples from chemistry). In contemporary science, this process is known as a phase transition. There has also been an effort to apply this mechanism to social phenomena, whereby increases in population result in changes in social structure.
Dialectics and its study was derived from the philosopher and author of Science of Logic, G. W. F. Hegel, who, in turn, had studied the Greek philosopher Heraclitus. Heraclitus taught that everything was constantly changing and that all things consisted of two opposite elements which changed into each other as night changes into day, light into darkness, life into death etc.
Engels's work develops from the comments he had made about science in Anti-Dühring. It includes the famous "The Part Played by Labour in the Transition from Ape to Man", which has also been published separately as a pamphlet. Engels argues that the hand and brain grew together, an idea supported by later fossil discoveries (see Australopithecus afarensis).
Most of the work is fragmentary and in the form of rough notes, as in the section entitled "Biology".
See also
Natural philosophy
Naturphilosophie
Dialectical materialism
Notes and references
External links
Full text on-line.
Dialectics of Nature, PDF of edition published by Progress Publishers.
Whataboutism | Whataboutism or whataboutery (as in "what about ...?") is a pejorative for the strategy of responding to an accusation with a counter-accusation instead of a defense against the original accusation.
From a logical and argumentative point of view, whataboutism is considered a variant of the tu quoque pattern (Latin for 'you too', a term for a counter-accusation), which is a subtype of the ad hominem argument.
The communicative intent is often to distract from the content of a topic (red herring). The goal may also be to question the justification for criticism and the legitimacy, integrity, and fairness of the critic, which can take on the character of discrediting the criticism, which may or may not be justified. Common accusations include double standards and hypocrisy, but the device can also be used to relativize criticism of one's own viewpoints or behaviors. (A: "Long-term unemployment often means poverty in Germany." B: "And what about the starving in Africa and Asia?") Related manipulation and propaganda techniques that likewise evade the topic rhetorically include changing the subject and false balance (bothsidesism).
Some commentators have defended the usage of whataboutism and tu quoque in certain contexts. Whataboutism can provide necessary context into whether or not a particular line of critique is relevant or fair, and behavior that may be imperfect by international standards may be appropriate in a given geopolitical neighborhood. Accusing an interlocutor of whataboutism can also itself be manipulative and serve to discredit, since critical talking points can be chosen selectively and purposefully as the starting point of the conversation (cf. agenda setting, framing, framing effect, priming, cherry picking), and deviation from them can then be branded as whataboutism. Both whataboutism and the accusation of it are forms of strategic framing and have a framing effect.
Etymology
The term whataboutism is a compound of what and about, is synonymous with whataboutery, and means to twist criticism back on the initial critic.
Origins
According to lexicographer Ben Zimmer, the term originated in Northern Ireland in the 1970s. Zimmer cites a 1974 letter by history teacher Sean O'Conaill which was published in The Irish Times where he complained about "the Whatabouts", people who defended the IRA by pointing out supposed wrongdoings of their enemy:
Three days later, an opinion column by John Healy in the same paper entitled "Enter the cultural British Army" picked up the theme by using the term whataboutery: "As a correspondent noted in a recent letter to this paper, we are very big on Whatabout Morality, matching one historic injustice with another justified injustice. We have a bellyfull [sic] of Whataboutery in these killing days and the one clear fact to emerge is that people, Orange and Green, are dying as a result of it." Zimmer says the term gained wide currency in commentary about the conflict between unionists and nationalists in Northern Ireland. Zimmer also notes that the variant whataboutism was used in the same context in a 1993 book by Tony Parker.
In 1978, Australian journalist Michael Bernard wrote a column in The Age applying the term whataboutism to the Soviet Union's tactics of deflecting any criticism of its human rights abuses. Merriam-Webster details that "the association of whataboutism with the Soviet Union began during the Cold War. As the regimes of [Joseph] Stalin and his successors were criticized by the West for human rights atrocities, the Soviet propaganda machine would be ready with a comeback alleging atrocities of equal reprehensibility for which the West was guilty."
Zimmer credits British journalist Edward Lucas for beginning regular common use of the word whataboutism in the modern era following its appearance in a blog post on 29 October 2007, reporting as part of a diary about Russia which was re-printed in the 2 November issue of The Economist. On 31 January 2008 The Economist printed another article by Lucas titled "Whataboutism". Ivan Tsvetkov, associate professor of International Relations in St Petersburg also credits Lucas for modern uses of the term.
Analysis
Psychological motivations
The philosopher Merold Westphal said that only people who know themselves to be guilty of something "can find comfort in finding others to be just as bad or worse." Whataboutery, as practiced by both parties in The Troubles in Northern Ireland to highlight what the other side had done to them, was "one of the commonest forms of evasion of personal moral responsibility," according to Bishop (later Cardinal) Cahal Daly. After a political shooting at a baseball game in 2017, journalist Chuck Todd criticized the tenor of political debate, commenting, "What-about-ism is among the worst instincts of partisans on both sides."
Intentionally discrediting oneself
Whataboutism usually points the finger at a rival's offenses to discredit them, but, in a reversal of this usual direction, it can also be used to discredit oneself while one refuses to critique an ally. During the 2016 U.S. presidential campaign, when The New York Times asked candidate Donald Trump about Turkish President Recep Tayyip Erdoğan's treatment of journalists, teachers, and dissidents, Trump replied with a criticism of U.S. history on civil liberties. Writing for The Diplomat, Catherine Putz pointed out: "The core problem is that this rhetorical device precludes discussion of issues (e.g. civil rights) by one country (e.g. the United States) if that state lacks a perfect record." Masha Gessen wrote for The New York Times that usage of the tactic by Trump was shocking to Americans, commenting, "No American politician in living memory has advanced the idea that the entire world, including the United States, was rotten to the core."
Concerns about effects
Joe Austin was critical of the practice of whataboutism in Northern Ireland in a 1994 piece, The Obdurate and the Obstinate, writing: "And I'd no time at all for 'What aboutism' ... if you got into it you were defending the indefensible." In 2017, The New Yorker described the tactic as "a strategy of false moral equivalences", and Clarence Page called the technique "a form of logical jiu-jitsu". Writing for National Review, commentator Ben Shapiro criticized the practice, whether it was used by those espousing right-wing or left-wing politics; Shapiro concluded: "It's all dumb. And it's making us all dumber." Michael J. Koplow of Israel Policy Forum wrote that the usage of whataboutism had become a crisis; concluding that the tactic did not yield any benefits, Koplow charged that "whataboutism from either the right or the left only leads to a black hole of angry recriminations from which nothing will escape".
Defense
Contextualization
Some commentators have defended the usage of whataboutism and tu quoque in certain contexts. Whataboutism can provide necessary context into whether or not a particular line of critique is relevant or fair. In international relations, behavior that may be imperfect by international standards may be quite good for a given geopolitical neighborhood and deserves to be recognized as such.
Distorted self-perception
Christian Christensen, Professor of Journalism in Stockholm, argues that the accusation of whataboutism is itself a form of the tu quoque fallacy, as it dismisses criticisms of one's own behavior to focus instead on the actions of another, thus creating a double standard. Those who use whataboutism are not necessarily engaging in an empty or cynical deflection of responsibility: whataboutism can be a useful tool to expose contradictions, double standards, and hypocrisy. For example, one's opponent's action appears as forbidden torture, one's own actions as "enhanced interrogation methods", the other's violence as aggression, one's own merely as a reaction. Christensen even sees utility in the use of the argument: "The so-called 'whataboutists' question what has not been questioned before and bring contradictions, double standards, and hypocrisy to light. This is not naïve justification or rationalization [...], it is a challenge to think critically about the (sometimes painful) truth of our position in the world."
Lack of sincerity
In his analysis of Whataboutism, logic professor Axel Barceló of the UNAM concludes that the counteraccusation often expresses a justified suspicion that the criticism does not correspond to the critic's real position and reasons.
Abe Greenwald pointed out that even the first accusation leading to the counteraccusation is an arbitrary setting, which can be just as one-sided and biased, or even more one-sided than the counter-question "what about?" Thus, whataboutism could also be enlightening and put the first accusation in perspective.
Idealization
In her analysis of whataboutism in the US presidential campaign, Catherine Putz noted in 2016 in The Diplomat that the core problem is that this rhetorical device precludes discussion of a country's contentious issues (e.g., civil rights in the case of the United States) if that country is not perfect in that area. It requires, by default, that a country be allowed to make a case to other countries only for those ideals in which it has itself achieved the highest level of perfection. The problem with ideals, she said, is that we rarely achieve them as human beings. But the ideals remain important, she said, and the United States should continue to advocate for them: "It is the message that is important, not the ambassador."
Protective mechanism
Gina Schad sees the characterization of counterarguments as "whataboutism" as a lack of communicative competence, insofar as discussions are cut off by this accusation. The accusation of others of whataboutism is also used as an ideological protective mechanism that leads to "closures and echo chambers". The reference to "whataboutism" is also perceived as a "discussion stopper" "to secure a certain hegemony of discourse and interpretation."
Deflection
A number of commentators, among them Forbes columnist Mark Adomanis, have criticized the usage of accusations of whataboutism by American news outlets, arguing that accusations of whataboutism have been used to simply deflect criticisms of human rights abuses perpetrated by the United States or its allies. Vincent Bevins and Alex Lo argue that the usage of the term almost exclusively by American outlets is a double standard, and that moral accusations made by powerful countries are merely a pretext to punish their geopolitical rivals in the face of their own wrongdoing.
Left-wing academics Kristen Ghodsee and Scott Sehon argue that mentioning the possible existence of victims of capitalism in popular discourse is often dismissed as "whataboutism", which they describe as "a term implying that only atrocities perpetrated by communists merit attention." They also argue that such accusations of "whataboutism" are invalid as the same arguments used against communism can also be used against capitalism.
Scholars Ivan Franceschini and Nicholas Loubere argue it is not whataboutism to document and denounce authoritarianism in different countries, and noted global parallels such as the role Islamophobia played in China's Xinjiang internment camps and the US's War on terror and travel bans targeting Muslim countries, as well as influence of corporations and other international actors in the documented abuses which is becoming more obscured. Franceschini and Loubere conclude that authoritarianism "must be opposed everywhere", and that "only by finding the critical parallels, linkages, and complicities can we develop immunity to the virus of whataboutism and avoid its essentialist hyperactive immune response, achieving the moral consistency and holistic perspective that we need in order to build up international solidarity and stop sleepwalking towards the abyss."
Whataboutism in proverbs and similes
Jesus' statement, "Let he who is without fault cast the first stone" (John 8:7), the similar parable of the beam in the eye (Matthew 7:3) and proverbs based on it such as "He who sits in a glass house should not throw stones" are sometimes compared to whataboutism. Nigel Warburton sees the difference in the fact that the point of view in the Bible and in such proverbs is different from that in politics. Jesus is in the right to remind the sinner of his own guilt, because he himself has no guilt and is on the side of good. Although a wrongdoer can sometimes be in the right by pointing out an actual shortcoming, this does not change the difference in principle. The whataboutery move seems to rest on the false assumption that wrongdoing is mitigated if others have done something similar, and the feeling that accusers need to be innocent of the crime of which they are accusing others. 'You think I'm doing something terrible, so look around you at all the others doing much the same as me. What is more, you don't have a credible position from which to attack me.' At best that is just self-serving rationalisation, but as a tactical move it can work.
Use in political contexts
Soviet Union and Russia
Although the term whataboutism spread recently, Edward Lucas's 2008 Economist article states that "Soviet propagandists during the cold war were trained in a tactic that their western interlocutors nicknamed 'whataboutism'. Any criticism of the Soviet Union (Afghanistan, martial law in Poland, imprisonment of dissidents, censorship) was met with a 'What about...' (apartheid South Africa, jailed trade-unionists, the Contras in Nicaragua, and so forth)." Lucas recommended two methods of properly countering whataboutism: to "use points made by Russian leaders themselves" so that they cannot be applied to the West, and for Western nations to engage in more self-criticism of their own media and government. In his book The New Cold War: Putin's Russia and the Threat to the West (2008), Edward Lucas characterized whataboutism as "the favourite weapon of Soviet propagandists".
Following the publication of Lucas's 2007 and 2008 articles and his book, opinion writers at prominent English language media outlets began using the term and echoing the themes laid out by Lucas, including the association with the Soviet Union and Russia. Journalist Luke Harding described Russian whataboutism as "practically a national ideology". Juhan Kivirähk and colleagues called it a "polittechnological" strategy.
Writing in The National Interest in 2013, Samuel Charap was critical of the tactic, commenting, "Russian policy makers, meanwhile, gain little from petulant bouts of 'whataboutism'." National security journalist Julia Ioffe commented in a 2014 article, "Anyone who has ever studied the Soviet Union knows about a phenomenon called 'whataboutism'." Ioffe said that Russia Today was "an institution that is dedicated solely to the task of whataboutism", and concluded that whataboutism was a "sacred Russian tactic". Garry Kasparov discussed the Soviet tactic in his 2015 book Winter Is Coming, calling it a form of "Soviet propaganda" and a way for Russian bureaucrats to "respond to criticism of Soviet massacres, forced deportations, and gulags". Mark Adomanis commented for The Moscow Times in 2015 that "Whataboutism was employed by the Communist Party with such frequency and shamelessness that a sort of pseudo mythology grew up around it." Adomanis observed, "Any student of Soviet history will recognize parts of the whataboutist canon."
Writing in 2016 for Bloomberg News, journalist Leonid Bershidsky called whataboutism a "Russian tradition", while The National called the tactic "an effective rhetorical weapon". In their book The European Union and Russia (2016), Forsberg and Haukkala characterized whataboutism as an "old Soviet practice", and they observed that the strategy "has been gaining in prominence in the Russian attempts at deflecting Western criticism". In her 2016 book, Security Threats and Public Perception, author Elizaveta Gaufman called the whataboutism technique "A Soviet/Russian spin on liberal anti-Americanism", comparing it to the Soviet rejoinder, "And you are lynching negroes". Foreign Policy supported this assessment. Daphne Skillen discussed the tactic in her 2016 book, Freedom of Speech in Russia, identifying it as a "Soviet propagandist's technique" and "a common Soviet-era defence".
In a piece for CNN, Jill Dougherty compared the technique to the pot calling the kettle black. Dougherty wrote: "There's another attitude ... that many Russians seem to share, what used to be called in the Soviet Union 'whataboutism', in other words, 'who are you to call the kettle black?'" Julia Ioffe called whataboutism a "sacred Russian tactic", and likewise compared it to the pot calling the kettle black.
Russian journalist Alexey Kovalev told GlobalPost in 2017 that the tactic was "an old Soviet trick". Peter Conradi, author of Who Lost Russia?, called whataboutism "a form of moral relativism that responds to criticism with the simple response: 'But you do it too'". Conradi echoed Gaufman's comparison of the tactic to the Soviet response, "Over there they lynch Negroes". Writing for Forbes in 2017, journalist Melik Kaylan explained the term's increased pervasiveness in referring to Russian propaganda tactics: "Kremlinologists of recent years call this 'whataboutism' because the Kremlin's various mouthpieces deployed the technique so exhaustively against the U.S." Kaylan commented upon a "suspicious similarity between Kremlin propaganda and Trump propaganda". Foreign Policy wrote that Russian whataboutism was "part of the national psyche". EurasiaNet stated that "Moscow's geopolitical whataboutism skills are unmatched", while Paste correlated whataboutism's rise with the increasing societal consumption of fake news.
Notable examples
Several articles connected whataboutism to the Soviet era by pointing to the "And you are lynching Negroes" example (as Lucas did) of the 1930s, in which the Soviets deflected any criticism by referencing racism in the segregated American South. The tactic was extensively used even after racial segregation in the South was outlawed in the 1950s and 1960s. Ioffe, who has written about whataboutism in at least three separate outlets, cited the Soviet response to criticism, "And you are lynching negroes", as a classic form of whataboutism.
The Soviet government engaged in a major cover-up of the Chernobyl nuclear disaster in 1986. When it finally acknowledged the disaster, although without any details, the Telegraph Agency of the Soviet Union (TASS) then discussed the Three Mile Island accident and other American nuclear accidents, which Serge Schmemann of The New York Times wrote was an example of the common Soviet tactic of whataboutism. The announcement's mention of a government commission also indicated to observers the seriousness of the incident, and subsequent state radio broadcasts were replaced with classical music, which was a common method of preparing the public for an announcement of a tragedy in the USSR.
In 2016, Canadian columnist Terry Glavin asserted in the Ottawa Citizen that Noam Chomsky used the tactic in an October 2001 speech, delivered after the September 11 attacks, that was critical of US foreign policy. In 2006, Putin replied to George W. Bush's criticism of Russia's human rights record by stating that he "did not want to head a democracy like Iraq's," referencing the US intervention in Iraq.
Some writers also identified examples in 2012 when Russian officials responded to critique by, for example, redirecting attention to the United Kingdom's anti-protest laws or Russians' difficulty obtaining a visa to the United Kingdom.
The term receives increased attention when controversies involving Russia are in the news. For example, writing for Slate in 2014, Joshua Keating noted the use of "whataboutism" in a statement on Russia's 2014 annexation of Crimea, where Putin "listed a litany of complaints about Western intervention."
In 2017, Ben Zimmer noted that Putin also used the tactic in an interview with NBC News journalist Megyn Kelly.
Russophobia allegation
The practice of labelling whataboutism as typically Russian or Soviet is sometimes rejected as russophobic. Glenn Diesen sees this usage as an attempt to delegitimize Russian politics. As early as 1985, Ronald Reagan had introduced the construct of "false ethical balance" to "denounce" any attempt at comparison between the US and other countries. Jeane Kirkpatrick, in her essay The Myth of Moral Equivalence (1986), saw the Soviet Union's whataboutism as an attempt to use moral reasoning to present itself as a legitimate superpower on an equal footing with the United States. In her view, the comparison was inadmissible in principle, since there was only one legitimate superpower, the United States, which stood not for power interests but for values. Glenn Diesen sees this as a framing of American politics aimed at defining the relationship of countries to each other analogously to a teacher-pupil relationship, with the United States in the role of the teacher. Kirkpatrick invoked Harold Lasswell's understanding of the enforcement of an ideological framework through political dominance to analyze the semantic manipulations of the Soviet Union. According to Lasswell, every country tries to impose its interpretive framework on others, even by means of revolution and war. For Kirkpatrick, however, these interpretive frameworks of different states are not equivalent.
China
A synonymous Chinese-language metaphor is the "stinky bug argument", coined by Lu Xun, a leading figure in modern Chinese literature, in 1933 to describe his Chinese colleagues' common tendency to accuse Europeans of "having equally bad issues" whenever foreigners commented upon China's domestic problems. As a Chinese nationalist, Lu saw this mentality as one of the biggest obstructions to the modernization of China in the early 20th century, which Lu frequently mocked in his literary works. In response to tweets from Donald Trump's administration criticizing the Chinese government's mistreatment of ethnic minorities and the pro-democracy protests in Hong Kong, Chinese Foreign Ministry officials began using Twitter to point out racial inequalities and social unrest in the United States which led Politico to accuse China of engaging in whataboutism.
Donald Trump
Writing for The Washington Post, former United States Ambassador to Russia, Michael McFaul wrote critically of Trump's use of the tactic and compared him to Putin. McFaul commented, "That's exactly the kind of argument that Russian propagandists have used for years to justify some of Putin's most brutal policies." Los Angeles Times contributor Matt Welch classed the tactic among "six categories of Trump apologetics". Mother Jones called the tactic "a traditional Russian propaganda strategy", and observed, "The whataboutism strategy has made a comeback and evolved in President Vladimir Putin's Russia."
In early 2017, amid coverage of interference in the 2016 election and the lead up to the Mueller Investigation into Donald Trump, several people, including Edward Lucas, wrote opinion pieces associating whataboutism with both Trump and Russia. "Instead of giving a reasoned defense [of his health care plan], he went for blunt offense, which is a hallmark of whataboutism", wrote Danielle Kurtzleben of NPR, adding that he "sounds an awful lot like Putin."
When, in a widely viewed television interview that aired before the Super Bowl in 2017, Fox News host Bill O'Reilly called Putin a "killer", Trump responded by saying that the US government was also guilty of killing people. He responded, "There are a lot of killers. We've got a lot of killers. What do you think — our country's so innocent?" This episode prompted commentators to accuse Trump of whataboutism, including Chuck Todd on the television show Meet the Press and political advisor Jake Sullivan.
Use by other states
Europe
The term "whataboutery" has been used by Loyalists and Republicans since the period of the Troubles in Northern Ireland.
Asia
The tactic was employed by Azerbaijan, which responded to criticism of its human rights record by holding parliamentary hearings on issues in the United States. Simultaneously, pro-Azerbaijan Internet trolls used whataboutism to draw attention away from criticism of the country.
The Turkish government engaged in whataboutism by publishing an official document listing criticisms of other governments that had criticized Turkey for its dramatic purge of state institutions and civil society in the wake of the failed coup attempt of July 2016.
The tactic was also employed by Saudi Arabia and Israel. In 2018, Israeli Prime Minister Benjamin Netanyahu said that "the [Israeli] occupation is nonsense, there are plenty of big countries that occupied and replaced populations and no one talks about them." In July 2022, during a conversation held as part of US President Joe Biden's state visit to Saudi Arabia, Biden raised the killing of Saudi journalist Jamal Khashoggi by agents of the Saudi government at the Saudi consulate in Istanbul on 2 October 2018; Crown Prince Mohammed bin Salman responded by raising the killing of Palestinian-American journalist Shireen Abu Akleh and the torture and abuse of Iraqi prisoners by US soldiers during the Iraq War.
Iran's foreign minister Mohammad Javad Zarif used the tactic at the Munich Security Conference on February 17, 2019. When pressed by the BBC's Lyse Doucet about eight environmentalists imprisoned in his country, he brought up the killing of Jamal Khashoggi. Doucet noted the deflection and said, "let's leave that aside."
The Indian prime minister Narendra Modi has been accused of using whataboutism, especially in regard to the 2015 Indian writers protest and the nomination of former Chief Justice Ranjan Gogoi to parliament.
External links
Whataboutism at Fallacy Check
Whataboutism at Merriam-Webster
See also
Ad hominem
Antanagoge
Character assassination
Clean hands
Discrediting tactic
Fallacy of relative privation
False equivalence
Genetic fallacy
Physician, heal thyself
Poisoning the well
Precedent
Psychological projection
Race card
Russian political jokes
Selection bias
Tankie
The Mote and the Beam
Two wrongs don't make a right
Victor's justice
Analogy
Cold War terminology
Hypocrisy
Propaganda in Russia
Propaganda in the Soviet Union
Soviet phraseology
Relevance fallacies
Articles containing video clips
Hydroponics
Hydroponics is a type of horticulture and a subset of hydroculture which involves growing plants, usually crops or medicinal plants, without soil, by using water-based mineral nutrient solutions in an artificial environment. Terrestrial or aquatic plants may grow freely with their roots exposed to the nutritious liquid or the roots may be mechanically supported by an inert medium such as perlite, gravel, or other substrates.
Despite inert media, roots can cause changes of the rhizosphere pH and root exudates can affect rhizosphere biology and physiological balance of the nutrient solution when secondary metabolites are produced in plants. Transgenic plants grown hydroponically allow the release of pharmaceutical proteins as part of the root exudate into the hydroponic medium.
The nutrients used in hydroponic systems can come from many different organic or inorganic sources, including fish excrement, duck manure, purchased chemical fertilizers, or artificial standard or hybrid nutrient solutions.
In contrast to field cultivation, plants are commonly grown hydroponically in a greenhouse or contained environment on inert media, adapted to the controlled-environment agriculture (CEA) process. Plants commonly grown hydroponically include tomatoes, peppers, cucumbers, strawberries, lettuces, and cannabis, usually for commercial use, as well as Arabidopsis thaliana, which serves as a model organism in plant science and genetics.
Hydroponics offers many advantages, notably a decrease in water usage in agriculture. Growing tomatoes hydroponically requires only a small fraction of the water needed with intensive field farming methods, and growing them aeroponically requires less still.
Compared to other growth substrates, hydroponic cultures lead to the highest biomass and protein production in plants cultivated under the same environmental conditions and supplied with equal amounts of nutrients.
Hydroponics is not only used on Earth, but has also proven itself in plant production experiments in space.
History
The earliest published work on growing terrestrial plants without soil was the 1627 book Sylva Sylvarum or 'A Natural History' by Francis Bacon, printed a year after his death. As a result of his work, water culture became a popular research technique. In 1699, John Woodward published his water culture experiments with spearmint. He found that plants in less-pure water sources grew better than plants in distilled water. By 1842, a list of nine elements believed to be essential for plant growth had been compiled, and the discoveries of German botanists Julius von Sachs and Wilhelm Knop, in the years 1859–1875, resulted in a development of the technique of soilless cultivation. To quote von Sachs directly: "In the year 1860, I published the results of experiments which demonstrated that land plants are capable of absorbing their nutritive matters out of watery solutions, without the aid of soil, and that it is possible in this way not only to maintain plants alive and growing for a long time, as had long been known, but also to bring about a vigorous increase of their organic substance, and even the production of seed capable of germination." Growth of terrestrial plants without soil in mineral nutrient solutions was later called "solution culture" in reference to "soil culture". It quickly became a standard research and teaching technique in the 19th and 20th centuries and is still widely used in plant nutrition science.
Around the 1930s, plant nutritionists investigating diseases of certain plants observed symptoms related to existing soil conditions such as salinity. In this context, water culture experiments were undertaken with the hope of reproducing similar symptoms under controlled laboratory conditions. This approach, pushed forward by Dennis Robert Hoagland, led to innovative model systems (e.g., the green alga Nitella) and standardized nutrient recipes that play an increasingly important role in modern plant physiology. In 1929, William Frederick Gericke of the University of California at Berkeley began publicly promoting that the principles of solution culture be used for agricultural crop production. He first termed this cultivation method "aquiculture", created in analogy to "agriculture", but later found that the cognate term aquaculture was already applied to the culture of aquatic organisms. Gericke created a sensation by growing tomato vines to remarkable heights in his backyard in mineral nutrient solutions rather than soil. He then introduced the term hydroponics, water culture, in 1937, proposed to him by W. A. Setchell, a phycologist with an extensive education in the classics. Hydroponics is a neologism derived from the Greek ύδωρ (water) and πονέω (cultivate), constructed in analogy to γεωπονικά (geoponica, from γαία, earth, and πονέω, cultivate), that which concerns agriculture, replacing γεω-, earth, with ὑδρο-, water.
Despite initial successes, however, Gericke realized that the time was not yet ripe for the general technical application and commercial use of hydroponics for producing crops. He also wanted to make sure all aspects of hydroponic cultivation were researched and tested before making any of the specifics available to the public. Reports of Gericke's work and his claims that hydroponics would revolutionize plant agriculture prompted a huge number of requests for further information. Gericke had been denied use of the university's greenhouses for his experiments due to the administration's skepticism, and when the university tried to compel him to release his preliminary nutrient recipes developed at home, he requested greenhouse space and time to improve them using appropriate research facilities. While he was eventually provided greenhouse space, the university assigned Hoagland and Arnon to re-evaluate Gericke's claims and show his formula held no benefit over soil grown plant yields, a view held by Hoagland. Because of these irreconcilable conflicts, Gericke left his academic position in 1937 in a climate that was politically unfavorable and continued his research independently in his greenhouse. In 1940, Gericke, whose work is considered to be the basis for all forms of hydroponic growing, published the book, Complete Guide to Soilless Gardening. Therein, for the first time, he published his basic formulas involving the macro- and micronutrient salts for hydroponically-grown plants.
As a result of research of Gericke's claims by order of the Director of the California Agricultural Experiment Station of the University of California, Claude Hutchison, Dennis Hoagland and Daniel Arnon wrote a classic 1938 agricultural bulletin, The Water Culture Method for Growing Plants Without Soil, one of the most important works on solution culture ever, which made the claim that hydroponic crop yields were no better than crop yields obtained with good-quality soils. Ultimately, crop yields would be limited by factors other than mineral nutrients, especially light and aeration of the culture medium. However, in the introduction to his landmark book on soilless cultivation, published two years later, Gericke pointed out that the results published by Hoagland and Arnon in comparing the yields of experimental plants in sand, soil and solution cultures, were based on several systemic errors ("...these experimenters have made the mistake of limiting the productive capacity of hydroponics to that of soil. Comparison can be only by growing as great a number of plants in each case as the fertility of the culture medium can support").
For example, the Hoagland and Arnon study did not adequately appreciate that hydroponics has other key benefits compared to soil culture including the fact that the roots of the plant have constant access to oxygen and that the plants have access to as much or as little water and nutrients as they need. This is important as one of the most common errors when cultivating plants is over- and underwatering; hydroponics prevents this from occurring as large amounts of water, which may drown root systems in soil, can be made available to the plant in hydroponics, and any water not used, is drained away, recirculated, or actively aerated, eliminating anoxic conditions in the root area. In soil, a grower needs to be very experienced to know exactly how much water to feed the plant. Too much and the plant will be unable to access oxygen because air in the soil pores is displaced, which can lead to root rot; too little and the plant will undergo water stress or lose the ability to absorb nutrients, which are typically moved into the roots while dissolved, leading to nutrient deficiency symptoms such as chlorosis or fertilizer burn. Eventually, Gericke's advanced ideas led to the implementation of hydroponics into commercial agriculture while Hoagland's views and helpful support by the University prompted Hoagland and his associates to develop several new formulas (recipes) for mineral nutrient solutions, universally known as Hoagland solution.
One of the earliest successes of hydroponics occurred on Wake Island, a rocky atoll in the Pacific Ocean used as a refueling stop for Pan American Airlines. Hydroponics was used there in the 1930s to grow vegetables for the passengers. Hydroponics was a necessity on Wake Island because there was no soil, and it was prohibitively expensive to airlift in fresh vegetables.
From 1943 to 1946, Daniel I. Arnon served as a major in the United States Army and used his prior expertise with plant nutrition to feed troops stationed on barren Ponape Island in the western Pacific by growing crops in gravel and nutrient-rich water because there was no arable land available.
In the 1960s, Allen Cooper of England developed the nutrient film technique. The Land Pavilion at Walt Disney World's EPCOT Center opened in 1982 and prominently features a variety of hydroponic techniques.
In recent decades, NASA has done extensive hydroponic research for its Controlled Ecological Life Support System (CELSS). Hydroponics research mimicking a Martian environment uses LED lighting to grow in a different color spectrum with much less heat. Ray Wheeler, a plant physiologist at Kennedy Space Center's Space Life Science Lab, believes that hydroponics will create advances within space travel, as a bioregenerative life support system.
As of 2017, Canada had hundreds of acres of large-scale commercial hydroponic greenhouses, producing tomatoes, peppers and cucumbers.
Due to technological advancements within the industry and numerous economic factors, the global hydroponics market is forecast to grow from US$226.45 million in 2016 to US$724.87 million by 2023.
Techniques
There are two main variations for each medium: sub-irrigation and top irrigation. For all techniques, most hydroponic reservoirs are now built of plastic, but other materials have been used, including concrete, glass, metal, vegetable solids, and wood. The containers should exclude light to prevent algae and fungal growth in the hydroponic medium.
Static solution culture
In static solution culture, plants are grown in containers of nutrient solution, such as glass Mason jars (typically, in-home applications), pots, buckets, tubs, or tanks. The solution is usually gently aerated but may be un-aerated. If un-aerated, the solution level is kept low enough that enough roots are above the solution so they get adequate oxygen. A hole is cut (or drilled) in the top of the reservoir for each plant; if it is a jar or tub, it may be its lid, but otherwise, cardboard, foil, paper, wood or metal may be put on top. A single reservoir can be dedicated to a single plant, or to various plants. Reservoir size can be increased as plant size increases. A home-made system can be constructed from food containers or glass canning jars with aeration provided by an aquarium pump, aquarium airline tubing, aquarium valves or even a biofilm of green algae on the glass, through photosynthesis. Clear containers can also be covered with aluminium foil, butcher paper, black plastic, or other material to eliminate the effects of negative phototropism. The nutrient solution is changed either on a schedule, such as once per week, or when the concentration drops below a certain level as determined with an electrical conductivity meter. Whenever the solution is depleted below a certain level, either water or fresh nutrient solution is added. A Mariotte's bottle, or a float valve, can be used to automatically maintain the solution level. In raft solution culture, plants are placed in a sheet of buoyant plastic that is floated on the surface of the nutrient solution. That way, the solution level never drops below the roots.
Continuous-flow solution culture
In continuous-flow solution culture, the nutrient solution constantly flows past the roots. It is much easier to automate than the static solution culture because sampling and adjustments to the temperature, pH, and nutrient concentrations can be made in a large storage tank that has potential to serve thousands of plants. A popular variation is the nutrient film technique or NFT, whereby a very shallow stream of water containing all the dissolved nutrients required for plant growth is recirculated in a thin layer past a bare root mat of plants in a watertight channel, with an upper surface exposed to air. As a consequence, an abundant supply of oxygen is provided to the roots of the plants. A properly designed NFT system is based on using the right channel slope, the right flow rate, and the right channel length. The main advantage of the NFT system over other forms of hydroponics is that the plant roots are exposed to adequate supplies of water, oxygen, and nutrients. In all other forms of production, there is a conflict between the supply of these requirements, since excessive or deficient amounts of one results in an imbalance of one or both of the others. NFT, because of its design, provides a system where all three requirements for healthy plant growth can be met at the same time, provided that the simple concept of NFT is always remembered and practised. The result of these advantages is that higher yields of high-quality produce are obtained over an extended period of cropping. A downside of NFT is that it has very little buffering against interruptions in the flow (e.g., power outages). But, overall, it is probably one of the more productive techniques.
The same design characteristics apply to all conventional NFT systems. While slopes along channels of 1:100 have been recommended, in practice it is difficult to build a base for channels that is sufficiently true to enable nutrient films to flow without ponding in locally depressed areas. As a consequence, it is recommended that slopes of 1:30 to 1:40 are used. This allows for minor irregularities in the surface, but, even with these slopes, ponding and water logging may occur. The slope may be provided by the floor, or benches or racks may hold the channels and provide the required slope. Both methods are used; the choice depends on local requirements, often determined by the site and the crop.
As a general guide, flow rates for each gully should be about one liter per minute. At planting, rates may be half this, and the upper limit appears to be about 2 L/min. Flow rates beyond these extremes are often associated with nutritional problems. Depressed growth rates of many crops have been observed when channels exceed 12 meters in length. On rapidly growing crops, tests have indicated that, while oxygen levels remain adequate, nitrogen may be depleted over the length of the gully. As a consequence, channel length should not exceed 10–15 meters. In situations where this is not possible, the reductions in growth can be eliminated by placing another nutrient feed halfway along the gully and halving the flow rate through each outlet.
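These rules of thumb lend themselves to a quick design check. The following is a minimal sketch, assuming the figures quoted above (a 1:30 to 1:40 slope, roughly 0.5–2 L/min per gully, and channels no longer than about 10–15 m); the function name and warning wording are illustrative and not part of any standard horticultural tool.

```python
def check_nft_design(slope_denominator: float, flow_l_per_min: float, channel_length_m: float) -> list[str]:
    """Flag NFT channel parameters that fall outside the rules of thumb quoted above.

    slope_denominator: the N in a 1:N slope (e.g. 35 for a 1:35 slope).
    """
    warnings = []
    if not 30 <= slope_denominator <= 40:
        warnings.append("Slope outside the recommended 1:30 to 1:40 range.")
    if not 0.5 <= flow_l_per_min <= 2.0:
        warnings.append("Flow rate outside the ~0.5-2 L/min per gully guideline.")
    if channel_length_m > 15:
        warnings.append("Channel longer than 10-15 m; consider a second feed halfway along.")
    return warnings

# Example: a 1:25 slope with a 20 m channel triggers two warnings.
print(check_nft_design(slope_denominator=25, flow_l_per_min=1.0, channel_length_m=20))
```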
Aeroponics
Aeroponics is a system wherein roots are continuously or discontinuously kept in an environment saturated with fine drops (a mist or aerosol) of nutrient solution. The method requires no substrate and entails growing plants with their roots suspended in a deep air or growth chamber with the roots periodically wetted with a fine mist of atomized nutrients. Excellent aeration is the main advantage of aeroponics.
Aeroponic techniques have proven to be commercially successful for propagation, seed germination, seed potato production, tomato production, leaf crops, and micro-greens. Since inventor Richard Stoner commercialized aeroponic technology in 1983, aeroponics has been implemented as an alternative to water-intensive hydroponic systems worldwide. A major limitation of hydroponics is that water can hold only a small amount of dissolved air, no matter whether aerators are utilized or not.
Another distinct advantage of aeroponics over hydroponics is that any species of plant can be grown in a true aeroponic system because the microenvironment of an aeroponic can be finely controlled. Another limitation of hydroponics is that certain species of plants can only survive for so long in water before they become waterlogged. In contrast, suspended aeroponic plants receive 100% of the available oxygen and carbon dioxide to their root zone, stems, and leaves, thus accelerating biomass growth and reducing rooting times. NASA research has shown that aeroponically grown plants have an 80% increase in dry weight biomass (essential minerals) compared to hydroponically grown plants. Aeroponics also uses 65% less water than hydroponics. NASA concluded that aeroponically grown plants require ¼ the nutrient input compared to hydroponics. Unlike hydroponically grown plants, aeroponically grown plants will not suffer transplant shock when transplanted to soil, and the technique offers growers the ability to reduce the spread of disease and pathogens.
Aeroponics is also widely used in laboratory studies of plant physiology and plant pathology. Aeroponic techniques have been given special attention from NASA since a mist is easier to handle than a liquid in a zero-gravity environment.
Fogponics
Fogponics is a derivation of aeroponics wherein the nutrient solution is aerosolized by a diaphragm vibrating at ultrasonic frequencies. Solution droplets produced by this method tend to be 5–10 μm in diameter, smaller than those produced by forcing a nutrient solution through pressurized nozzles, as in aeroponics. The smaller size of the droplets allows them to diffuse through the air more easily, and deliver nutrients to the roots without limiting their access to oxygen.
Passive sub-irrigation
Passive sub-irrigation, also known as passive hydroponics, semi-hydroponics, or hydroculture, is a method wherein plants are grown in an inert porous medium that moves water and fertilizer to the roots by capillary action from a separate reservoir as necessary, reducing labor and providing a constant supply of water to the roots. In the simplest method, the pot sits in a shallow solution of fertilizer and water or on a capillary mat saturated with nutrient solution. The various hydroponic media available, such as expanded clay and coconut husk, contain more air space than more traditional potting mixes, delivering increased oxygen to the roots, which is important in epiphytic plants such as orchids and bromeliads, whose roots are exposed to the air in nature. An additional advantage of passive hydroponics is the reduction of root rot.
Ebb and flow (flood and drain) sub-irrigation
In its simplest form, nutrient-enriched water is pumped into containers with plants in a growing medium such as expanded clay aggregate. At regular intervals, a simple timer causes a pump to fill the containers with nutrient solution, after which the solution drains back down into the reservoir. This keeps the medium regularly flushed with nutrients and air.
Run-to-waste
In a run-to-waste system, nutrient and water solution is periodically applied to the medium surface. The method was invented in Bengal in 1946; for this reason it is sometimes referred to as "The Bengal System".
This method can be set up in various configurations. In its simplest form, a nutrient-and-water solution is manually applied one or more times per day to a container of inert growing media, such as rockwool, perlite, vermiculite, coco fibre, or sand. In a slightly more complex system, it is automated with a delivery pump, a timer and irrigation tubing to deliver nutrient solution with a delivery frequency that is governed by the key parameters of plant size, plant growing stage, climate, substrate, and substrate conductivity, pH, and water content.
In a commercial setting, watering frequency is multi-factorial and governed by computers or PLCs.
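As a rough sketch of how such a controller might combine these parameters, the example below triggers a short irrigation pulse whenever substrate water content drops below a growth-stage-dependent set point. The set points, sensor, and pump interfaces are hypothetical placeholders for illustration, not an actual PLC program or vendor API.

```python
import time

# Hypothetical set points: target minimum substrate water content by growth stage.
SET_POINTS = {"seedling": 0.70, "vegetative": 0.60, "flowering": 0.55}

def read_water_content() -> float:
    """Placeholder for a substrate moisture sensor reading (0.0 to 1.0)."""
    return 0.58

def pulse_pump(seconds: int) -> None:
    """Placeholder for switching the delivery pump on for a short burst."""
    print(f"Pump on for {seconds} s")

def irrigation_loop(stage: str, poll_seconds: int = 300) -> None:
    """Poll the substrate every few minutes and irrigate when it dries out."""
    threshold = SET_POINTS[stage]
    while True:
        if read_water_content() < threshold:
            pulse_pump(30)        # short run-to-waste pulse of nutrient solution
        time.sleep(poll_seconds)
```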
Commercial hydroponics production of large plants like tomatoes, cucumber, and peppers uses one form or another of run-to-waste hydroponics.
Deep water culture
Deep water culture is a hydroponic method of plant production in which the plant roots are suspended in a solution of nutrient-rich, oxygenated water. Traditional methods favor the use of plastic buckets and large containers with the plant contained in a net pot suspended from the centre of the lid and the roots suspended in the nutrient solution.
The solution is oxygen saturated by an air pump combined with porous stones. With this method, the plants grow much faster because of the high amount of oxygen that the roots receive. The Kratky Method is similar to deep water culture, but uses a non-circulating water reservoir.
Top-fed deep water culture
Top-fed deep water culture is a technique involving delivering highly oxygenated nutrient solution direct to the root zone of plants. While deep water culture involves the plant roots hanging down into a reservoir of nutrient solution, in top-fed deep water culture the solution is pumped from the reservoir up to the roots (top feeding). The water is released over the plant's roots and then runs back into the reservoir below in a constantly recirculating system. As with deep water culture, there is an airstone in the reservoir that pumps air into the water via a hose from outside the reservoir. The airstone helps add oxygen to the water. Both the airstone and the water pump run 24 hours a day.
The biggest advantage of top-fed deep water culture over standard deep water culture is increased growth during the first few weeks. With deep water culture, there is a time when the roots have not reached the water yet. With top-fed deep water culture, the roots get easy access to water from the beginning and will grow to the reservoir below much more quickly than with a deep water culture system. Once the roots have reached the reservoir below, there is not a huge advantage with top-fed deep water culture over standard deep water culture. However, due to the quicker growth in the beginning, grow time can be reduced by a few weeks.
Rotary
A rotary hydroponic garden is a style of commercial hydroponics created within a circular frame which rotates continuously during the entire growth cycle of whatever plant is being grown.
While system specifics vary, systems typically rotate once per hour, giving a plant 24 full turns within the circle each 24-hour period. Within the center of each rotary hydroponic garden can be a high intensity grow light, designed to simulate sunlight, often with the assistance of a mechanized timer.
Each day, as the plants rotate, they are periodically watered with a hydroponic growth solution to provide all nutrients necessary for robust growth. Due to the plants' continuous fight against gravity, plants typically mature much more quickly than when grown in soil or other traditional hydroponic growing systems. Because rotary hydroponic systems have a small footprint, they allow more plant material to be grown per area of floor space than other traditional hydroponic systems.
Rotary hydroponic systems should be avoided in most circumstances, mainly because of their experimental nature and their high costs for finding, buying, operating, and maintaining them.
Substrates (growing support materials)
Different media are appropriate for different growing techniques.
Rock wool
Rock wool (mineral wool) is the most widely used medium in hydroponics. Rock wool is an inert substrate suitable for both run-to-waste and recirculating systems. Rock wool is made from molten rock, basalt or 'slag' that is spun into bundles of single-filament fibres and bonded into a medium capable of capillary action; it is, in effect, protected from most common microbiological degradation. Rock wool is typically used only for the seedling stage, or with newly cut clones, but can remain with the plant base for its lifetime. Rock wool has many advantages and some disadvantages, the main disadvantage being possible mechanical skin irritancy whilst handling (1:1000). Flushing with cold water usually brings relief. Advantages include its proven efficiency and effectiveness as a commercial hydroponic substrate. Most of the rock wool sold to date is a non-hazardous, non-carcinogenic material, falling under Note Q of the European Union Classification Packaging and Labeling Regulation (CLP).
Mineral wool products can be engineered to hold large quantities of water and air that aid root growth and nutrient uptake in hydroponics; their fibrous nature also provides a good mechanical structure to hold the plant stable. The naturally high pH of mineral wool makes them initially unsuitable to plant growth and requires "conditioning" to produce a wool with an appropriate, stable pH.
Expanded clay aggregate
Baked clay pellets are suitable for hydroponic systems in which all nutrients are carefully controlled in water solution. The clay pellets are inert, pH-neutral, and do not contain any nutrient value.
The clay is formed into round pellets and fired in rotary kilns at high temperatures. This causes the clay to expand, like popcorn, and become porous. It is light in weight, and does not compact over time. The shape of an individual pellet can be irregular or uniform depending on brand and manufacturing process. The manufacturers consider expanded clay to be an ecologically sustainable and re-usable growing medium because of its ability to be cleaned and sterilized, typically by washing in solutions of white vinegar, chlorine bleach, or hydrogen peroxide, and rinsing completely.
Another view is that clay pebbles are best not re-used even when they are cleaned, due to root growth that may enter the medium. Breaking open a clay pebble after use can reveal this growth.
Growstones
Growstones, made from glass waste, have both more air and water retention space than perlite and peat. This aggregate holds more water than parboiled rice hulls. Growstones by volume consist of 0.5 to 5% calcium carbonate – for a standard 5.1 kg bag of Growstones that corresponds to 25.8 to 258 grams of calcium carbonate. The remainder is soda-lime glass.
Coconut Coir
Coconut coir, also known as coir peat, is a natural byproduct derived from coconut processing. The outer husk of a coconut consists of fibers which are commonly used to make a myriad of items ranging from floor mats to brushes. After the long fibers are used for those applications, the dust and short fibers are merged to create coir. Coconuts absorb high levels of nutrients throughout their life cycle, so the coir must undergo a maturation process before it becomes a viable growth medium. This process removes salt, tannins and phenolic compounds through substantial water washing. Contaminated water is a byproduct of this process, as three hundred to six hundred liters of water per one cubic meter of coir are needed. Additionally, this maturation can take up to six months and one study concluded the working conditions during the maturation process are dangerous and would be illegal in North America and Europe. Despite requiring attention, posing health risks and environmental impacts, coconut coir has impressive material properties. When exposed to water, the brown, dry, chunky and fibrous material expands nearly three or four times its original size. This characteristic combined with coconut coir's water retention capacity and resistance to pests and diseases make it an effective growth medium. Used as an alternative to rock wool, coconut coir offers optimized growing conditions.
Rice husks
Parboiled rice husks (PBH) are an agricultural byproduct that would otherwise have little use. They decay over time, allow drainage, and retain even less water than growstones. A study showed that rice husks did not interfere with the effects of plant growth regulators.
Perlite
Perlite is a volcanic rock that has been superheated into very lightweight expanded glass pebbles. It is used loose or in plastic sleeves immersed in the water. It is also used in potting soil mixes to decrease soil density. Perlite does contain a high amount of fluorine, which could be harmful to some plants. It has similar properties and uses to vermiculite but, in general, holds more air and less water and is buoyant.
Vermiculite
Like perlite, vermiculite is a mineral that has been superheated until it has expanded into light pebbles. Vermiculite holds more water than perlite and has a natural "wicking" property that can draw water and nutrients in a passive hydroponic system. If too much water and not enough air surround the plants' roots, it is possible to gradually lower the medium's water-retention capability by mixing in increasing quantities of perlite.
Pumice
Like perlite, pumice is a lightweight, mined volcanic rock that finds application in hydroponics.
Sand
Sand is cheap and easily available. However, it is heavy, does not hold water very well, and it must be sterilized between uses.
Gravel
Gravel of the same type that is used in aquariums can be used, as can any small gravel, provided it is washed first. Indeed, plants growing in a typical traditional gravel filter bed, with water circulated using electric powerhead pumps, are in effect being grown using gravel hydroponics, also termed "nutriculture". Gravel is inexpensive, easy to keep clean, drains well and will not become waterlogged. However, it is also heavy, and, if the system does not provide continuous water, the plant roots may dry out.
Wood fiber
Wood fibre, produced from steam friction of wood, is an efficient organic substrate for hydroponics. It has the advantage that it keeps its structure for a very long time. Wood wool (i.e. wood slivers) has been used since the earliest days of hydroponics research. However, more recent research suggests that wood fibre may have detrimental effects on plant growth regulators.
Sheep wool
Wool from shearing sheep is a little-used yet promising renewable growing medium. In a study comparing wool with peat slabs, coconut fibre slabs, perlite and rockwool slabs to grow cucumber plants, sheep wool had a greater air capacity of 70%, which decreased with use to a comparable 43%, and water capacity that increased from 23% to 44% with use. Using sheep wool resulted in the greatest yield out of the tested substrates, while application of a biostimulator consisting of humic acid, lactic acid and Bacillus subtilis improved yields in all substrates.
Brick shards
Brick shards have similar properties to gravel. They have the added disadvantages of possibly altering the pH and requiring extra cleaning before reuse.
Polystyrene packing peanuts
Polystyrene packing peanuts are inexpensive, readily available, and have excellent drainage. However, they can be too lightweight for some uses. They are used mainly in closed-tube systems. Note that non-biodegradable polystyrene peanuts must be used; biodegradable packing peanuts will decompose into a sludge. Plants may absorb styrene and pass it to their consumers; this is a possible health risk.
Nutrient solutions
Inorganic hydroponic solutions
The formulation of hydroponic solutions is an application of plant nutrition, with nutrient deficiency symptoms mirroring those found in traditional soil based agriculture. However, the underlying chemistry of hydroponic solutions can differ from soil chemistry in many significant ways. Important differences include:
Unlike soil, hydroponic nutrient solutions do not have cation-exchange capacity (CEC) from clay particles or organic matter. The absence of CEC and soil pores means the pH, oxygen saturation, and nutrient concentrations can change much more rapidly in hydroponic setups than is possible in soil.
Selective absorption of nutrients by plants often imbalances the amount of counterions in solution. This imbalance can rapidly affect solution pH and the ability of plants to absorb nutrients of similar ionic charge (see article membrane potential). For instance, nitrate anions are often consumed rapidly by plants to form proteins, leaving an excess of cations in solution. This cation imbalance can lead to deficiency symptoms in other cation based nutrients (e.g. Mg2+) even when an ideal quantity of those nutrients are dissolved in the solution.
Depending on the pH or on the presence of water contaminants, nutrients such as iron can precipitate from the solution and become unavailable to plants. Routine adjustments to pH, buffering of the solution, or the use of chelating agents are often necessary.
Unlike soil types, which can vary greatly in their composition, hydroponic solutions are often standardized and require routine maintenance for plant cultivation. Under controlled laboratory conditions hydroponic solutions are periodically pH adjusted to near neutral (pH 6.0) and are aerated with oxygen. Also, water levels must be refilled to account for transpiration losses and nutrient solutions require re-fortification to correct the nutrient imbalances that occur as plants grow and deplete nutrient reserves. Sometimes the regular measurement of nitrate ions is used as a key parameter to estimate the remaining proportions and concentrations of other essential nutrient ions to restore a balanced solution.
Well-known examples of standardized, balanced nutrient solutions are the Hoagland solution, the Long Ashton nutrient solution, or the Knop solution.
As in conventional agriculture, nutrients should be adjusted to satisfy Liebig's law of the minimum for each specific plant variety. Nevertheless, generally acceptable concentrations for nutrient solutions exist, with minimum and maximum concentration ranges for most plants being somewhat similar. Most nutrient solutions are mixed to have concentrations between 1,000 and 2,500 ppm. Acceptable concentrations for the individual nutrient ions, which comprise that total ppm figure, are summarized in the following table. For essential nutrients, concentrations below these ranges often lead to nutrient deficiencies while exceeding these ranges can lead to nutrient toxicity. Optimum nutrition concentrations for plant varieties are found empirically by experience or by plant tissue tests.
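A minimal sketch of this bookkeeping is shown below. The ranges are placeholder values, not the table's actual figures, and the point is simply that any ion below its minimum is flagged as deficient, any ion above its maximum as potentially toxic, and the largest shortfall is treated as the limiting factor in the spirit of Liebig's law.

```python
# Placeholder acceptable ranges in ppm (mg/L); substitute values appropriate to the crop.
RANGES = {"N": (100, 250), "P": (30, 100), "K": (100, 300), "Ca": (80, 200), "Mg": (30, 70)}

def diagnose(measured_ppm: dict[str, float]) -> dict:
    """Compare measured ion concentrations against acceptable ranges."""
    deficient, excessive = {}, {}
    for ion, value in measured_ppm.items():
        low, high = RANGES[ion]
        if value < low:
            deficient[ion] = low - value      # shortfall below the minimum
        elif value > high:
            excessive[ion] = value - high     # excess above the maximum
    limiting = max(deficient, key=deficient.get) if deficient else None
    return {"deficient": deficient, "excessive": excessive, "limiting": limiting}

print(diagnose({"N": 80, "P": 45, "K": 150, "Ca": 90, "Mg": 25}))
```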
Organic hydroponic solutions
Organic fertilizers can be used to supplement or entirely replace the inorganic compounds used in conventional hydroponic solutions. However, using organic fertilizers introduces a number of challenges that are not easily resolved. Examples include:
organic fertilizers are highly variable in their nutritional compositions in terms of minerals and different organic and inorganic species. Even similar materials can differ significantly based on their source (e.g. the quality of manure varies based on an animal's diet).
organic fertilizers are often sourced from animal byproducts, making disease transmission a serious concern for plants grown for human consumption or animal forage.
organic fertilizers are often particulate and can clog substrates or other growing equipment. Sieving or milling the organic materials to fine dusts is often necessary.
biochemical degradation and conversion processes of organic materials can make their mineral ingredients available to plants.
some organic materials (particularly manures and offal) can further degrade to emit foul odors under anaerobic conditions.
many organic molecules (e.g. sugars) demand additional oxygen during aerobic degradation, competing for the oxygen that is essential for cellular respiration in the plant roots.
organic compounds (e.g. sugars, vitamins, among others) are not necessary for normal plant nutrition.
Nevertheless, if precautions are taken, organic fertilizers can be used successfully in hydroponics.
Organically sourced macronutrients
Examples of suitable materials, with their average nutritional contents tabulated in terms of percent dried mass, are listed in the following table.
Organically sourced micronutrients
Micronutrients can be sourced from organic fertilizers as well. For example, composted pine bark is high in manganese and is sometimes used to fulfill that mineral requirement in hydroponic solutions. To satisfy requirements for National Organic Programs, pulverized, unrefined minerals (e.g. gypsum, calcite, and glauconite) can also be added to satisfy a plant's nutritional needs.
Additives
Compounds can be added in both organic and conventional hydroponic systems to improve nutrition acquisition and uptake by the plant. Chelating agents and humic acid have been shown to increase nutrient uptake. Additionally, plant growth promoting rhizobacteria (PGPR), which are regularly utilized in field and greenhouse agriculture, have been shown to benefit hydroponic plant growth development and nutrient acquisition. Some PGPR are known to increase nitrogen fixation. While nitrogen is generally abundant in hydroponic systems with properly maintained fertilizer regimens, Azospirillum and Azotobacter genera can help maintain mobilized forms of nitrogen in systems with higher microbial growth in the rhizosphere. Traditional fertilizer methods often lead to high accumulated concentrations of nitrate within plant tissue at harvest. Rhodopseudomonas palustris has been shown to increase nitrogen use efficiency, increase yield, and decrease nitrate concentration by 88% at harvest compared to traditional hydroponic fertilizer methods in leafy greens. Many Bacillus spp., Pseudomonas spp. and Streptomyces spp. convert forms of phosphorus in the soil that are unavailable to the plant into soluble anions by decreasing soil pH, releasing phosphorus bound in chelated form that is available in a wider pH range, and mineralizing organic phosphorus.
Some studies have found that Bacillus inoculants allow hydroponic leaf lettuce to overcome high salt stress that would otherwise reduce growth. This can be especially beneficial in regions with high electrical conductivity or salt content in their water source. This could potentially avoid costly reverse osmosis filtration systems while maintaining high crop yield.
Tools
Common equipment
Managing nutrient concentrations, oxygen saturation, and pH values within acceptable ranges is essential for successful hydroponic horticulture. Common tools used to manage hydroponic solutions include:
Electrical conductivity meters, a tool which estimates nutrient ppm by measuring how well a solution transmits an electric current (see the conversion sketch after this list).
pH meter, a tool that uses an electric current to determine the concentration of hydrogen ions in solution.
Oxygen electrode, an electrochemical sensor for determining the oxygen concentration in solution.
Litmus paper, disposable pH indicator strips that determine hydrogen ion concentrations by color changing chemical reaction.
Graduated cylinders or measuring spoons to measure out premixed, commercial hydroponic solutions.
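Conductivity readings are usually turned into a rough total-ppm estimate with a fixed scale factor set on the meter. The 500, 640, and 700 scales used below are common conventions but vary by manufacturer, so this sketch is illustrative rather than a calibration procedure.

```python
def ec_to_ppm(ec_ms_per_cm: float, scale: int = 500) -> float:
    """Estimate total dissolved solids (ppm) from electrical conductivity.

    ec_ms_per_cm: conductivity in millisiemens per centimetre.
    scale: the meter's conversion factor (commonly 500, 640, or 700).
    """
    return ec_ms_per_cm * scale

# A reading of 2.4 mS/cm is roughly 1,200 ppm on the 500 scale,
# inside the 1,000-2,500 ppm band mentioned earlier in this article.
print(ec_to_ppm(2.4))         # 1200.0
print(ec_to_ppm(2.4, 700))    # 1680.0
```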
Equipment
Chemical equipment can also be used to perform accurate chemical analyses of nutrient solutions. Examples include:
Balances for accurately measuring materials.
Laboratory glassware, such as burettes and pipettes, for performing titrations.
Colorimeters for solution tests which apply the Beer–Lambert law.
Spectrophotometer to measure the concentrations of the key parameter nitrate and other nutrients, such as phosphate, sulfate or iron.
Containers for growing and storing the plants.
Using chemical equipment for hydroponic solutions can be beneficial to growers of any background because nutrient solutions are often reusable. Because nutrient solutions are virtually never completely depleted, and should never be due to the unacceptably low osmotic pressure that would result, re-fortification of old solutions with new nutrients can save growers money and can control point source pollution, a common source for the eutrophication of nearby lakes and streams.
Software
Although pre-mixed concentrated nutrient solutions are generally purchased from commercial nutrient manufacturers by hydroponic hobbyists and small commercial growers, several tools exist to help anyone prepare their own solutions without extensive knowledge about chemistry. The free and open source tools HydroBuddy and HydroCal have been created by professional chemists to help any hydroponics grower prepare their own nutrient solutions. The first program is available for Windows, Mac and Linux while the second one can be used through a simple JavaScript interface. Both programs allow for basic nutrient solution preparation although HydroBuddy provides added functionality to use and save custom substances, save formulations and predict electrical conductivity values.
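The core calculation such tools automate is converting a mass of fertilizer salt dissolved in a known reservoir volume into elemental concentrations. The sketch below is not taken from HydroBuddy or HydroCal; the mass fractions are approximate textbook values for calcium nitrate tetrahydrate and are included only to show the arithmetic.

```python
# Approximate elemental mass fractions for calcium nitrate tetrahydrate,
# Ca(NO3)2·4H2O (molar mass ~236 g/mol): about 17% Ca and 11.9% N by weight.
MASS_FRACTIONS = {"Ca": 0.170, "N": 0.119}

def salt_to_ppm(salt_grams: float, reservoir_liters: float) -> dict[str, float]:
    """Elemental concentrations (mg/L, i.e. ppm) from dissolving one salt."""
    return {
        element: round(salt_grams * 1000 * fraction / reservoir_liters, 1)
        for element, fraction in MASS_FRACTIONS.items()
    }

# Dissolving 100 g in a 100 L reservoir gives roughly 170 ppm Ca and 119 ppm N.
print(salt_to_ppm(100, 100))
```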
Mixing solutions
Often mixing hydroponic solutions using individual salts is impractical for hobbyists or small-scale commercial growers because commercial products are available at reasonable prices. However, even when buying commercial products, multi-component fertilizers are popular. Often these products are bought as three part formulas which emphasize certain nutritional roles. For example, solutions for vegetative growth (i.e. high in nitrogen), flowering (i.e. high in potassium and phosphorus), and micronutrient solutions (i.e. with trace minerals) are popular. The timing and application of these multi-part fertilizers should coincide with a plant's growth stage. For example, at the end of an annual plant's life cycle, a plant should be restricted from high nitrogen fertilizers. In most plants, nitrogen restriction inhibits vegetative growth and helps induce flowering.
Additional improvements
Growrooms
With pest problems reduced and nutrients constantly fed to the roots, productivity in hydroponics is high; however, growers can further increase yield by manipulating a plant's environment by constructing sophisticated growrooms.
CO2 enrichment
To increase yield further, some sealed greenhouses inject CO2 into their environment to help improve growth and plant fertility.
See also
Aeroponics
Anthroponics
Aquaponics
Digeponics
Fogponics
Folkewall
Grow box
Growroom
Nutrient film technique
Organoponics
Passive hydroponics
Plant factory
Plant nutrition
Plant pathology
Root rot
Vertical farming
Xeriscaping
Hydroculture
Aeroponics
Cloze test
A cloze test (also cloze deletion test or occlusion test) is an exercise, test, or assessment in which a portion of text is masked and the participant is asked to fill in the masked portion of text. Cloze tests require the ability to understand the context and vocabulary in order to identify the correct language or part of speech that belongs in the deleted passages. This exercise is commonly administered for the assessment of native and second language learning and instruction.
The word cloze is derived from closure in Gestalt theory. The exercise was first described by Wilson L. Taylor in 1953.
Words may be deleted from the text in question either mechanically (every nth word) or selectively, depending on exactly what aspect it is intended to test for. The methodology is the subject of extensive academic literature; nonetheless, teachers commonly devise ad hoc tests.
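Mechanical (every nth word) deletion is simple to automate. The following is a minimal sketch of that idea, not a description of any established testing tool; the passage, interval, and blank marker are arbitrary choices.

```python
import re

def make_cloze(text: str, n: int = 6, blank: str = "________") -> tuple[str, list[str]]:
    """Replace every nth word with a blank and return the text plus the answer key."""
    words = re.findall(r"\S+", text)
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = blank
    return " ".join(words), answers

passage = ("Today, I went to the store and bought some milk and eggs. "
           "I knew it was going to rain, but I forgot to take my umbrella.")
cloze_text, answer_key = make_cloze(passage)
print(cloze_text)
print(answer_key)
```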
Examples
A language teacher may give the following passage to students:
Today, I went to the ________ and bought some milk and eggs. I knew it was going to rain, but I forgot to take my ________, and ended up getting wet on the way.
Students would then be required to fill in the blanks with words that would best complete the passage. The context in language and content terms is essential in most, if not all, cloze tests. The first blank is preceded by "the"; therefore, a noun, an adjective or an adverb must follow. However, a conjunction follows the blank; the sentence would not be grammatically correct if anything other than a noun were in the blank. The words "milk and eggs" are important for deciding which noun to put in the blank; "supermarket" is a possible answer; depending on the student, however, the first blank could be store, supermarket, shop, shops, market, or grocer while umbrella, brolly or raincoat could fit the second. A possible completed passage would be:
Today, I went to the supermarket and bought some milk and eggs. I knew it was going to rain, but I forgot to take my umbrella, and ended up getting wet on the way.
Besides use for testing linguistic fluency, a cloze test may also be used for testing factual knowledge, for example: "________ is the anaerobic catabolism of glucose." Possible answers would then include lactic acid fermentation, anaerobic glycolysis, and anaerobic respiration.
Assessment
The definition of success in a given cloze test varies, depending on the broader goals behind the exercise. Assessment may depend on whether the exercise is objective (i.e. students are given a list of words to use in a cloze) or subjective (i.e. students are to fill in a cloze with words that would make a given sentence grammatically correct).
Given the above passage, students' answers may then vary depending on their vocabulary skills and their personal opinions. However, the placement of the blank at the end of the sentence restricts the possible words that may complete the sentence; following an adverb and finishing the sentence, the word is most likely an adjective. Romantic, chivalrous or gallant may, for example, occupy the blank, as well as foolish or cheesy. Using those answers, a teacher may ask students to reflect on the opinions drawn from the given cloze.
Recent research using eye-tracking has posited that cloze/gapfill items where a selection of words are given as options may be testing different kinds of reading skills depending on the language abilities of the participants taking the test. Lower ability test takers are suggested to be more likely to be concentrating on the information contained in the words immediately surrounding the gap, while higher ability test takers are thought to be able to use a wider context window, which is also true for more capable large language models, such as ChatGPT, in contrast to less able older models.
A number of the methodological problems pointed out by researchers regarding the open-ended type cloze item (readers must supply a correct word from long-term memory, how to score acceptable responses that are not the exact replacement, etc.) can be solved by the use of carefully designed multiple-choice cloze items. See sample test and practice activity from a pilot study in a rural Latin American community. Mostow and associates also showed how this approach is both practical and informative.
Implementation
In addition to its use in testing, cloze deletion can be used in learning, particularly language learning, but also learning facts. This may be done manually – for example, by covering sections of a text with paper, or highlighting sections of text with a highlighter, then covering the line with a colored ruler in the complementary color (say, a red ruler for a green highlighter) so the highlighted text disappears; this is popular in Japan, for instance. Cloze deletion can also be used as part of spaced repetition software. For example, the SuperMemo and Anki applications feature semi-automated creation of cloze tests.
Programming software to accept all synonyms of a word as valid correct answers to a cloze test is a challenge, as all potential synonyms must be considered. An important concept that applies during the automatic creation of cloze tests by software is word clozability. Word clozability is defined as: "How often do participants who know this word guess it correctly when it is clozed in a sentence that they haven't seen before?"
Words that have a large number of synonyms will have a low clozability score, as the likelihood that the given word will be guessed correctly is reduced. Words that are specific and have few synonyms will have a high clozability score.
Cloze deletion can also be applied to a graphic organizer, wherein a diagram, map, grid, or image is presented and contextual clues must be used to fill in some labels. In particular, when learning an image-heavy subject, such as anatomy, a user of Anki may employ an image occlusion to occlude parts of an image.
Comparison to other testing methodologies
Glover (1989) compared different forms of recall and their effectiveness after time had passed for forgetting to occur. Glover referred to cloze tests as cued recall, which was found to be less effective than free recall testing (a generic cue was given to the pupil, who was expected to recall all they knew), but more effective than recognition tests.
Natural language processing
The cloze test is often used as an evaluation task in natural language processing (NLP) to assess the performance of trained language models. The task has a few different variants, such as predicting the answer for the blank with or without the right options provided, or predicting the ending sentence of a story or passage. Since the introduction of the BERT encoder, the task has also been used in pre-training language models, in which case it is known as masked language modelling.
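As a small illustration, a pretrained masked language model can be queried for likely fillers of a blank. The snippet below uses the Hugging Face transformers fill-mask pipeline and the bert-base-uncased checkpoint, assuming that library and model are installed; it is an example of the general evaluation setup, not of any specific benchmark.

```python
from transformers import pipeline

# BERT-style models are pre-trained on exactly this cloze-like objective:
# predict the token hidden behind the [MASK] placeholder.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("I forgot to take my [MASK], so I got wet in the rain."):
    print(prediction["token_str"], round(prediction["score"], 3))
```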
See also
Communicative competence
English language learning and teaching
Form letter
Mad Libs
Sentence completion tests
Language assessment
Positive deviance
Positive deviance (PD) is an approach to behavioral and social change. It is based on the idea that, within a community, some individuals engage in unusual behaviors allowing them to solve problems better than others who face similar challenges, despite not having additional resources or knowledge. These individuals are referred to as positive deviants.
The concept first appeared in nutrition research in the 1970s. Researchers observed that, despite the poverty in a community, some families had well-nourished children. Some suggested using information gathered from these outliers to plan nutrition programs.
Principles
Positive deviance is a strength-based approach applicable to problems requiring behavior and social change. It is based on the following principles:
Communities already have the solutions; they are the best experts in solving their problems.
Communities self-organize and are equipped with the human resources and social assets to solve agreed-upon problems.
Collective intelligence. Intelligence and know-how are not concentrated in the leadership of a community alone or in external experts but are distributed throughout the community. Thus, the PD process aims to elicit collective intelligence to apply it to specific problems requiring behavior or social change.
Sustainability is the cornerstone of the approach. The PD approach enables the community or organization to seek and discover sustainable solutions to a given problem because the demonstrably successful uncommon behaviors are already practiced in that community within the constraints and challenges of the current situation.
It is easier to change behavior by practicing it rather than knowing about it. "It is easier to act your way into a new way of thinking than think your way into a new way of acting".
Original application
The PD approach was first operationalized and applied in programming in the field by Jerry and Monique Sternin through their work with Save the Children in Vietnam in the 1990s (Tuhus-Dubrow, Sternin, Sternin and Pascale).
At the start of the pilot, 64% of children weighed in the pilot villages were malnourished. Through a PD inquiry, the villagers found poor peers in the community that, through their uncommon but successful strategies, had well-nourished children. These families collected foods typically considered inappropriate for children (e.g., sweet potato greens, shrimp, and crabs), washed their children's hands before meals, and actively fed them three to four times a day instead of the typical two meals a day provided to children.
Unknowingly, PDs had incorporated foods already found in their community that provided essential nutrients: protein, iron, and calcium. A nutrition program based on these insights was created. Instead of simply telling participants what to do differently, they designed the program to help them act their way into a new way of thinking. Parents were required to bring one of the newly identified foods to attend a feeding session. They brought their children and, while sharing nutritious meals, learned to cook the new foods.
At the end of the two-year pilot, malnutrition fell by 85%. Results were sustained and transferred to the participants' younger siblings.
This approach to programming was different in important ways.
It is always appropriate, as it operates within the assets of a community, and it, therefore, caters to its specific cultural context, e.g., village, business, schools, ministry, department, or hospital. Additionally, by seeing that certain members of their community are already engaging in an uncommon behavior, others are more likely to adopt it themselves, as this serves as "social proof" that the behavior is acceptable for everyone within the community. Furthermore, the solutions stem from the community, avoiding thus the "immune response" that can occur when outside experts enter a community with best practices that are often unsuccessful in promoting sustained change. (Sternin)
Since it was first applied in Vietnam, PD has been used to inform nutrition programs in over 40 countries by USAID, World Vision, Mercy Corps, Save the Children, CARE, Plan International, Indonesian Ministry of Health, Peace Corps, Food for the Hungry, among others.
Steps
A positive deviance approach may follow a series of steps.
An invitation to change
A PD inquiry begins with an invitation from a community that wishes to address a significant issue they face. This is crucial, as it is the community that acquires ownership of the process.
Defining the problem
The definition of the problem is carried out by and for the community. This will often lead to a problem definition that differs from the outside "expert" opinion of the situation.
The community establishes a quantitative baseline, allowing it to reflect on the problem given the evidence at hand and measure the progress toward its goals.
This is also the beginning of the process of identifying stakeholders and decision-makers involved. Additional stakeholders and decision-makers will be pulled in throughout the process as they are identified.
Determining the presence of PD individuals or groups
Using data and observation, the community can identify the positive deviants in their midst.
Discovering uncommon practices or behaviors
The Positive Deviance Inquiry aims to discover uncommon practices or behaviors. The community, having identified positive deviants, sets out to find the behaviors, attitudes, or beliefs that allow the PD to be successful. The focus is on the successful strategies of the PD, not on making a hero of the person using the strategy. This self-discovery of people/groups just like them who have found successful solutions provides "social proof" that this problem can be overcome now, without outside resources.
Program design
After identifying successful strategies, the community decides which strategies they would like to adopt, and they design activities to help others access and practice these uncommon and other beneficial strategies. Program design is not focused on spreading "best practices" but on helping community members "act their way into a new way of thinking" through hands-on activities.
Monitoring and evaluation
PD-informed projects are monitored and evaluated through a participatory process. As the community decides on and performs the monitoring, the tools they create will be appropriate to the setting. Even illiterate community members can participate through pictorial monitoring forms or other appropriate tools.
Evaluation allows the community to track their progress toward their goals and reinforces the changes they are making in behaviors, attitudes, and beliefs.
Scaling up
The scaling up of a PD project may happen through many mechanisms: the "ripple effect" of other communities observing the success and engaging in a PD project of their own, through the coordination of NGOs, or organizational development consultants. Irrespective of the mechanism employed, the community discovery process of PDs in their midst remains vital to the acceptance of new behaviors, attitudes, and knowledge.
Applications
Preventing hospital-acquired infections
The PD approach has been applied in hospitals in the United States, Brazil, Canada, Mexico, Colombia, and England to stop the spread of hospital-acquired infections such as Clostridioides difficile and Methicillin-resistant Staphylococcus aureus (MRSA). The Centers for Disease Control and Prevention (CDC) evaluated pilot programs in the U.S. and found units using the approach decreased their infections by 30-73%.
Additionally, it has been used in healthcare settings to increase the incidence of hand washing and to improve care for patients immediately after a heart attack.
Primary care (Bright Spotting)
Termed "Bright Spotting", instead of positive deviance, the primary care pilot initiative first took place in rural New Hampshire and is still ongoing. The outpatient clinic identified a complex patient population, from the clinic's perspective, studied the risk factors of that population, then identified measures that would signify that a patient has become healthy and sustained health. Once these measures were identified (using both data and the practices' knowledge of the patients), "Bright Spots" were identified as those that meet both high-risk criteria and achieved health. Finding positive deviant patients through predictive analytics has also be suggested as a possible tool in discovery. Once these patients were identified the care team performed qualitative research to discover their patterns of behavior. The results were then shown to the bright spots and their families who then designed a peer learning experience with the results in mind. The community meetings were then facilitated using both positive deviance facilitation techniques as well as applying the "Citizen Health Care Model", which is very similar to positive deviance approaches.
Public health
A PD project helped prisoners in a New South Wales prison stop smoking. Projects in Burkina Faso, Guatemala, Ivory Coast, and Rwanda addressed reproductive health in adolescents. PD maternal and newborn health projects in Myanmar, Pakistan, Egypt, and India have improved women's access to prenatal care, delivery preparation, and antenatal care for mothers and babies.
PD projects to prevent the spread of HIV/AIDS took place in 2002 with motorbike taxi drivers in Vietnam, and in 2004 with sex workers in Indonesia. A PD project to enhance psychological resilience amongst adolescents vulnerable to depression and anxiety was implemented in the Netherlands.
Child protection
A five-year PD project starting in 2003 to prevent girl trafficking in Indonesia, run with Save the Children and a local Indonesian NGO, helped at-risk girls find viable economic options so they could stay in their communities.
A PD project to stop Female Genital Mutilation/Cutting in Egypt began in 1998 with CEDPA (Center for Development and Population Activities), COST (Coptic Organization for Services and Training), Caritas in Minya, Community Development Agency (CDA), Monshaat Nasser in Beni Suef governorate, and the Center for Women's Legal Assistance (CEWLA). Efforts have already shown a reduction in the practice.
In Uganda, a project with the Oak Foundation and Save the Children helped girls who had been child soldiers with the Lord's Resistance Army in Sudan reintegrate into their communities.
In education
PD projects in New Jersey, California, Argentina, Ethiopia, and Burkina Faso have addressed dropout rates and keeping girls in school.
Private sector
Proponents of PD within management science argue that, in any population (even in such seemingly mundane groups as service personnel in fast food environments), the positive deviants have attitudes, cognitive processes, and behavioral patterns that lead to significantly improved performance in key metrics such as speed of service and profitability. Studies claim that the widespread adoption of positive-deviant approaches consistently leads to significant performance improvement.
PD has been significantly extended to the private sector by William Seidman and Michael McCauley. Their extensions include methodologies and technologies for:
Quickly identifying the positive deviants
Efficiently gathering and organizing the positive deviant knowledge
Motivating a willingness in others to adopt the positive deviant approaches
Sustaining the change by others by integrating it into their pre-existing emotional and cognitive functions
Scaling the positive deviant knowledge to large numbers of people simultaneously
Positive deviance was further extended to groups or organizations by Gary Hamel. Hamel looks to Positive Deviant companies to set the example for "management innovation."
See also
Creativity
Deviance (sociology)
Individuality
Invention
Nonconformity
Outliers (book)
Thinking outside the box
Rebellious Motivational State
References
BREEAM | BREEAM (Building Research Establishment Environmental Assessment Method), first published by the Building Research Establishment (BRE) in 1990, is touted as the world's longest established method of identifying the sustainability of buildings. Around 550,000 buildings have been 'BREEAM-certified'. Additionally, two million homes have registered for certification globally. BREEAM also has a tool which focuses on neighbourhood development.
Purpose
BREEAM is an assessment undertaken by independent licensed assessors using scientifically-based sustainability metrics and indices which cover a range of environmental issues. Its categories evaluate energy and water use, health and wellbeing, pollution, transport, materials, waste, ecology and management processes. Buildings are rated and certified on a scale of 'Pass', 'Good', 'Very Good', 'Excellent' and 'Outstanding'.
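How such a rating could be assembled from weighted category scores can be illustrated with a short sketch. The category weights and score thresholds below are assumptions chosen for illustration only, not figures from any published BREEAM technical manual; the actual weights and thresholds vary by scheme and version.

```python
# Minimal sketch of a BREEAM-style rating derived from weighted category
# scores. All weights and thresholds are illustrative assumptions.
CATEGORY_WEIGHTS = {          # assumed weights, chosen to sum to 1.0
    "management": 0.12, "energy": 0.19, "health_wellbeing": 0.15,
    "transport": 0.08, "water": 0.06, "materials": 0.125,
    "waste": 0.075, "land_use_ecology": 0.10, "pollution": 0.10,
}
RATING_BANDS = [              # assumed minimum overall scores (%)
    (85, "Outstanding"), (70, "Excellent"), (55, "Very Good"),
    (45, "Good"), (30, "Pass"),
]

def breeam_style_rating(category_scores: dict) -> str:
    """category_scores maps each category to the % of its credits achieved."""
    overall = sum(CATEGORY_WEIGHTS[c] * category_scores[c] for c in CATEGORY_WEIGHTS)
    for threshold, label in RATING_BANDS:
        if overall >= threshold:
            return label
    return "Unclassified"

scores = {c: 60 for c in CATEGORY_WEIGHTS}   # 60% of credits in every category
print(breeam_style_rating(scores))            # "Very Good"
```

The design point the sketch captures is that an assessor scores each issue category separately, and the weighting determines how much each category contributes to the single certified rating.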
It was created to educate homeowners and designers about the benefits of taking its approach, which has a long-term focus, and to enable these parties to make further decisions along the same lines. A major focus of the method is on sustainability: it aims to reduce the negative effects of construction and development on the environment.
History
Work on creating BREEAM began at the Building Research Establishment (based in Watford, England) in 1988. The first version for assessing new office buildings was launched in 1990. This was followed by versions for other buildings including superstores, industrial units and existing offices.
In 1998, there was a major revamp of the BREEAM Offices standard, and the scheme's layout, with features such as weighting for different sustainability issues, was established. The development of BREEAM then accelerated with annual updates and variations for other building types such as retail premises being introduced.
A version of BREEAM for new homes called EcoHomes was launched in 2000. This scheme was later used as the basis of the Code for Sustainable Homes, which was developed by BRE for the UK Government in 2006/7 and replaced EcoHomes in England and Wales. In 2014, the Government in England signalled the winding down of the Code for Sustainable Homes. Since then BRE has developed the Home Quality Mark, which is part of the BREEAM family of schemes.
An extensive update of all BREEAM schemes in 2008 resulted in the introduction of mandatory post-construction reviews, minimum standards and innovation credits. International versions of BREEAM were also launched that year.
Another major update in 2011 resulted in the launch of BREEAM New Construction, which is now used to assess and certify all new UK buildings. This revision included the reclassification and consolidation of issues and criteria to further streamline the BREEAM process. In 2012, a scheme for domestic refurbishment was introduced in the UK, followed by a non-domestic version in 2014 that was expanded to an international scope the following year.
In 2015, the Building Research Establishment announced the acquisition of CEEQUAL following a recommendation from their board, with the aim of creating a single sustainability rating scheme for civil engineering and infrastructure projects.
The 2018 update of BREEAM UK New Construction was launched in March 2018 at Ecobuild.
The BREEAM UK New Construction V6 was released on 24 August 2022, following the updates to building regulations in England that came into force on 15 June 2022; V6.1, which incorporates changes to the building regulations for energy performance in Scotland, Wales, and Northern Ireland, followed on 14 June 2023.
Scope
BREEAM has expanded from its original focus on individual new buildings at the construction stage to encompass the whole life cycle of buildings from planning to in-use and refurbishment. Its regular revisions and updates are driven by the ongoing need to improve sustainability, respond to feedback from industry and support the UK's sustainability strategies and commitments.
Highly flexible, the BREEAM standard can be applied to virtually any building and location, with versions for new buildings, existing buildings, refurbishment projects and large developments:
BREEAM New Construction is the BREEAM standard against which the sustainability of new, non-residential buildings in the UK is assessed. Developers and their project teams use the scheme at key stages in the design and procurement process to measure, evaluate, improve and reflect the performance of their buildings.
BREEAM International New Construction is the BREEAM standard for assessing the sustainability of new residential and non-residential buildings in countries around the world, except for the UK and other countries with a national BREEAM scheme (see below). This scheme makes use of assessment criteria that take account of the circumstances, priorities, codes and standards of the country or region in which the development is located.
BREEAM In-Use is a scheme to help building managers reduce the running costs and improve the environmental performance of existing buildings. It has two parts: building asset and building management. Both parts are relevant to all non-domestic, commercial, industrial, retail and institutional buildings. BREEAM In-Use is widely used by members of the International Sustainability Alliance (ISA), which provides a platform for certification against the scheme. The newest version, v6, available from 2020, also includes residential programs.
BREEAM Refurbishment provides a design and assessment method for sustainable housing refurbishment projects, helping to cost-effectively improve the sustainability and environmental performance of existing dwellings in a robust way. A corresponding scheme for non-domestic refurbishment projects was subsequently piloted, independently peer reviewed, and launched in 2014.
BREEAM Communities focuses on the masterplanning of whole communities. It is aimed at helping construction industry professionals to design places that people want to live and work in, are good for the environment and are economically successful.
BREEAM includes several general sustainability categories for the assessment:
Management
Energy
Health and wellbeing
Transport
Water
Materials
Waste
Land use and ecology
Pollution
Home Quality Mark was launched in 2015 as part of the BREEAM family of schemes. It rates new homes on their overall quality and sustainability, then provides further indicators of each home's impact on its occupants in terms of 'Running costs', 'Health and wellbeing' and 'Environmental footprint'.
National operators
BREEAM is used in more than 70 countries, with several in Europe having gone a stage further to develop country-specific BREEAM schemes operated by National Scheme Operators (NSOs). There are currently NSOs affiliated to BREEAM in:
Germany: the German Institute for Sustainable Real Estate (DIFNI) operates BREEAM DE.
Netherlands: the Dutch Green Building Council operates BREEAM NL
Norway: the Norwegian Green Building Council operates BREEAM NOR
Spain: the Instituto Tecnológico de Galicia operates BREEAM ES
Sweden: the Swedish Green Building Council operates BREEAM SE
Schemes developed by NSOs can take any format as long as they comply with a set of overarching requirements laid down in the Code for a Sustainable Built Environment. They can be produced from scratch, by adapting current BREEAM schemes to the local context, or by developing existing local schemes.
The cost and value of sustainability
A growing body of research evidence is challenging the perception that sustainable buildings are significantly more costly to design and build than those that simply adhere to regulatory requirements. Research by the Sweett Group into projects using BREEAM, for example, demonstrates that sustainable options often add little or no capital cost to a development project. Where such measures do incur additional costs, these can frequently be paid back through lower running expenses, ultimately leading to savings over the life of the building.
Research studies have also highlighted the enhanced value and quality of sustainable buildings. Achieving the standards required by BREEAM requires careful planning, design, specification and detailing, and a good working relationship between the client and project team—the very qualities that can produce better buildings and better conditions for building users. A survey commissioned by Schneider Electric and undertaken by BSRIA examined the experiences of a wide range of companies that had used BREEAM. The findings included, for example, that 88% think it is a good thing, 96% would use the scheme again and 88% would recommend BREEAM to others.
The greater efficiency and quality associated with sustainability are also helping to make such buildings more commercially successful. There is growing evidence, for example, that BREEAM-rated buildings provide increased rates of return for investors, and increased rental rates and sales premiums for developers and owners. A Maastricht University study, published by RICS Research, examined the effect of BREEAM certification on office buildings in London from 2000 to 2009. It found, for example, that these buildings achieved a 21% premium on transaction prices and an 18% premium on rents.
See also
LEED (Leadership in Energy and Environmental Design)
Sustainable refurbishment
References
External links
BREEAM website
Website of the Building Research Establishment
AsapScience | AsapScience, stylized as AsapSCIENCE, is a YouTube channel created by Canadian YouTubers Mitchell Moffit and Gregory Brown. The channel produces a range of videos that touch on various concepts related to science and technology.
AsapScience is one of the largest educational channels on YouTube. The channel was created in May 2012 and had acquired more than 7 million subscribers by March 2018; this following had increased to 9 million by 2020. In addition to videos explaining scientific news and research, the channel produces songs, several of which have achieved viral fame and also generated controversy.
Moffit and Brown have been praised for prompting meaningful dialog about LGBTQ+ issues.
Team
Mitchell "Mitch" Moffit, born , creator and host
Gregory "Greg" Brown, born , creator and host
Moffit and Brown are an openly gay couple who met while studying biology at the University of Guelph. They made their sexualities and relationship public online in 2014, two years after starting their channel, in response to derogatory comments and in order to be visible role models for young gay people interested in science.
Kendra Y. Hill, manager
Max Simmons, illustrator
Luka Sarlija, editor
Channel
AsapScience videos are about science, with many episodes, such as How Much Sleep Do You Actually Need?, discussing functions of the human body. They sometimes make songs explaining science such as Science Love Song and Periodic Table Song. Each video's scientific concepts are conveyed using coloured drawings on a whiteboard and voice-over narration. As revealed in a behind-the-scenes video, Mitchell voices and composes the background music for the videos, while Greg is the primary illustrator.
The most viewed video of the channel as of September 2024 is Do You Hear "Yanny" or "Laurel"?, which has 66 million views. Their videos have been featured in websites such as The Huffington Post and Gizmodo. In March 2015, Moffit and Brown released their first book, AsapSCIENCE: Answers to the World's Weirdest Questions, Most Persistent Rumors, and Unexplained Phenomena.
Collaborations
AsapScience has collaborated with Vsauce3 on four videos: The Scientific Secret of Strength and Muscle Growth, What if Superman Punched You?, Can We Genetically Improve Intelligence? and Can You Genetically Enhance Yourself?. The video Could We Stop An Asteroid? features Bill Nye, who discusses different ways humanity could stop an asteroid if one were on a collision course with Earth.
On February 2, 2014, AsapScience announced that it would collaborate with CBC News to produce one sports-related video daily for 19 days, starting from 6 February. AsapScience also appeared in several videos with IISuperwomanII. They had a one-time collaboration with Kurzgesagt – In a Nutshell on the video What Is The Most Dangerous Drug In The World?, which aired on November 16, 2017.
In December 2017, AsapScience appeared on Rhett and Link's YouTube channel Good Mythical Morning. In 2020, alongside Psych IRL and others, AsapScience featured in a YouTube original series Sleeping With Friends, a competition in which participants aim to get the best night's sleep.
Religion
On March 16, 2017, AsapScience released a video titled "Can Math Prove God's Existence?", addressing whether the existence of God could be proven through the use of math. The video sparked considerable controversy and received the channel's highest dislike percentage, at more than 45%.
Statistics
As of 14 May 2023, AsapScience and Greg and Mitch have over 11 million subscribers combined.
Other work
In February 2016, Moffit was announced as one of the 16 HouseGuests on Big Brother Canada 4. He placed 11th and was evicted on day 42 in a 5-3 eviction vote. He was the first member of the Final Jury that decided the winner of the game.
Honours
On December 7, 2023, Mary Simon, the Governor General of Canada, invested both Brown and Moffit with the Meritorious Service Medal (Civil Division) for using "explanations, solid facts and humour" to "educate the Internet generation about science topics".
See also
Vsauce
Veritasium
MinutePhysics
Numberphile
Kurzgesagt
Vi Hart
Wendover Productions
Mark Rober
References
Development theory | Development theory is a collection of theories about how desirable change in society is best achieved. Such theories draw on a variety of social science disciplines and approaches. In this article, multiple theories are discussed, as are recent developments with regard to these theories. Depending on which theory that is being looked at, there are different explanations to the process of development and their inequalities.
Modernization theory
Modernization theory is used to analyze the processes in which modernization in societies takes place. The theory looks at which aspects of countries are beneficial and which constitute obstacles to economic development. The idea is that development assistance targeted at those particular aspects can lead to modernization of 'traditional' or 'backward' societies. Scientists from various research disciplines have contributed to modernization theory.
Sociological and anthropological modernization theory
The earliest principles of modernization theory can be derived from the idea of progress, which stated that people can develop and change their society themselves. Marquis de Condorcet was involved in the origins of this theory. This theory also states that technological advancements and economic changes can lead to changes in moral and cultural values. The French sociologist Émile Durkheim stressed the interdependence of institutions in a society and the way in which they interact with cultural and social unity. His work The Division of Labor in Society was very influential. It described how social order is maintained in society and ways in which primitive societies can make the transition to more advanced societies.
Other scientists who have contributed to the development of modernization theory are: David Apter, who did research on the political system and history of democracy; Seymour Martin Lipset, who argued that economic development leads to social changes which tend to lead to democracy; David McClelland, who approached modernization from the psychological side with his motivations theory; and Talcott Parsons who used his pattern variables to compare backwardness to modernity.
Linear stages of growth model
The linear stages of growth model is an economic model which is heavily inspired by the Marshall Plan which was used to revitalize Europe's economy after World War II. It assumes that economic growth can only be achieved by industrialization. Growth can be restricted by local institutions and social attitudes, especially if these aspects influence the savings rate and investments. The constraints impeding economic growth are thus considered by this model to be internal to society.
According to the linear stages of growth model, a correctly designed massive injection of capital coupled with intervention by the public sector would ultimately lead to industrialization and economic development of a developing nation.
Rostow's stages of growth model is the most well-known example of the linear stages of growth model. Walt W. Rostow identified five stages through which developing countries had to pass to reach advanced economy status: (1) Traditional society, (2) Preconditions for take-off, (3) Take-off, (4) Drive to maturity, (5) Age of high mass consumption. He argued that economic development could be led by certain strong sectors; this is in contrast to, for instance, Marxism, which states that sectors should develop equally. According to Rostow's model, a country needed to follow some rules of development to reach the take-off: (1) The investment rate of a country needs to be increased to at least 10% of its GDP, (2) One or two manufacturing sectors with a high rate of growth need to be established, (3) An institutional, political and social framework has to exist or be created in order to promote the expansion of those sectors.
The Rostow model has serious flaws, of which the most serious are: (1) The model assumes that development can be achieved through a basic sequence of stages which are the same for all countries, a doubtful assumption; (2) The model measures development solely by means of the increase of GDP per capita; (3) The model focuses on characteristics of development, but does not identify the causal factors which lead development to occur. As such, it neglects the social structures that have to be present to foster development.
Economic modernization theories such as Rostow's stages model have been heavily inspired by the Harrod-Domar model which explains in a mathematical way the growth rate of a country in terms of the savings rate and the productivity of capital. Heavy state involvement has often been considered necessary for successful development in economic modernization theory; Paul Rosenstein-Rodan, Ragnar Nurkse and Kurt Mandelbaum argued that a big push model in infrastructure investment and planning was necessary for the stimulation of industrialization, and that the private sector would not be able to provide the resources for this on its own.
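In its simplest textbook form (the notation below is the standard shorthand, not taken from this article), the Harrod-Domar relationship can be written as

\[ g = \frac{s}{v} \]

where \(g\) is the growth rate of output, \(s\) the savings rate, and \(v\) the capital-output ratio, so that \(1/v\) measures the productivity of capital: growth rises with higher savings or with more productive capital. Read against this relationship, Rostow's 10% investment-rate rule would, with an assumed capital-output ratio of around 3, imply growth of a little over 3% a year.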
Another influential theory of modernization is the dual-sector model by Arthur Lewis. In this model Lewis explained how the traditional stagnant rural sector is gradually replaced by a growing modern and dynamic manufacturing and service economy.
Because of the focus on the need for investments in capital, the Linear Stages of Growth Models are sometimes referred to as suffering from ‘capital fundamentalism’.
Critics of modernization theory
Modernization theory observes traditions and pre-existing institutions of so-called "primitive" societies as obstacles to modern economic growth. Modernization which is forced from outside upon a society might induce violent and radical change, but according to modernization theorists it is generally worth this side effect. Critics point to traditional societies as being destroyed and slipping away to a modern form of poverty without ever gaining the promised advantages of modernization.
Structuralism
Structuralism is a development theory which focuses on structural aspects which impede the economic growth of developing countries. The unit of analysis is the transformation of a country's economy from, mainly, a subsistence agriculture to a modern, urbanized manufacturing and service economy. Policy prescriptions resulting from structuralist thinking include major government intervention in the economy to fuel the industrial sector, known as import substitution industrialization (ISI). This structural transformation of the developing country is pursued in order to create an economy which in the end enjoys self-sustaining growth. This can only be reached by ending the reliance of the underdeveloped country on exports of primary goods (agricultural and mining products), and pursuing inward-oriented development by shielding the domestic economy from that of the developed economies. Trade with advanced economies is minimized through the erection of all kinds of trade barriers and an overvaluation of the domestic exchange rate; in this way the production of domestic substitutes of formerly imported industrial products is encouraged. The logic of the strategy rests on the infant industry argument, which states that young industries initially do not have the economies of scale and experience to be able to compete with foreign competitors and thus need to be protected until they are able to compete in the free market. The Prebisch–Singer hypothesis states that over time the terms of trade for commodities deteriorate compared to those for manufactured goods, because the income elasticity of demand of manufactured goods is greater than that of primary products. If true, this would also support the ISI strategy.
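The Prebisch-Singer claim can be stated compactly; the notation here is a standard shorthand assumed for illustration rather than drawn from Prebisch's or Singer's own texts:

\[ \mathit{TOT} = \frac{P_X}{P_M}, \qquad \frac{d\,\mathit{TOT}}{dt} < 0 \ \text{in the long run for primary-commodity exporters,} \]

where \(P_X\) and \(P_M\) are the price indices of a country's exports and imports. Because the income elasticity of demand for manufactures exceeds that for primary products, rising world income raises demand for, and the relative price of, manufactured imports faster than that of primary exports, pushing the ratio down over time.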
Structuralists argue that the only way Third World countries can develop is through action by the state. Third world countries have to push industrialization and have to reduce their dependency on trade with the First World, and trade among themselves.
The roots of structuralism lie in South America, and particularly Chile. In 1950, Raul Prebisch went to Chile to become the first director of the Economic Commission for Latin America. In Chile, he cooperated with Celso Furtado, Aníbal Pinto, Osvaldo Sunkel, and Dudley Seers, who all became influential structuralists.
Dependency theory
Dependency theory is essentially a follow-up to structuralist thinking, and shares many of its core ideas. Whereas structuralists did not consider that development would be possible at all unless a strategy of delinking and rigorous ISI was pursued, dependency thinking could allow development with external links with the developed parts of the globe. However, this kind of development is considered to be "dependent development", i.e., it does not have an internal domestic dynamic in the developing country and thus remains highly vulnerable to the economic vagaries of the world market. Dependency thinking starts from the notion that resources flow from the ‘periphery’ of poor and underdeveloped states to a ‘core’ of wealthy countries, which leads to accumulation of wealth in the rich states at the expense of the poor states. Contrary to modernization theory, dependency theory states that not all societies progress through similar stages of development. Periphery states have unique features, structures and institutions of their own and are considered weaker with regards to the world market economy, while the developed nations have never been in this colonized position in the past. Dependency theorists argue that underdeveloped countries remain economically vulnerable unless they reduce their connections to the world market.
Dependency theory states that poor nations provide natural resources and cheap labor for developed nations, without which the developed nations could not have the standard of living which they enjoy. When underdeveloped countries try to remove the Core's influence, the developed countries hinder their attempts to keep control. This means that poverty of developing nations is not the result of the disintegration of these countries in the world system, but because of the way in which they are integrated into this system.
In addition to its structuralist roots, dependency theory has much overlap with Neo-Marxism and World Systems Theory, which is also reflected in the work of Immanuel Wallerstein, a famous dependency theorist. Wallerstein rejects the notion of a Third World, claiming that there is only one world which is connected by economic relations (World Systems Theory). He argues that this system inherently leads to a division of the world in core, semi-periphery and periphery. One of the results of expansion of the world-system is the commodification of things, like natural resources, labor and human relationships.
Basic needs
The basic needs model was introduced by the International Labour Organization in 1976, mainly in reaction to prevalent modernization- and structuralism-inspired development approaches, which were not achieving satisfactory results in terms of poverty alleviation and combating inequality in developing countries. It tried to define an absolute minimum of resources necessary for long-term physical well-being. The poverty line which follows from this, is the amount of income needed to satisfy those basic needs. The approach has been applied in the sphere of development assistance, to determine what a society needs for subsistence, and for poor population groups to rise above the poverty line. Basic needs theory does not focus on investing in economically productive activities. Basic needs can be used as an indicator of the absolute minimum an individual needs to survive.
Proponents of basic needs have argued that elimination of absolute poverty is a good way to make people active in society so that they can provide labor more easily and act as consumers and savers. There have also been many critics of the basic needs approach, who argue that it lacks theoretical rigour and practical precision, conflicts with growth-promotion policies, and runs the risk of leaving developing countries in permanent turmoil.
Neoclassical theory
Neoclassical development theory has its origins in its predecessor: classical economics. Classical economics was developed in the 18th and 19th centuries and dealt with the value of products and the production factors on which it depends. Early contributors to this theory are Adam Smith and David Ricardo. Classical economists argued – as do the neoclassical ones – in favor of the free market, and against government intervention in those markets. The 'invisible hand' of Adam Smith makes sure that free trade will ultimately benefit all of society. John Maynard Keynes was another highly influential economist, although his General Theory of Employment, Interest, and Money (1936) marked a departure from classical economics.
Neoclassical development theory became influential towards the end of the 1970s, spurred by the election of Margaret Thatcher in the UK and Ronald Reagan in the USA. Also, the World Bank shifted from its Basic Needs approach to a neoclassical approach in 1980. From the beginning of the 1980s, neoclassical development theory was rolled out ever more widely.
Structural adjustment
One of the implications of the neoclassical development theory for developing countries were the Structural Adjustment Programmes (SAPs) which the World Bank and the International Monetary Fund wanted them to adopt. Important aspects of those SAPs include:
Fiscal austerity (reduction in government spending)
Privatization (which should both raise money for governments and improve efficiency and financial performance of the firms involved)
Trade liberalization, currency devaluation and the abolition of marketing boards (to maximize the static comparative advantage the developing country has on the global market)
Retrenchment of the government and deregulation (in order to stimulate the free market)
These measures are more or less reflected by the themes identified by the Institute for International Economics as being necessary for the recovery of Latin America from the economic and financial crises of the 1980s. These themes are known as the Washington Consensus, a term coined in 1989 by the economist John Williamson.
Recent trends
Post-development theory
Postdevelopment theory is a school of thought which questions the idea of national economic development altogether. According to postdevelopment scholars, the goal of improving living standards leans on arbitrary claims as to the desirability and possibility of that goal. Postdevelopment theory arose in the 1980s and 1990s.
According to postdevelopment theorists, the idea of development is just a 'mental structure' (Wolfgang Sachs) which has resulted in a hierarchy of developed and underdeveloped nations, of which the underdeveloped nations desire to be like developed nations. Development thinking has been dominated by the West and is very ethnocentric, according to Sachs. The Western lifestyle may neither be a realistic nor a desirable goal for the world's population, postdevelopment theorists argue. Development is being seen as a loss of a country's own culture, people's perception of themselves and modes of life. According to Majid Rahnema, another leading postdevelopment scholar, things like notions of poverty are very culturally embedded and can differ a lot among cultures. The institutes which voice the concern over underdevelopment are very Western-oriented, and postdevelopment calls for a broader cultural involvement in development thinking.
Postdevelopment proposes a vision of society which removes itself from the ideas which currently dominate it. According to Arturo Escobar, postdevelopment is interested instead in local culture and knowledge, a critical view against established sciences and the promotion of local grassroots movements. Also, postdevelopment argues for structural change in order to reach solidarity, reciprocity, and a larger involvement of traditional knowledge.
Sustainable development
Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. (Brundtland Commission) There exist more definitions of sustainable development, but they all have to do with the carrying capacity of the earth and its natural systems and the challenges faced by humanity. Sustainable development can be broken up into environmental sustainability, economic sustainability and sociopolitical sustainability. The book Limits to Growth, commissioned by the Club of Rome, gave huge momentum to the thinking about sustainability. Global warming issues are also problems which are emphasized by the sustainable development movement. This led to the 1997 Kyoto Accord, with the plan to cap greenhouse-gas emissions.
Opponents of the implications of sustainable development often point to the environmental Kuznets curve. The idea behind this curve is that, as an economy grows, it shifts towards more capital- and knowledge-intensive production. This means that as an economy grows, its pollution output increases, but only until it reaches a particular threshold, after which production becomes less resource-intensive and more sustainable. On this view, a pro-growth, not an anti-growth, policy is needed to solve the environmental problem. The evidence for the environmental Kuznets curve is quite weak, however. Also, empirically speaking, people tend to consume more products when their income increases. Those products may have been produced in a more environmentally friendly way, but on the whole the higher consumption negates this effect. There are people like Julian Simon, however, who argue that future technological developments will resolve future problems.
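Empirical tests of the environmental Kuznets curve typically fit an inverted-U relationship between income and pollution. A common reduced-form specification (a sketch of the standard empirical setup, with notation assumed here rather than taken from any particular study) is

\[ E_{it} = \alpha_i + \beta_1 y_{it} + \beta_2 y_{it}^{2} + \varepsilon_{it}, \]

where \(E_{it}\) is a pollution measure and \(y_{it}\) per-capita income for country \(i\) at time \(t\). The inverted U requires \(\beta_1 > 0\) and \(\beta_2 < 0\), with emissions peaking at the turning point \(y^{*} = -\beta_1 / (2\beta_2)\); the weak evidence referred to above largely concerns how robust these estimated coefficients and turning points are across pollutants and samples.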
Human development theory
Human development theory is a theory which uses ideas from different origins, such as ecology, sustainable development, feminism and welfare economics. It wants to avoid normative politics and is focused on how social capital and instructional capital can be deployed to optimize the overall value of human capital in an economy.
Amartya Sen and Mahbub ul Haq are the most well-known human development theorists. The work of Sen is focused on capabilities: what people can do and be. It is these capabilities, rather than the income or goods that they receive (as in the Basic Needs approach), that determine their well-being. This core idea also underlies the construction of the Human Development Index, a human-focused measure of development pioneered by the UNDP in its Human Development Reports; this approach has become popular the world over, with indexes and reports published by individual countries, including the American Human Development Index and Report in the United States. The economic side of Sen's work can best be categorized under welfare economics, which evaluates the effects of economic policies on the well-being of peoples. Sen wrote the influential book Development as Freedom, which added an important ethical side to development economics.
See also
Development (disambiguation)
Ecological modernization theory
Economic development
International development
World-systems theory
Progress
Progressivism
Development-induced displacement
Manifest destiny
White man's burden
Civilizing mission
Christian mission
White savior
References
Further reading
M. P. Cowen and R. W. Shenton, Doctrines of Development, Routledge (1996), .
Peter W. Preston, Development Theory: An Introduction to the Analysis of Complex Change, Wiley-Blackwell (1996), .
Peter W. Preston, Rethinking Development, Routledge & Kegan Paul Books Ltd (1988), .
Richard Peet with Elaine Hartwick, "Theories of Development", The Guilford Press (1999)
Walt Whitman Rostow, (1959), The stages of economic growth. The Economic History Review, 12: 1–16.
Tourette, J. E. L. (1964), Technological change and equilibrium growth in the Harrod-Domar model. Kyklos, 17: 207–226.
Durkheim, Emile. The Division of Labor in Society. Trans. Lewis A. Coser. New York: Free Press, 1997, pp. 39, 60, 108.
John Rapley (2007), Understanding Development. Boulder, London: Lynne Rienner Publishers
Meadows et al. (1972), The Limits to Growth, Universe Books,
Hunt, D. (1989), Economic Theories of Development: An Analysis of Competing Paradigms. London: Harvester Wheatsheaf
Greig, A., D. Hulme and M. Turner (2007). "Challenging Global Inequality. Development Theory and Practice in the 21st century". Palgrave Macmillan, New York.
Gender empowerment | Gender empowerment is the empowerment of people of any gender. While conventionally, the aspect of it is mentioned for empowerment of women, the concept stresses the distinction between biological sex and gender as a role, also referring to other marginalized genders in a particular political or social context.
Gender empowerment has become a significant topic of discussion in regard to development and economics. Entire nations, businesses, communities, and groups can benefit from the implementation of programs and policies that adopt the notion of women's empowerment. Empowerment is one of the main procedural concerns when addressing human rights and development. The Human Development and Capabilities Approach, the Millennium Development Goals, and other credible approaches/goals point to empowerment and participation as a necessary step if a country is to overcome the obstacles associated with poverty and development.
Measuring
Gender empowerment can be measured through the Gender Empowerment Measure, or the GEM. The GEM shows women's participation in a given nation, both politically and economically. The GEM is calculated by tracking "the share of seats in parliament held by women; of female legislators, senior officials and managers; and of female professional and technical workers; and the gender disparity in earned income, reflecting economic independence." It then ranks countries given this information. Other measures that take into account the importance of female participation and equality include the Gender Parity Index and the Gender Development Index (GDI).
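The aggregation behind such an index can be sketched in a few lines of code. The sketch below is illustrative only: the function names, the equal population weights and the simple averaging are assumptions, and the official UNDP GEM methodology (which builds on "equally distributed equivalent percentages") differs in detail.

```python
def edep(female_share, male_share, female_pop=0.5, male_pop=0.5):
    """Equally distributed equivalent percentage: a population-weighted
    harmonic mean that penalises unequal shares between women and men.
    Shares must be strictly between 0 and 1."""
    return 1.0 / (female_pop / female_share + male_pop / male_share)

def gem_like_index(parliament_f, officials_f, professional_f, income_f):
    """Illustrative GEM-style composite from four female shares (0-1):
    parliamentary seats, senior officials/managers, professional and
    technical workers, and earned income. Each dimension is converted to
    an EDEP, indexed against 0.5 (parity); the two labour-market shares
    are averaged first, then the three dimensions are averaged."""
    political = edep(parliament_f, 1 - parliament_f) / 0.5
    economic = (edep(officials_f, 1 - officials_f)
                + edep(professional_f, 1 - professional_f)) / 2 / 0.5
    income = edep(income_f, 1 - income_f) / 0.5
    return (political + economic + income) / 3

# Hypothetical country: women hold 30% of seats, 35% of senior positions,
# 45% of professional jobs and earn 40% of total income.
print(round(gem_like_index(0.30, 0.35, 0.45, 0.40), 3))  # ~0.917
```

The harmonic-mean step is the design choice that makes the index an empowerment measure rather than a simple average: it pulls the score down sharply whenever one gender's share approaches zero.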
See also
Anti-gender movement
Diversity, equity, and inclusion
Diversity (politics)
Diversity training
Gender and politics
Gender diversity
Gender equality
Gender essentialism
Respect
Suicide in LGBT youth
Sociology of gender
Women's empowerment
References
Policy | Policy is a deliberate system of guidelines to guide decisions and achieve rational outcomes. A policy is a statement of intent and is implemented as a procedure or protocol. Policies are generally adopted by a governance body within an organization. Policies can assist in both subjective and objective decision making. Policies used in subjective decision-making usually assist senior management with decisions that must be based on the relative merits of a number of factors, and as a result, are often hard to test objectively, e.g. work–life balance policy. Moreover, governments and other institutions have policies in the form of laws, regulations, procedures, administrative actions, incentives and voluntary practices. Frequently, resource allocations mirror policy decisions.
Policy is a blueprint of the organizational activities which are repetitive/routine in nature.
In contrast, policies to assist in objective decision-making are usually operational in nature and can be objectively tested, e.g. password policy.
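The contrast can be made concrete with a small example: an objective policy such as a password policy can be checked mechanically. The specific rules below are illustrative assumptions, not requirements drawn from any particular organisation's standard.

```python
import re

# Illustrative only: the thresholds below are assumptions chosen to show
# that such a policy can be tested objectively, not rules from any
# particular organisation's standard.
PASSWORD_POLICY = {
    "min_length": 12,
    "require_digit": True,
    "require_mixed_case": True,
}

def complies(password: str, policy: dict = PASSWORD_POLICY) -> bool:
    """Return True if the password satisfies every rule in the policy."""
    if len(password) < policy["min_length"]:
        return False
    if policy["require_digit"] and not re.search(r"\d", password):
        return False
    if policy["require_mixed_case"] and not (
        re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
    ):
        return False
    return True

print(complies("correct Horse 42 battery"))  # True
print(complies("short1A"))                    # False: too short
```

A work-life balance policy, by contrast, has no comparable pass/fail test, which is what makes it a subjective rather than an operational policy.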
The term may apply to government, public sector organizations and groups, as well as individuals. Presidential executive orders, corporate privacy policies, and parliamentary rules of order are all examples of policy. Policy differs from rules or law: while the law can compel or prohibit behaviors (e.g. a law requiring the payment of taxes on income), policy merely guides actions toward those that are most likely to achieve the desired outcome.
Policy or policy study may also refer to the process of making important organizational decisions, including the identification of different alternatives such as programs or spending priorities, and choosing among them on the basis of the impact they will have. Policies can be understood as political, managerial, financial, and administrative mechanisms arranged to reach explicit goals. In public corporate finance, a critical accounting policy is a policy for a firm/company or an industry that is considered to have a notably high subjective element, and that has a material impact on the financial statements.
It has been argued that policies ought to be evidence-based. An individual or organization is justified in claiming that a specific policy is evidence-based if, and only if, three conditions are met. First, the individual or organization possesses comparative evidence about the effects of the specific policy in comparison to the effects of at least one alternative policy. Second, the specific policy is supported by this evidence according to at least one of the individual's or organization's preferences in the given policy area. Third, the individual or organization can provide a sound account for this support by explaining the evidence and preferences that lay the foundation for the claim.
Policies are dynamic; they are not just static lists of goals or laws. Policy blueprints have to be implemented, often with unexpected results. Social policies are what happens 'on the ground' when they are implemented, as well as what happens at the decision making or legislative stage.
When the term policy is used, it may also refer to:
Official government policy (legislation or guidelines that govern how laws should be put into operation)
Broad ideas and goals in political manifestos and pamphlets
A company or organization's policy on a particular topic. For example, the equal opportunity policy of a company shows that the company aims to treat all its staff equally.
The actions an organization actually takes may often vary significantly from its stated policy. This difference is sometimes caused by political compromise over policy, while in other situations it is caused by lack of policy implementation and enforcement. Implementing policy may have unexpected results, stemming from a policy whose reach extends further than the problem it was originally crafted to address. Additionally, unpredictable results may arise from selective or idiosyncratic enforcement of policy.
Effects
Intended effects and policy-design
The intended effects of a policy vary widely according to the organization and the context in which they are made. Broadly, policies are typically instituted to avoid some negative effect that has been noticed in the organization, or to seek some positive benefit.
A meta-analysis of policy studies concluded that international treaties that aim to foster global cooperation have mostly failed to produce their intended effects in addressing global challenges, and sometimes may have led to unintended harmful or net negative effects. The study suggests enforcement mechanisms are the "only modifiable treaty design choice" with the potential to improve the effectiveness.
Corporate purchasing policies provide an example of how organizations attempt to avoid negative effects. Many large companies have policies that all purchases above a certain value must be performed through a purchasing process. By requiring this standard purchasing process through policy, the organization can limit waste and standardize the way purchasing is done.
The State of California provides an example of benefit-seeking policy. In recent years, the number of hybrid cars in California has increased dramatically, in part because of policy changes in Federal law that provided USD $1,500 in tax credits (since phased out) and enabled the use of high-occupancy vehicle lanes by drivers of hybrid vehicles. In this case, the organization (state and/or federal government) created an effect (increased ownership and use of hybrid vehicles) through policy (tax breaks, highway lanes).
Unintended
Policies frequently have side effects or unintended consequences. Because the environments that policies seek to influence or manipulate are typically complex adaptive systems (e.g. governments, societies, large companies), making a policy change can have counterintuitive results. For example, a government may make a policy decision to raise taxes, in hopes of increasing overall tax revenue. Depending on the size of the tax increase, this may have the overall effect of reducing tax revenue by causing capital flight or by creating a rate so high that citizens are deterred from earning the money that is taxed.
The policy formulation process theoretically includes an attempt to assess as many areas of potential policy impact as possible, to lessen the chances that a given policy will have unexpected or unintended consequences.
Cycle
In political science, the policy cycle is a tool commonly used for analyzing the development of a policy. It can also be referred to as a "stages model" or "stages heuristic". It is thus a rule of thumb rather than the actual reality of how policy is created, but has been influential in how political scientists looked at policy in general. It was developed as a theory from Harold Lasswell's work. It is called the policy cycle as the final stage (evaluation) often leads back to the first stage (problem definition), thus restarting the cycle.
Harold Lasswell's popular model of the policy cycle divided the process into seven distinct stages, asking questions of both how and why public policies should be made. With the stages ranging from (1) intelligence, (2) promotion, (3) prescription, (4) invocation, (5) application, (6) termination and (7) appraisal, this process inherently attempts to combine policy implementation to formulated policy goals.
One version by James E. Anderson, in his Public Policy-Making (1974) has the following stages:
Agenda setting (Problem identification) – The recognition of certain subject as a problem demanding further government attention.
Policy formulation – Involves exploring a variety of options or alternative courses of action available for addressing the problem (appraisal, dialogue, formulation, and consolidation).
Decision-making – Government decides on an ultimate course of action, whether to perpetuate the policy status quo or alter it. (Decision could be 'positive', 'negative', or 'no-action')
Implementation – The ultimate decision made earlier will be put into practice.
Evaluation – Assesses the effectiveness of a public policy in terms of its perceived intentions and results. Policy actors attempt to determine whether the course of action is a success or failure by examining its impact and outcomes.
Anderson's version of the stages model is the most common and widely recognized of the models. However, it can also be seen as flawed: according to Paul A. Sabatier, the model has "outlived its usefulness" and should be replaced. This has led to a paradoxical situation in which current research and updated versions of the model continue to rely on the framework created by Anderson, even though the very concept of a stages model has been discredited, undermining the cycle's status as a heuristic.
Due to these problems, alternative and newer versions of the model have aimed to create a more comprehensive view of the policy cycle. An eight-step policy cycle is developed in detail in The Australian Policy Handbook by Peter Bridgman and Glyn Davis (joined by Catherine Althaus for its 4th and 5th editions):
Issue identification
Policy analysis
Consultation (which permeates the entire process)
Policy instrument development
Building coordination and coalitions
Program Design: Decision making
Policy Implementation
Policy Evaluation
The Althaus, Bridgman & Davis model is heuristic and iterative. It is intentionally normative and not meant to be diagnostic or predictive. Policy cycles are typically characterized as adopting a classical approach, and tend to describe processes from the perspective of policy decision makers. Accordingly, some post-positivist academics challenge cyclical models as unresponsive and unrealistic, preferring systemic and more complex models. They consider a broader range of actors involved in the policy space, including civil society organizations, the media, intellectuals, think tanks or policy research institutes, corporations, lobbyists, etc.
Content
Policies are typically promulgated through official written documents. Policy documents often come with the endorsement or signature of the executive powers within an organization to legitimize the policy and demonstrate that it is considered in force. Such documents often have standard formats that are particular to the organization issuing the policy. While such formats differ in form, policy documents usually contain certain standard components including:
A purpose statement, outlining why the organization is issuing the policy, and what its desired effect or outcome of the policy should be.
An applicability and scope statement, describing who the policy affects and which actions are impacted by the policy. The applicability and scope may expressly exclude certain people, organizations, or actions from the policy requirements. Applicability and scope is used to focus the policy on only the desired targets, and avoid unintended consequences where possible.
An effective date which indicates when the policy comes into force. Retroactive policies are rare, but can be found.
A responsibilities section, indicating which parties and organizations are responsible for carrying out individual policy statements. Many policies may require the establishment of some ongoing function or action. For example, a purchasing policy might specify that a purchasing office be created to process purchase requests, and that this office would be responsible for ongoing actions. Responsibilities often include identification of any relevant oversight and/or governance structures.
Policy statements indicating the specific regulations, requirements, or modifications to organizational behavior that the policy is creating. Policy statements are extremely diverse depending on the organization and intent, and may take almost any form.
Some policies may contain additional sections, including:
Background, indicating any reasons, history, ethical background statements, and/or intent that led to the creation of the policy, which may be listed as motivating factors. This information is often quite valuable when policies must be evaluated or used in ambiguous situations, just as the intent of a law can be useful to a court when deciding a case that involves that law.
Definitions, providing clear and unambiguous definitions for terms and concepts found in the policy document.
Types
The American political scientist Theodore J. Lowi proposed four types of policy, namely distributive, redistributive, regulatory and constituent in his article "Four Systems of Policy, Politics and Choice" and in "American Business, Public Policy, Case Studies and Political Theory". Policy addresses the intent of the organization, whether government, business, professional, or voluntary. Policy is intended to affect the "real" world, by guiding the decisions that are made. Whether they are formally written or not, most organizations have identified policies.
Policies may be classified in many different ways. The following is a sample of several different types of policies broken down by their effect on members of the organization.
Distributive
Distributive policies involve government allocation of resources, services, or benefits to specific groups or individuals in society. The primary characteristic of distributive policies is that they aim to provide goods or services to a targeted group without significantly reducing the availability or benefits for other groups. These policies are often designed to promote economic or social equity. Examples include subsidies for farmers, social welfare programs, and funding for public education.
Regulatory
Regulatory policies aim to control or regulate the behavior and practices of individuals, organizations, or industries. These policies are intended to address issues related to public safety, consumer protection, and environmental conservation. Regulatory policies involve government intervention in the form of laws, regulations, and oversight. Examples include environmental regulations, labor laws, and safety standards for food and drugs. Another example of a fairly successful public regulatory policy is that of a highway speed limit.
Constituent
Constituent policies are less concerned with the allocation of resources or regulation of behavior, and more focused on representing the preferences and values of the public. These policies involve addressing public concerns and issues that may not have direct economic or regulatory implications. They often reflect the broader values and beliefs of the society. Constituent policies can include symbolic gestures, such as resolutions recognizing historical events or designating official state symbols. Constituent policies also deal with fiscal policy in some circumstances.
Redistributive
Redistributive policies involve the transfer of resources or benefits from one group to another, typically from the wealthy or privileged to the less advantaged. These policies seek to reduce economic or social inequality by taking from those with more and providing for those with less. Progressive taxation, welfare programs, and financial assistance to low-income households are examples of redistributive policies.
Notable schools
Balsillie School of International Affairs
Blavatnik School of Government
Goldman School of Public Policy at the University of California Berkeley
London School of Economics
King's College London
The University of Chicago Harris School of Public Policy
Heinz College of Information Systems and Public Policy at Carnegie Mellon University
Harvard Kennedy School of Government
Hertie School of Governance
Munk School of Global Affairs and Public Policy
Norman Paterson School of International Affairs
Paul H. Nitze School of Advanced International Studies
Princeton School of Public and International Affairs
Sciences Po Paris
University of Cambridge
University of Glasgow
University of Warwick
Paris Nanterre University
Subtypes
Induction of policies
In contemporary systems of market-oriented economics and delegate-based voting and decision-making, policy mixes are usually introduced depending on factors that include popularity with the public (influenced via media and education as well as by cultural identity), contemporary economics (such as what is beneficial or burdensome in the near and long term) and the general state of international competition (often the focus of geopolitics). Broadly, considerations include political competition with other parties and social stability, as well as national interests within the framework of global dynamics.
Policies or policy-elements can be designed and proposed by a multitude of actors or collaborating actor-networks in various ways. Alternative options as well as organisations and decision-makers that would be responsible for enacting these policies – or explaining their rejection – can be identified. "Policy sequencing" is a concept that integrates mixes of existing or hypothetical policies and arranges them in a sequential order. The use of such frameworks may make complex polycentric governance for the achievement of goals such as climate change mitigation and stoppage of deforestation more easily achievable or more effective, fair, efficient, legitimate and rapidly implemented.
Contemporary ways of policy-making or decision-making may depend on exogenously-driven shocks that "undermine institutionally entrenched policy equilibria" and may not always be functional in terms of sufficiently preventing and solving problems, especially when unpopular policies, regulation of influential entities with vested interests, international coordination and non-reactive strategic long-term thinking and management are needed. In that sense, "reactive sequencing" refers to "the notion that early events in a sequence set in motion a chain of causally linked reactions and counter-reactions which trigger subsequent development". This concept is separate from policy sequencing in that the latter may require actions from a multitude of parties at different stages for the sequence to progress, rather than an initial "shock", force-exertion or catalysis of chains of events.
In the modern highly interconnected world, polycentric governance has become ever more important – such "requires a complex combination of multiple levels and diverse types of organizations drawn from the public, private, and voluntary sectors that have overlapping realms of responsibility and functional capacities". Key components of policies include command-and-control measures, enabling measures, monitoring, incentives and disincentives.
Science-based policy, related to the narrower concept of evidence-based policy, may also have become more important. A review of worldwide pollution as a major cause of death, which found little progress in addressing it, suggests that successful control of conjoined threats such as pollution, climate change, and biodiversity loss requires a global "formal science–policy interface", e.g. to "inform intervention, influence research, and guide funding". Broadly, science–policy interfaces include both science in policy and science for policy.
TESCREAL
TESCREAL is an acronym neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres that stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism". Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. They say this is a movement that allows its proponents to use the threat of human extinction to justify expensive or detrimental projects. They consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.
Origin
Gebru and Torres coined "TESCREAL" in 2023, first using it in a draft of a paper titled "The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence". First Monday published the paper in April 2024, though Torres and Gebru popularized the term elsewhere before the paper's publication. According to Gebru and Torres, transhumanism, extropianism, singularitarianism, (modern) cosmism, rationalism, effective altruism, and longtermism are a "bundle" of "interconnected and overlapping ideologies" that emerged from 20th-century eugenics, with shared progenitors. They use the term "TESCREAList" to refer to people who subscribe to, or appear to endorse, most or all of the ideologies captured in the acronym.
Analysis
According to critics of these philosophies, TESCREAL describes overlapping movements endorsed by prominent people in the tech industry to provide intellectual backing to pursue and prioritize projects including artificial general intelligence (AGI), life extension, and space colonization. Science fiction author Charles Stross, using the example of space colonization, argued that the ideologies allow billionaires to pursue massive personal projects driven by a right-wing interpretation of science fiction by arguing that not to pursue such projects poses an existential risk to society. Gebru and Torres write that, using the threat of extinction, TESCREALists can justify "attempts to build unscoped systems which are inherently unsafe". Media scholar Ethan Zuckerman argues that by considering only goals that are valuable to the TESCREAL movement, futuristic projects can be justified despite more immediate drawbacks such as racial inequity, algorithmic bias, and environmental degradation. Speaking on Radio New Zealand, politics writer Danyl McLauchlan said that many of these philosophies may have started off with good intentions but might have been pushed "to a point of ridiculousness."
Philosopher Yogi Hale Hendlin has argued that by both ignoring the human causes of societal problems and over-engineering solutions, TESCREALists ignore the context in which many problems arise. Camille Sojit Pejcha wrote in Document Journal that TESCREAL is a tool for tech elites to concentrate power. In The Washington Spectator, Dave Troy called TESCREAL an "ends justifies the means" movement that is antithetical to "democratic, inclusive, fair, patient, and just governance". Gil Duran wrote that "TESCREAL", "authoritarian technocracy", and "techno-optimism" were phrases used in early 2024 to describe a new ideology emerging in the tech industry.
Gebru, Torres, and others have likened TESCREAL to a secular religion due to its parallels to Christian theology and eschatology. Writers in Current Affairs compared these philosophies and the ensuing techno-optimism to "any other monomaniacal faith... in which doubters are seen as enemies and beliefs are accepted without evidence". They argue pursuing TESCREAL would prevent an actual equitable shared future.
Artificial General Intelligence (AGI)
Much of the discourse about existential risk from AGI occurs among supporters of the TESCREAL ideologies. TESCREALists tend to be described either as "AI accelerationists", who see AI as the only way to pursue a utopian future in which problems are solved, or as "AI doomers", who see AI as likely to be unaligned with human survival and likely to cause human extinction. Despite the risk, many doomers consider the development of AGI inevitable and argue that only by developing and aligning AGI first can existential risk be averted.
Gebru has likened the conflict between accelerationists and doomers to a "secular religion selling AGI enabled utopia and apocalypse". Torres and Gebru argue that both groups use hypothetical AI-driven apocalypses and utopian futures to justify unlimited research, development, and deregulation of technology. By considering only far-reaching future consequences, creating hype for unproven technology, and fear-mongering, Torres and Gebru allege TESCREALists distract from the impacts of technology that may adversely affect society, disproportionately harm minorities through algorithmic bias, and have a detrimental impact on the environment.
Pharmaceuticals
Neşe Devenot has used the TESCREAL acronym to refer to "global financial and tech elites" who promote new uses of psychedelic drugs as mental health treatments, not because they want to help people, but so that they can make money on the sale of these pharmaceuticals as part of a plan to increase inequality.
Claimed bias against minorities
Gebru and Torres claim that TESCREAL ideologies directly originate from 20th-century eugenics and that the bundle of ideologies advocates a second wave of new eugenics. Others have similarly argued that the TESCREAL ideologies developed from earlier philosophies that were used to justify mass murder and genocide. Some prominent figures who have contributed to TESCREAL ideologies have been alleged to be racist and sexist. McLauchlan has said that, while "some people in these groups want to genetically engineer superintelligent humans, or replace the entire species with a superior form of intelligence" others "like the effective altruists, for example, most of them are just in it to help very poor people ... they are kind of shocked ... that they've been lumped into this malevolent ... eugenics conspiracy".
Criticism and debate
Writing in Asterisk, a magazine related to effective altruism, Ozy Brennan criticized Gebru's and Torres's grouping of different philosophies as if they were a "monolithic" movement. Brennan argues Torres has misunderstood these different philosophies, and has taken philosophical thought experiments out of context. James Pethokoukis, of the American Enterprise Institute, disagrees with criticizing proponents of TESCREAL. He argues that the tech billionaires criticized in a Scientific American article for allegedly espousing TESCREAL have significantly advanced society. McLauchlan has noted that critics of the TESCREAL bundle have objected to what they see as disparate and sometimes conflicting ideologies being grouped together, but opines that TESCREAL is a good way to describe and consolidate many of the "grand bizarre ideologies in Silicon Valley". Eli Sennesh and James Hughes, publishing in the blog for the transhumanist Institute for Ethics and Emerging Technologies, have argued that TESCREAL is a left-wing conspiracy theory that unnecessarily groups disparate philosophies together without understanding the mutually exclusive tenets in each.
According to Torres, "If advanced technologies continue to be developed at the current rate, a global-scale catastrophe is almost certainly a matter of when rather than if." Torres believes that "perhaps the only way to actually attain a state of 'existential security' is to slow down or completely halt further technological innovation", and criticized the longtermist view that technology, although dangerous, is essential for human civilization to achieve its full potential. Brennan contends that Torres's proposal to slow or halt technological development represents a more extreme position than TESCREAL ideologies, preventing many improvements in quality of life, healthcare, and poverty reduction that technological progress enables.
Alleged TESCREALists
Venture capitalist Marc Andreessen has self-identified as a TESCREAList. He published the "Techno-Optimist Manifesto" in October 2023, which Jag Bhalla and Nathan J. Robinson have called a "perfect example" of the TESCREAL ideologies. In the document, he argues that more advanced artificial intelligence could save countless future potential lives, and that those working to slow or prevent its development should be condemned as murderers.
Elon Musk has been described as sympathetic to some TESCREAL ideologies. In August 2022, Musk tweeted that William MacAskill's longtermist book What We Owe the Future was a "close match for my philosophy". Some writers believe Musk's Neuralink pursues TESCREAList goals. Some AI experts have complained about the focus of Musk's XAI company on existential risk, arguing that it and other AI companies have ties to TESCREAL movements. Dave Troy believes Musk's natalist views originate from TESCREAL ideals.
It has also been suggested that Peter Thiel is sympathetic to TESCREAL ideas. Benjamin Svetkey wrote in The Hollywood Reporter that Thiel and other Silicon Valley CEOs who support the Donald Trump 2024 presidential campaign are pushing for policies that would shut down "regulators whose outdated restrictions on things like human experimentation are slowing down progress toward a technotopian paradise".
Sam Altman and much of the OpenAI board has been described as supporting TESCREAL movements, especially in the context of his attempted firing in 2023. Gebru and Torres have urged Altman not to pursue TESCREAL ideals. Lorraine Redaud writing in Charlie Hebdo described Sam Altman and multiple other Silicon Valley executives as supporting TESCREAL ideals.
Self-identified transhumanists Nick Bostrom and Eliezer Yudkowsky, both influential in discussions of existential risk from AI, have also been described as leaders of the TESCREAL movement. Redaud said Bostrom supported some ideals "in line with the TESCREALists movement".
Sam Bankman-Fried, former CEO of the FTX cryptocurrency exchange, was a prominent and self-identified member of the effective altruist community. According to The Guardian, since FTX's collapse, administrators of the bankruptcy estate have been trying to recoup about $5 million that they allege was transferred to a nonprofit to help secure the purchase of a historic hotel that has been repurposed for conferences and workshops associated with longtermism, rationalism, and effective altruism. The property hosted liberal eugenicists and other speakers the Guardian said had racist and misogynistic histories.
Longtermist and effective altruist William MacAskill, who frequently collaborated with Bankman-Fried to coordinate philanthropic initiatives, has been described as a TESCREAList.
See also
Effective accelerationism
Utilitarianism
The Californian Ideology
Heritability
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of variation in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?"
Other causes of measured variation in a trait are characterized as environmental factors, including observational error. In human studies of heritability these are often apportioned into factors from "shared environment" and "non-shared environment" based on whether they tend to result in persons brought up in the same household being more or less similar to persons who were not.
Heritability is estimated by comparing individual phenotypic variation among related individuals in a population, by examining the association between individual phenotype and genotype data, or even by modeling summary-level data from genome-wide association studies (GWAS). Heritability is an important concept in quantitative genetics, particularly in selective breeding and behavior genetics (for instance, twin studies). It is a source of much confusion because its technical definition differs from its commonly understood folk definition, and its use can therefore convey the incorrect impression that behavioral traits are "inherited" or specifically passed down through the genes. Behavioral geneticists also conduct heritability analyses based on the assumption that genes and environments contribute in a separate, additive manner to behavioral traits.
Overview
Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment. High heritability of a trait, consequently, does not necessarily mean that the trait is not very susceptible to environmental influences. Heritability can also change as a result of changes in the environment, migration, inbreeding, or the way in which heritability itself is measured in the population under study. The heritability of a trait should not be interpreted as a measure of the extent to which said trait is genetically determined in an individual.
The extent of dependence of phenotype on environment can also be a function of the genes involved. Matters of heritability are complicated because genes may canalize a phenotype, making its expression almost inevitable in all occurring environments. Individuals with the same genotype can also exhibit different phenotypes through a mechanism called phenotypic plasticity, which makes heritability difficult to measure in some cases. Recent insights in molecular biology have identified changes in transcriptional activity of individual genes associated with environmental changes. However, there are a large number of genes whose transcription is not affected by the environment.
Estimates of heritability use statistical analyses to help to identify the causes of differences between individuals. Since heritability is concerned with variance, it is necessarily an account of the differences between individuals in a population. Heritability can be univariate – examining a single trait – or multivariate – examining the genetic and environmental associations between multiple traits at once. This allows a test of the genetic overlap between different phenotypes: for instance hair color and eye color. Environment and genetics may also interact, and heritability analyses can test for and examine these interactions (GxE models).
A prerequisite for heritability analyses is that there is some population variation to account for. This last point highlights the fact that heritability cannot take into account the effect of factors which are invariant in the population. Factors may be invariant if they are absent and do not exist in the population, such as no one having access to a particular antibiotic, or because they are omni-present, like if everyone is drinking coffee. In practice, all human behavioral traits vary and almost all traits show some heritability.
Definition
Any particular phenotype can be modeled as the sum of genetic and environmental effects:
Phenotype (P) = Genotype (G) + Environment (E).
Likewise the phenotypic variance in the trait – Var (P) – is the sum of effects as follows:
Var(P) = Var(G) + Var(E) + 2 Cov(G,E).
In a planned experiment Cov(G,E) can be controlled and held at 0. In this case, heritability, H2, is defined as
H2 = Var(G) / Var(P).
H2 is the broad-sense heritability. This reflects all the genetic contributions to a population's phenotypic variance including additive, dominant, and epistatic (multi-genic interactions), as well as maternal and paternal effects, where individuals are directly affected by their parents' phenotype, such as with milk production in mammals.
A particularly important component of the genetic variance is the additive variance, Var(A), which is the variance due to the average effects (additive effects) of the alleles. Since each parent passes a single allele per locus to each offspring, parent-offspring resemblance depends upon the average effect of single alleles. Additive variance represents, therefore, the genetic component of variance responsible for parent-offspring resemblance. The additive genetic portion of the phenotypic variance is known as narrow-sense heritability and is defined as
h2 = Var(A) / Var(P).
An upper case H2 is used to denote broad sense, and lower case h2 for narrow sense.
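A minimal Python sketch of these two definitions, using hypothetical variance components (the numbers below are illustrative only and are not taken from the text above):

```python
# Broad- and narrow-sense heritability from variance components,
# following the definitions above. All figures are hypothetical.

def broad_sense_heritability(var_g, var_e, cov_ge=0.0):
    """H2 = Var(G) / Var(P), with Var(P) = Var(G) + Var(E) + 2 Cov(G,E)."""
    var_p = var_g + var_e + 2 * cov_ge
    return var_g / var_p

def narrow_sense_heritability(var_a, var_g, var_e, cov_ge=0.0):
    """h2 = Var(A) / Var(P), where Var(A) is the additive part of Var(G)."""
    var_p = var_g + var_e + 2 * cov_ge
    return var_a / var_p

var_a, var_d = 30.0, 10.0      # hypothetical additive and non-additive genetic variance
var_g = var_a + var_d          # total genetic variance
var_e = 60.0                   # hypothetical environmental variance

print(broad_sense_heritability(var_g, var_e))          # 0.4
print(narrow_sense_heritability(var_a, var_g, var_e))  # 0.3
```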
For traits which are not continuous but dichotomous such as an additional toe or certain diseases, the contribution of the various alleles can be considered to be a sum, which past a threshold, manifests itself as the trait, giving the liability threshold model in which heritability can be estimated and selection modeled.
Additive variance is important for selection. If a selective pressure such as improving livestock is exerted, the response of the trait is directly related to narrow-sense heritability. The mean of the trait will increase in the next generation as a function of how much the mean of the selected parents differs from the mean of the population from which the selected parents were chosen. The observed response to selection leads to an estimate of the narrow-sense heritability (called realized heritability). This is the principle underlying artificial selection or breeding.
Example
The simplest genetic model involves a single locus with two alleles (b and B) affecting one quantitative phenotype.
The number of B alleles can be 0, 1, or 2. For any genotype, (Bi,Bj), where Bi and Bj are either 0 or 1, the expected phenotype can then be written as the sum of the overall mean, a linear effect, and a dominance deviation (one can think of the dominance term as an interaction between Bi and Bj):
P(Bi,Bj) = μ + αi + αj + dij.
The additive genetic variance at this locus is the weighted average of the squares of the additive effects, with the genotype frequencies as the weights. There is a similar relationship for the variance of dominance deviations, again weighted by the genotype frequencies.
The linear regression of phenotype on genotype is shown in Figure 1.
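The additive and dominance variance at a single biallelic locus can be computed numerically. The sketch below assumes Hardy–Weinberg proportions and the standard Falconer parametrization (genotypic values −a, d, +a for bb, Bb, BB); that parametrization and the numbers used are illustrative assumptions rather than the notation of the text above:

```python
# Additive and dominance variance at one biallelic locus, assuming
# Hardy-Weinberg proportions and the Falconer parametrization
# (genotypic values -a, d, +a for bb, Bb, BB). Numbers are hypothetical.

def single_locus_variances(p, a, d):
    """Return (Var_A, Var_D) for frequency p of the B allele."""
    q = 1.0 - p
    alpha = a + d * (q - p)           # average effect of an allele substitution
    var_a = 2 * p * q * alpha ** 2    # additive genetic variance
    var_d = (2 * p * q * d) ** 2      # dominance variance
    return var_a, var_d

var_a, var_d = single_locus_variances(p=0.3, a=1.0, d=0.5)
print(var_a, var_d)   # roughly 0.605 and 0.044
```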
Assumptions
Estimates of the total heritability of human traits assume the absence of epistasis, which has been called the "assumption of additivity". Although some researchers have cited such estimates in support of the existence of "missing heritability" unaccounted for by known genetic loci, the assumption of additivity may render these estimates invalid. There is also some empirical evidence that the additivity assumption is frequently violated in behavior genetic studies of adolescent intelligence and academic achievement.
Estimating heritability
Since only P can be observed or measured directly, heritability must be estimated from the similarities observed in subjects varying in their level of genetic or environmental similarity. The statistical analyses required to estimate the genetic and environmental components of variance depend on the sample characteristics. Briefly, better estimates are obtained using data from individuals with widely varying levels of genetic relationship - such as twins, siblings, parents and offspring, rather than from more distantly related (and therefore less similar) subjects. The standard error for heritability estimates is improved with large sample sizes.
In non-human populations it is often possible to collect information in a controlled way. For example, among farm animals it is easy to arrange for a bull to produce offspring from a large number of cows and to control environments. Such experimental control is generally not possible when gathering human data, which instead relies on naturally occurring relationships and environments.
In classical quantitative genetics, there were two schools of thought regarding estimation of heritability.
One school of thought was developed by Sewall Wright at The University of Chicago, and further popularized by C. C. Li (University of Chicago) and J. L. Lush (Iowa State University). It is based on the analysis of correlations and, by extension, regression. Path Analysis was developed by Sewall Wright as a way of estimating heritability.
The second was originally developed by R. A. Fisher and expanded at The University of Edinburgh, Iowa State University, and North Carolina State University, as well as other schools. It is based on the analysis of variance of breeding studies, using the intraclass correlation of relatives. Various methods of estimating components of variance (and, hence, heritability) from ANOVA are used in these analyses.
Today, heritability can be estimated from general pedigrees using linear mixed models and from genomic relatedness estimated from genetic markers.
Studies of human heritability often use adoption study designs, frequently with identical twins who have been separated early in life and raised in different environments. Such individuals have identical genotypes and can be used to separate the effects of genotype and environment. A limit of this design is the shared prenatal environment and the relatively low numbers of twins reared apart. A second and more common design is the twin study, in which the similarity of identical and fraternal twins is used to estimate heritability. These studies can be limited by the fact that identical twins are not completely genetically identical, potentially resulting in an underestimation of heritability.
In observational studies, or because of evocative effects (where a genome evokes environments by its effect on them), G and E may covary: gene environment correlation. Depending on the methods used to estimate heritability, correlations between genetic factors and shared or non-shared environments may or may not be confounded with heritability.
Regression/correlation methods of estimation
The first school of estimation uses regression and correlation to estimate heritability.
Comparison of close relatives
In the comparison of relatives, we find that in general,
h2 = b / r = t / r
where r can be thought of as the coefficient of relatedness, b is the coefficient of regression and t is the coefficient of correlation.
Parent-offspring regression
Heritability may be estimated by comparing parent and offspring traits (as in Fig. 2). The slope of the line (0.57) approximates the heritability of the trait when offspring values are regressed against the average trait in the parents. If only one parent's value is used then heritability is twice the slope. (This is the source of the term "regression," since the offspring values always tend to regress to the mean value for the population, i.e., the slope is always less than one). This regression effect also underlies the DeFries–Fulker method for analyzing twins selected for one member being affected.
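A minimal simulation sketch of the midparent-offspring regression described above; the additive model, variance components, and sample size are illustrative assumptions:

```python
# Estimating narrow-sense heritability as the slope of offspring phenotype
# on midparent phenotype, using simulated data under a simple additive model.
import numpy as np

rng = np.random.default_rng(0)
n, var_a, var_e = 5000, 0.5, 0.5             # hypothetical variance components

a_sire = rng.normal(0, np.sqrt(var_a), n)    # parental breeding values
a_dam = rng.normal(0, np.sqrt(var_a), n)
p_sire = a_sire + rng.normal(0, np.sqrt(var_e), n)   # parental phenotypes
p_dam = a_dam + rng.normal(0, np.sqrt(var_e), n)

# Offspring breeding value = parental average + Mendelian sampling term.
a_off = 0.5 * (a_sire + a_dam) + rng.normal(0, np.sqrt(var_a / 2), n)
p_off = a_off + rng.normal(0, np.sqrt(var_e), n)

midparent = 0.5 * (p_sire + p_dam)
slope = np.polyfit(midparent, p_off, 1)[0]
print(slope)   # close to h2 = var_a / (var_a + var_e) = 0.5
```

If only a single parent's phenotype were used instead of the midparent value, the heritability estimate would be twice the fitted slope, as noted above.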
Sibling comparison
A basic approach to heritability can be taken using full-sib designs: comparing similarity between siblings who share both a biological mother and a father. When there is only additive gene action, this sibling phenotypic correlation is an index of familiality – the sum of half the additive genetic variance plus the full effect of the common environment. It thus places an upper limit on additive heritability of twice the full-sib phenotypic correlation. Half-sib designs compare phenotypic traits of siblings that share one parent with other sibling groups.
Twin studies
Heritability for traits in humans is most frequently estimated by comparing resemblances between twins. "The advantage of twin studies, is that the total variance can be split up into genetic, shared or common environmental, and unique environmental components, enabling an accurate estimation of heritability". Fraternal or dizygotic (DZ) twins on average share half their genes (assuming there is no assortative mating for the trait), and so identical or monozygotic (MZ) twins on average are twice as genetically similar as DZ twins. A crude estimate of heritability, then, is approximately twice the difference in correlation between MZ and DZ twins, i.e. Falconer's formula H2=2(r(MZ)-r(DZ)).
The effect of shared environment, c2, contributes to similarity between siblings due to the commonality of the environment they are raised in. Shared environment is approximated by the DZ correlation minus half heritability, which is the degree to which DZ twins share the same genes, c2=DZ-1/2h2. Unique environmental variance, e2, reflects the degree to which identical twins raised together are dissimilar, e2=1-r(MZ).
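A minimal sketch of Falconer's formula and the decomposition just described, using hypothetical twin correlations:

```python
# ACE-style decomposition from MZ and DZ twin correlations (Falconer's formula).
# The input correlations below are hypothetical.

def ace_from_twin_correlations(r_mz, r_dz):
    """Return (h2, c2, e2) from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # heritability: twice the MZ-DZ difference
    c2 = r_mz - h2           # shared environment, equivalently r_dz - h2/2
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return h2, c2, e2

print(ace_from_twin_correlations(r_mz=0.8, r_dz=0.5))   # (0.6, 0.2, 0.2)
```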
Analysis of variance methods of estimation
The second set of methods of estimation of heritability involves ANOVA and estimation of variance components.
Basic model
We use the basic discussion of Kempthorne. Considering only the most basic of genetic models, we can look at the quantitative contribution of a single locus with genotype Gi as
yi = μ + gi + e
where gi is the effect of genotype Gi and e is the environmental effect.
Consider an experiment with a group of sires and their progeny from random dams. Since the progeny get half of their genes from the father and half from their (random) mother, the progeny equation is
zi = μ + (1/2)gi + e
Intraclass correlations
Consider the experiment above. We have two groups of progeny we can compare. The first is comparing the various progeny for an individual sire (called within sire group). The variance will include terms for genetic variance (since they did not all get the same genotype) and environmental variance. This is thought of as an error term.
The second group of progeny are comparisons of means of half sibs with each other (called among sire group). In addition to the error term as in the within sire groups, we have an additional term due to the differences among different means of half sibs. The intraclass correlation is
corr(z, z') = corr(μ + (1/2)g + e, μ + (1/2)g + e') = (1/4)Var(G) / ((1/4)Var(G) + Var(E)),
since environmental effects are independent of each other.
The ANOVA
In an experiment with n sires and r progeny per sire, we can calculate an analysis of variance (ANOVA), using Var(G) as the genetic variance and Var(E) as the environmental variance.
The intraclass correlation between half sibs appears in the expected mean squares, so the variance components, and from them the heritability, can easily be calculated. The expected mean square is calculated from the relationship of the individuals (progeny within a sire are all half-sibs, for example), and an understanding of intraclass correlations.
The use of ANOVA to calculate heritability often fails to account for the presence of gene–environment interactions, because ANOVA has a much lower statistical power for testing for interaction effects than for direct effects.
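A numerical sketch of the half-sib ANOVA logic above; the balanced-design mean-square formulas and the simulated variance components are assumptions made for illustration:

```python
# Heritability from a balanced paternal half-sib design via one-way ANOVA.
# Data are simulated; variance components are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sires, n_progeny, var_a, var_e = 200, 20, 0.4, 0.6

sire_bv = rng.normal(0, np.sqrt(var_a), n_sires)       # sire breeding values
# Each progeny receives half its sire's breeding value; the rest of its
# phenotype (dam contribution, Mendelian sampling, environment) is residual.
resid_sd = np.sqrt(0.75 * var_a + var_e)
y = 0.5 * sire_bv[:, None] + rng.normal(0, resid_sd, (n_sires, n_progeny))

# Balanced one-way ANOVA mean squares.
grand_mean = y.mean()
sire_means = y.mean(axis=1)
ms_between = n_progeny * ((sire_means - grand_mean) ** 2).sum() / (n_sires - 1)
ms_within = ((y - sire_means[:, None]) ** 2).sum() / (n_sires * (n_progeny - 1))

sigma2_sire = (ms_between - ms_within) / n_progeny     # between-sire variance component
t = sigma2_sire / (sigma2_sire + ms_within)            # intraclass correlation of half sibs
print(4 * t)   # about 0.4: half sibs share 1/4 of the additive variance, so h2 = 4t
```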
Model with additive and dominance terms
For a model with additive and dominance terms, but not others, the equation for a single locus is
yij = μ + αi + αj + dij + e
where
αi is the additive effect of the ith allele, αj is the additive effect of the jth allele, dij is the dominance deviation for the ijth genotype, and e is the environment.
Experiments can be run with a similar setup to the one given in Table 1. Using different relationship groups, we can evaluate different intraclass correlations. Using Var(A) as the additive genetic variance and Var(D) as the dominance deviation variance, intraclass correlations become linear functions of these parameters. In general,
Intraclass correlation = r Var(A) + θ Var(D)
where r and θ are found as
r = P[alleles drawn at random from the relationship pair are identical by descent], and
θ = P[genotypes drawn at random from the relationship pair are identical by descent].
Some common relationships and their coefficients are given in Table 2.
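Since Table 2 is not reproduced here, the sketch below uses the standard values of r and θ for a few common relationships to evaluate the linear expression above; the variance components supplied are hypothetical:

```python
# Expected value of r*Var(A) + theta*Var(D) for common relative pairs,
# using the standard coefficients for each relationship.

RELATIONSHIP_COEFFS = {            # (r, theta)
    "identical twins":  (1.0, 1.0),
    "parent-offspring": (0.5, 0.0),
    "full siblings":    (0.5, 0.25),
    "half siblings":    (0.25, 0.0),
}

def genetic_resemblance(relationship, var_a, var_d):
    """r*Var(A) + theta*Var(D) for the named relationship."""
    r, theta = RELATIONSHIP_COEFFS[relationship]
    return r * var_a + theta * var_d

for rel in RELATIONSHIP_COEFFS:
    print(rel, genetic_resemblance(rel, var_a=0.4, var_d=0.1))
```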
Linear mixed models
A wide variety of approaches using linear mixed models have been reported in literature. Via these methods, phenotypic variance is partitioned into genetic, environmental and experimental design variances to estimate heritability. Environmental variance can be explicitly modeled by studying individuals across a broad range of environments, although inference of genetic variance from phenotypic and environmental variance may lead to underestimation of heritability due to the challenge of capturing the full range of environmental influence affecting a trait. Other methods for calculating heritability use data from genome-wide association studies to estimate the influence on a trait by genetic factors, which is reflected by the rate and influence of putatively associated genetic loci (usually single-nucleotide polymorphisms) on the trait. This can lead to underestimation of heritability, however. This discrepancy is referred to as "missing heritability" and reflects the challenge of accurately modeling both genetic and environmental variance in heritability models.
When a large, complex pedigree or another aforementioned type of data is available, heritability and other quantitative genetic parameters can be estimated by restricted maximum likelihood (REML) or Bayesian methods. The raw data will usually have three or more data points for each individual: a code for the sire, a code for the dam and one or several trait values. Different trait values may be for different traits or for different time points of measurement.
The currently popular methodology relies on high degrees of certainty over the identities of the sire and dam; it is not common to treat the sire identity probabilistically. This is not usually a problem, since the methodology is rarely applied to wild populations (although it has been used for several wild ungulate and bird populations), and sires are invariably known with a very high degree of certainty in breeding programmes. There are also algorithms that account for uncertain paternity.
The pedigrees can be viewed using programs such as Pedigree Viewer, and analyzed with programs such as ASReml, VCE, WOMBAT, MCMCglmm within the R environment, or the BLUPF90 family of programs.
Pedigree models are helpful for untangling confounds such as reverse causality, maternal effects such as the prenatal environment, and confounding of genetic dominance, shared environment, and maternal gene effects.
Genomic heritability
When genome-wide genotype data and phenotypes from large population samples are available, one can estimate the relationships between individuals based on their genotypes and use a linear mixed model to estimate the variance explained by the genetic markers. This gives a genomic heritability estimate based on the variance captured by common genetic variants. There are multiple methods that make different adjustments for allele frequency and linkage disequilibrium. Particularly, the method called High-Definition Likelihood (HDL) can estimate genomic heritability using only GWAS summary statistics, making it easier to incorporate large sample size available in various GWAS meta-analysis.
Response to selection
In selective breeding of plants and animals, the expected response to selection of a trait with known narrow-sense heritability can be estimated using the breeder's equation:
R = h2 S
In this equation, the Response to Selection (R) is defined as the realized average difference between the parent generation and the next generation, and the Selection Differential (S) is defined as the average difference between the parent generation and the selected parents.
For example, imagine that a plant breeder is involved in a selective breeding project with the aim of increasing the number of kernels per ear of corn. For the sake of argument, let us assume that the average ear of corn in the parent generation has 100 kernels. Let us also assume that the selected parents produce corn with an average of 120 kernels per ear. If h2 equals 0.5, then the next generation will produce corn with an average of 0.5(120-100) = 10 additional kernels per ear. Therefore, the total number of kernels per ear of corn will equal, on average, 110.
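A minimal sketch of the breeder's equation applied to the corn example above:

```python
# Predicted next-generation mean from the breeder's equation R = h2 * S.

def predicted_offspring_mean(h2, parent_mean, selected_parent_mean):
    s = selected_parent_mean - parent_mean   # selection differential S
    r = h2 * s                               # response to selection R
    return parent_mean + r

print(predicted_offspring_mean(h2=0.5, parent_mean=100, selected_parent_mean=120))  # 110.0
```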
Observing the response to selection in an artificial selection experiment will allow calculation of realized heritability as in Fig. 4.
Heritability in the above equation is equal to the ratio Var(A) / Var(P) only if the genotype and the environmental noise follow Gaussian distributions.
Controversies
Prominent critics of heritability estimates, such as Steven Rose, Jay Joseph, and Richard Bentall, focus largely on their use in the behavioral and social sciences. Bentall has claimed that such heritability scores are typically calculated counterintuitively to derive numerically high scores, that heritability is misinterpreted as genetic determination, and that this alleged bias distracts from other factors that researchers have found more causally important, such as childhood abuse causing later psychosis. Heritability estimates are also inherently limited because they do not convey any information regarding whether genes or environment play a larger role in the development of the trait under study. For this reason, David Moore and David Shenk describe the term "heritability" in the context of behavior genetics as "...one of the most misleading in the history of science" and argue that it has no value except in very rare cases. When studying complex human traits, it is impossible to use heritability analysis to determine the relative contributions of genes and environment, as such traits result from multiple interacting causes. In particular, Feldman and Lewontin emphasize that heritability is itself a function of environmental variation. However, some researchers argue that it is possible to disentangle the two.
The controversy over heritability estimates stems largely from their basis in twin studies. The limited success of molecular-genetic studies in corroborating such population-genetic studies' conclusions is known as the missing heritability problem. Eric Turkheimer has argued that newer molecular methods have vindicated the conventional interpretation of twin studies, although it remains mostly unclear how to explain the relations between genes and behaviors. According to Turkheimer, both genes and environment are heritable, genetic contribution varies by environment, and a focus on heritability distracts from other important factors. Overall, however, heritability remains a widely applicable concept.
See also
Behavioral genetics
Heredity
Heritability of IQ
External links
Stanford Encyclopedia of Philosophy entry on Heredity and Heritability
Quantitative Genetics Resources website, including the two volume book by Lynch and Walsh. Free access
Sustainability metrics and indices
Sustainability metrics and indices are measures of sustainability, using numbers to quantify environmental, social and economic aspects of the world. There are multiple perspectives on how to measure sustainability, as there is no universal standard. Instead, different disciplines and international organizations have offered measures or indicators of how to measure the concept.
While sustainability indicators, indices and reporting systems gained growing popularity in both the public and private sectors, their effectiveness in influencing actual policy and practices often remains limited.
Metrics and indices
Various ways of operationalizing or measuring sustainability have been developed. Since the 2010s, there has been an expansion of interest in Sustainable Development Index (SDI) systems, both in industrialized and, albeit to a lesser extent, in developing countries. SDIs are seen as useful in a wide range of settings, by a wide range of actors: international and intergovernmental bodies; national governments and government departments; economic sectors; administrators of geographic or ecological regions; communities; nongovernmental organizations; and the private sector.
SDI processes are underpinned and driven by the increasing need for improved quality and regularly produced information with better spatial and temporal resolution. Accompanying this need is the requirement, brought in part by the information revolution, to better differentiate between information that matters in any given policy context versus information that is of secondary importance or irrelevant.
A large and still growing number of attempts to create aggregate measures of various aspects of sustainability created a stable of indices that provide a more nuanced perspective on development than economic aggregates such as GDP. Some of the most prominent of these include the Human Development Index (HDI) of the United Nations Development Programme (UNDP); the Ecological footprint of Global Footprint Network and its partner organizations; the Environmental Sustainability Index (ESI) and the pilot Environmental Performance Index (EPI) reported under the World Economic Forum (WEF); or the Genuine Progress Index (GPI) calculated at the national or sub-national level. Parallel to these initiatives, political interest in producing a green GDP that would take at least the cost of pollution and natural capital depletion into account has grown, even if implementation is held back by the reluctance of policymakers and statistical services arising mostly from a concern about conceptual and technical challenges.
At the heart of the debate over different indicators are not only different disciplinary approaches but also different views of development. Some indicators reflect the ideology of globalization and urbanization that seek to define and measure progress on whether different countries or cultures agree to accept industrial technologies in their eco-systems. Other approaches, like those that start from international treaties on cultural rights of indigenous peoples to maintain traditional cultures, measure the ability of those cultures to maintain their traditions within their eco-systems at whatever level of productivity they choose.
The Lempert-Nguyen indicator, devised in 2008 for practitioners, starts with the standards for sustainable development that have been agreed upon by the international community and then looks at whether intergovernmental organizations such as the UNDP and other development actors are applying these principles in their projects and work as a whole.
In using sustainability indicators, it is important to distinguish between three types of sustainability that are often mentioned in international development:
Sustainability of a culture (human system) within its resources and environment;
Sustainability of a specific stream of benefits or productivity (usually just an economic measure); and
Sustainability of a particular institution or project without additional assistance (institutionalization of an input).
The following list is not exhaustive but contains the major points of view:
"Daly Rules" approach
University of Maryland School of Public Policy professor and former Chief Economist for the World Bank Herman E. Daly (working from theory initially developed by Romanian economist Nicholas Georgescu-Roegen and laid out in his 1971 opus "The Entropy Law and the Economic Process") suggested the following three operational rules defining the condition of ecological (thermodynamic) sustainability:
Renewable resources such as fish, soil, and groundwater must be used no faster than the rate at which they regenerate.
Nonrenewable resources such as minerals and fossil fuels must be used no faster than renewable substitutes for them can be put into place.
Pollution and wastes must be emitted no faster than natural systems can absorb them, recycle them, or render them harmless.
Some commentators have argued that the "Daly Rules", based on ecological theory and the laws of thermodynamics, should be considered implicit or foundational for the many other systems that are advocated, and are thus the most straightforward system for operationalization of the Brundtland definition. In this view, the Brundtland definition and the Daly Rules can be seen as complementary: Brundtland provides the ethical goal of non-depletion of natural capital, while Daly details parsimoniously how this ethic is operationalized in physical terms. The system is rationally complete, and in agreement with physical laws. Other definitions may thus be superfluous, or mere glosses on the immutable thermodynamic reality.
There are numerous other definitions and systems of operationalization for sustainability, and there has been competition for influence between them, with the unfortunate result that, in the minds of some observers at least, sustainability has no agreed-upon definition.
Natural Step approach
Following the Brundtland Commission's report, one of the first initiatives to bring scientific principles to the assessment of sustainability was by Swedish cancer scientist Karl-Henrik Robèrt. Robèrt coordinated a consensus process to define and operationalize sustainability. At the core of the process lies a consensus on what Robèrt came to call the natural step framework. The framework is based on a definition of sustainability, described as the system conditions of sustainability (as derived from System theory). In the natural step framework, a sustainable society is one that does not systematically increase concentrations of substances extracted from the Earth's crust or of substances produced by society, that does not degrade the environment, and in which people have the capacity to meet their needs worldwide.
Ecological footprint approach
Ecological footprint accounting, based on the biological concept of carrying capacity, tracks the amount of land and water area a human population demands for producing the biological resources the population consumes, for absorbing its waste, and for accommodating its built infrastructure, all under prevailing technology. This amount then is compared to available biocapacity, in the world or in that region. The biocapacity represents the area able to regenerate resources and assimilate waste. Global Footprint Network publishes every year results for all nations captured in UN statistics.
The algorithms of ecological footprint accounts have been used in combination with the emergy methodology (S. Zhao, Z. Li and W. Li 2005), and a sustainability index has been derived from the latter. They have also been combined with a measure of quality of life, for instance through the "Happy Planet Index" (HPI) calculated for 178 nations (Marks et al., 2006). The Happy Planet Index calculates how many happy life years each country is able to generate per global hectare of ecological footprint.
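The core ratio behind the Happy Planet Index, as described above, can be sketched as follows; the published index applies additional scaling and adjustments, so this is only an illustrative simplification with hypothetical inputs:

```python
# Core ratio behind the Happy Planet Index: happy life years generated per
# global hectare of ecological footprint. Inputs are hypothetical.

def happy_life_years(life_satisfaction_0_to_10, life_expectancy_years):
    return (life_satisfaction_0_to_10 / 10.0) * life_expectancy_years

def hpi_core_ratio(life_satisfaction, life_expectancy, footprint_gha_per_capita):
    return happy_life_years(life_satisfaction, life_expectancy) / footprint_gha_per_capita

print(hpi_core_ratio(7.0, 75.0, 4.0))   # happy life years per global hectare
```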
One of the striking conclusions to emerge from ecological footprint accounting is that it would be necessary to have 4 or 5 back-up planets engaged in nothing but agriculture for all those alive today to live a Western lifestyle. The footprint analysis is closely related to the I = PAT equation, which can itself be considered a metric.
Anthropological-cultural approach
Though sustainable development has become a concept that biologists and ecologists have measured from an eco-system point of view and that the business community has measured from a perspective of energy and resource efficiencies and consumption, the discipline of anthropology is itself founded on the concept of sustainability of human groups within ecological systems. At the basis of the definition of culture is whether a human group is able to transmit its values and continue several aspects of that lifestyle for at least three generations. The measurement of culture, by anthropologists, is itself a measure of sustainability and it is also one that has been codified by international agreements and treaties like the Rio Declaration of 1992 and the United Nations Declaration on the Rights of Indigenous Peoples to maintain a cultural group's choice of lifestyles within their lands and ecosystems.
Terralingua, an organization of anthropologists and linguists working to protect biocultural diversity, with a focus on language, has devised a set of measures with UNESCO for measuring the survivability of languages and cultures in given eco-systems.
The Lempert–Nguyen indicator of sustainable development, developed in 2008 by David Lempert and Hue Nhu Nguyen, is one that incorporates and integrates these cultural principles with international law.
Circles of Sustainability approach
A number of agencies including the UN Global Compact Cities Programme, World Vision and Metropolis have since 2010 begun using the Circles of Sustainability approach that sets up a four-domain framework for choosing appropriate indicators. Rather than designating the indicators that have to be used like most other approaches, it provides a framework to guide decision-making on what indicators are most useful. The framework is arranged around four domains - economics, ecology, politics and culture - which are then subdivided into seven analytically derived sub-domains for each domain. Indicators are linked to each sub-domain. By choosing culture as one of its key domains, the approach takes into account the emphasis of the 'Anthropological' approach (above), but retains a comprehensive sense of sustainability. The approach can be used to map any other sustainability indicator set. This is foundationally different from the Global Reporting Initiative Index (below) which uses a triple-bottom-line organizing framework, and is most relevant to corporate reporting.
Global Reporting Initiative
In 1997 the Global Reporting Initiative (GRI) was started as a multi-stakeholder process and independent institution whose mission has been "to develop and disseminate globally applicable Sustainability Reporting Guidelines". The GRI uses ecological footprint analysis and became independent in 2002. It is an official collaborating centre of the United Nations Environment Programme (UNEP) and during the tenure of Kofi Annan, it cooperated with the UN Secretary-General's Global Compact.
Energy, Emergy and Sustainability Index
In 1956 Dr. Howard T. Odum of the University of Florida coined the term Emergy and devised the accounting system of embodied energy.
In 1997, systems ecologists M.T. Brown and S. Ulgiati published their formulation of a quantitative Sustainability Index (SI) as a ratio of the emergy (spelled with an "m", i.e. "embodied energy", not simply "energy") yield ratio (EYR) to the environmental loading ratio (ELR). Brown and Ulgiati also called the sustainability index the "Emergy Sustainability Index" (ESI), "an index that accounts for yield, renewability, and environmental load. It is the incremental emergy yield compared to the environmental load".
Sustainability Index (ESI) = emergy yield ratio (EYR) / environmental loading ratio (ELR)
NOTE: "Emergy" is spelled with an "m" and is an abbreviation of the term "embodied energy". The numerator is therefore the emergy yield ratio, NOT an "energy yield ratio", which is a different concept.
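A minimal sketch of the ratio defined above, with hypothetical input values:

```python
# Emergy Sustainability Index: ratio of the emergy yield ratio (EYR)
# to the environmental loading ratio (ELR). Inputs are hypothetical.

def emergy_sustainability_index(eyr, elr):
    """ESI = EYR / ELR; higher values indicate more yield per unit of environmental load."""
    return eyr / elr

print(emergy_sustainability_index(eyr=5.0, elr=2.0))   # 2.5
```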
Writers like Leone (2005) and Yi et al. have also recently suggested that the emergy sustainability index has significant utility. In particular, Leone notes that while the GRI measures behavior, it fails to calculate the supply constraints that the emergy methodology aims to capture.
Environmental Sustainability Index
In 2004, a joint initiative of the Yale Center for Environmental Law and Policy (YCELP) and the Center for International Earth Science Information Network (CIESIN) of Columbia University, in collaboration with the World Economic Forum and the Directorate-General Joint Research Centre (European Commission) also attempted to construct an Environmental Sustainability Index (ESI).
This was formally released in Davos, Switzerland, at the annual meeting of the World Economic Forum (WEF) on 28 January 2005. The report on this index made a comparison of the WEF ESI to other sustainability indicators such as the Ecological footprint Index. However, there was no mention of the emergy sustainability index.
IISD Sample Policy Framework
In 1996 the International Institute for Sustainable Development (IISD) developed a Sample Policy Framework, which proposed that a sustainability index "...would give decision-makers tools to rate policies and programs against each other" (1996, p. 9). Ravi Jain (2005) argued that, "The ability to analyze different alternatives or to assess progress towards sustainability will then depend on establishing measurable entities or metrics used for sustainability."
Sustainability dashboard
The International Institute for Sustainable Development has produced a "Dashboard of Sustainability", "a free, non-commercial software package that illustrates the complex relationships among economic, social and environmental issues". This is based on the Sustainable Development Indicators prepared for the United Nations Division for Sustainable Development (UN-DSD), December 2005.
WBCSD approach
The World Business Council for Sustainable Development (WBCSD), founded in 1995, has formulated the business case for sustainable development and argues that "sustainable development is good for business and business is good for sustainable development". This view is also maintained by proponents of the concept of industrial ecology. The theory of industrial ecology declares that industry should be viewed as a series of interlocking man-made ecosystems interfacing with the natural global ecosystem.
According to some economists, it is possible for the concepts of sustainable development and competitiveness to merge if enacted wisely, so that there is not an inevitable trade-off. This merger is motivated by the following six observations (Hargroves & Smith 2005):
Throughout the economy there are widespread untapped potential resource productivity improvements to be made to be coupled with effective design.
There has been a significant shift in understanding over the last three decades of what creates lasting competitiveness of a firm.
There is now a critical mass of enabling technologies in eco-innovations that make integrated approaches to sustainable development economically viable.
Since many of the costs of what economists call ‘environmental externalities’ are passed on to governments, sustainable development strategies can, in the long term, provide multiple benefits to the taxpayer.
There is a growing understanding of the multiple benefits of valuing social and natural capital, for both moral and economic reasons, and including them in measures of national well-being.
There is mounting evidence that a transition to a sustainable economy, if done wisely, may not harm economic growth significantly; in fact, it could even help it. Recent research by ex-Wuppertal Institute member Joachim Spangenberg, working with neo-classical economists, shows that the transition, if focused on improving resource productivity, leads to higher economic growth than business as usual, while at the same time reducing pressures on the environment and enhancing employment.
Life-cycle assessment
Life-cycle assessment is a "composite measure of sustainability." It analyses the environmental performance of products and services through all phases of their life cycle: extracting and processing raw materials; manufacturing, transportation and distribution; use, re-use, maintenance; recycling, and final disposal.
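To make the aggregation step concrete, the following minimal sketch sums a single impact category over the life-cycle phases listed above. The phase names follow this section's list, while the numeric values are purely hypothetical and do not come from any actual assessment.

```python
# Minimal sketch of aggregating one impact category across life-cycle phases.
# All values are hypothetical and used only to illustrate the aggregation step.

phases_kg_co2e = {
    "raw material extraction and processing": 12.0,
    "manufacturing": 8.5,
    "transportation and distribution": 3.2,
    "use, re-use and maintenance": 20.0,
    "recycling and final disposal": 1.8,
}

total = sum(phases_kg_co2e.values())
for phase, value in phases_kg_co2e.items():
    share = 100 * value / total
    print(f"{phase}: {value:.1f} kg CO2e ({share:.0f}% of life-cycle total)")
print(f"Life-cycle total: {total:.1f} kg CO2e")
```

In a full assessment each phase would itself be built up from inventories of material and energy flows, and several impact categories would be tracked in parallel rather than the single one shown here.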
Sustainable enterprise approach
Building on the work of the World Business Council for Sustainable Development, businesses began to see the needs of environmental and social systems as opportunities for business development and contribution to stakeholder value. This approach has manifested itself in three key areas of strategic intent: 'sustainable innovation', human development, and 'bottom of the pyramid' business strategies. Now, as businesses have begun the shift toward sustainable enterprise, many business schools are leading the research and education of the next generation of business leaders. Companies have introduced key development indicators to set targets and track progress on sustainable development. Some key players are:
Center for Sustainable Global Enterprise, Cornell University
Center for Sustainable Enterprise, Stuart School of Business, Illinois Institute of Technology
Erb Institute, Ross School of Business, University of Michigan
William Davidson Institute, Ross School of Business, University of Michigan
Center for Sustainable Enterprise, University of North Carolina, Chapel Hill
Community Enterprise System, NABARD–XIMB Sustainability Trust, Center for Case Research, Xavier Institute of Management, Bhubaneswar
Sustainable livelihoods approach
Another application of the term sustainability has been in the Sustainable Livelihoods Approach, developed from conceptual work by Amartya Sen, and the UK's Institute for Development Studies. This was championed by the UK's Department for International Development (DFID), UNDP, Food and Agriculture Organization (FAO) as well as NGOs such as CARE, OXFAM and the African Institute for Community-Driven Development, Khanya-aicdd. Key concepts include the Sustainable Livelihoods (SL) Framework, a holistic way of understanding livelihoods, the SL principles, as well as six governance issues developed by Khanya-aicdd. A wide range of information resources on Sustainable Livelihoods Approaches can be found at Livelihoods Connect.
Some analysts view this measure with caution because they believe that it has a tendency to take one part of the footprint analysis and the I = PAT equation (productivity) and to focus on the sustainability of economic returns to an economic sector rather than on the sustainability of the entire population or culture.
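For reference, the I = PAT identity mentioned above is conventionally written as follows; the symbols and their usual readings are the standard formulation of the identity rather than anything specific to the critique described here.

```latex
% The I = PAT identity (standard formulation):
%   I = environmental impact
%   P = population
%   A = affluence (consumption per capita)
%   T = technology (impact per unit of consumption)
I = P \times A \times T
```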
FAO types of sustainability
The United Nations Food and Agriculture Organization (FAO) has identified considerations for technical cooperation that affect three types of sustainability:
Institutional sustainability. Can a strengthened institutional structure continue to deliver the results of technical cooperation to end users? The results may not be sustainable if, for example, the planning authority that depends on the technical cooperation loses access to top management, or is not provided with adequate resources after the technical cooperation ends. Institutional sustainability can also be linked to the concept of social sustainability, which asks how the interventions can be sustained by social structures and institutions;
Economic and financial sustainability. Can the results of technical cooperation continue to yield an economic benefit after the technical cooperation is withdrawn? For example, the benefits from the introduction of new crops may not be sustained if the constraints to marketing the crops are not resolved. Similarly, economic, as distinct from financial, sustainability may be at risk if the end users continue to depend on heavily subsidized activities and inputs.
Ecological sustainability. Are the benefits to be generated by the technical cooperation likely to lead to a deterioration in the physical environment, thus indirectly contributing to a fall in production or in the well-being of the groups targeted and their society?
Some ecologists have emphasised a fourth type of sustainability:
Energetic sustainability. This type of sustainability is often concerned with the production of energy and mineral resources. Some researchers have pointed to trends they say document the limits of production. See Hubbert peak for example.
"Development sustainability" approaches
Sustainability is relevant to international development projects. One definition of development sustainability is "the continuation of benefits after major assistance from the donor has been completed" (Australian Agency for International Development 2000). Ensuring that development projects are sustainable can reduce the likelihood of their collapsing shortly after completion; it also reduces the financial cost of development projects and the subsequent social problems, such as dependence of the stakeholders on external donors and their resources. All development assistance, apart from temporary emergency and humanitarian relief efforts, should be designed and implemented with the aim of achieving sustainable benefits. There are ten key factors that influence development sustainability.
Participation and ownership. Get the stakeholders (men and women) to genuinely participate in design and implementation. Build on their initiatives and demands. Get them to monitor the project and periodically evaluate it for results.
Capacity building and training. Training stakeholders to take over should begin from the start of any project and continue throughout. The right approach should both motivate and transfer skills to people.
Government policies. Development projects should be aligned with local government policies.
Financial. In some countries and sectors, financial sustainability is difficult in the medium term. Training in local fundraising is a possibility, as is identifying links with the private sector, charging for use, and encouraging policy reforms.
Management and organization. Activities that integrate with or add to local structures may have better prospects for sustainability than those that establish new or parallel structures.
Social, gender and culture. The introduction of new ideas, technologies and skills requires an understanding of local decision-making systems, gender divisions and cultural preferences.
Technology. All outside equipment must be selected with careful consideration given to the local finance available for maintenance and replacement. Cultural acceptability and the local capacity to maintain equipment and buy spare parts are vital.
Environment. Poor rural communities that depend on natural resources should be involved in identifying and managing environmental risks. Urban communities should identify and manage waste disposal and pollution risks.
External political and economic factors. In a weak economy, projects should not be too complicated, ambitious or expensive.
Realistic duration. A short project may be inadequate for solving entrenched problems in a sustainable way, particularly when behavioural and institutional changes are intended. A long project, on the other hand, may promote dependence.
The definition of sustainability as "the continuation of benefits after major assistance from the donor has been completed" (Australian Agency for International Development 2000) is echoed by other definitions (World Bank, USAID). The concept has, however, evolved as it has become of interest to non-grant-making institutions. Sustainability in development refers to processes and relative increases in local capacity and performance while foreign assistance decreases or shifts (not necessarily disappears). The objective of sustainable development is open to various interpretations.
See also
Geographic information science
Geographic information systems
Infrastructure Sustainability Council of Australia, developer of an infrastructure sustainability rating system
Intergovernmental Panel on Climate Change
Land footprint
Representation theory
Stern Review
References
Development economics
Spiral Dynamics
Spiral Dynamics (SD) is a model of the evolutionary development of individuals, organizations, and societies. It was initially developed by Don Edward Beck and Christopher Cowan based on the emergent cyclical theory of Clare W. Graves, combined with memetics. A later collaboration between Beck and Ken Wilber produced Spiral Dynamics Integral (SDi). Several variations of Spiral Dynamics continue to exist, both independently and incorporated into or drawing on Wilber's Integral theory. Spiral Dynamics has applications in management theory and business ethics, and as an example of applied memetics. However, it lacks mainstream academic support.
Overview
Spiral Dynamics describes how value systems and worldviews emerge from the interaction of "life conditions" and the mind's capacities. The emphasis on life conditions as essential to the progression through value systems is unusual among similar theories, and leads to the view that no level is inherently positive or negative, but rather is a response to the local environment, social circumstances, place and time. Through these value systems, groups and cultures structure their societies and individuals integrate within them. Each distinct set of values is developed as a response to solving the problems of the previous system. Changes between states may occur incrementally (first order change) or in a sudden breakthrough (second order change). The value systems develop in a specific order, and the most important question when considering the value system being expressed in a particular behavior is why the behavior occurs.
Overview of the levels
Development of the theory
University of North Texas (UNT) professor Don Beck sought out Union College psychology professor Clare W. Graves after reading about his work in The Futurist. They met in person in 1975, and Beck, soon joined by UNT faculty member Chris Cowan, worked closely with Graves until his death in 1986. Beck made over 60 trips to South Africa during the 1980s and 1990s, applying Graves's emergent cyclical theory in various projects. This experience, along with others Beck and Cowan had applying the theory in North America, motivated the development of Spiral Dynamics.
Beck and Cowan first published their extension and adaptation of Graves's emergent cyclical theory in Spiral Dynamics: Mastering Values, Leadership, and Change (Exploring the New Science of Memetics) (1996). They introduced a simple color-coding for the eight value systems identified by Graves (and a predicted ninth) which is better known than Graves's letter pair identifiers. Additionally, Beck and Cowan integrated ideas from the field of memetics as created by Dawkins and further developed by Csikszentmihalyi, identifying memetic attractors for each of Graves's levels. These attractors, which they called "VMemes", are said to bind memes into cohesive packages which structure the world views of both individuals and societies.
Diversification of views
While Spiral Dynamics began as a single formulation and extension of Graves's work, a series of disagreements and shifting collaborations have produced three distinct approaches. By 2010, these had settled as Christopher Cowan and Natasha Todorovic advocating their trademarked "SPIRAL DYNAMICS®" as fundamentally the same as Graves's emergent cyclical theory, Don Beck advocating Spiral Dynamics Integral (SDi) with a community of practice around various chapters of his Centers for Human Emergence, and Ken Wilber subordinating SDi to his similarly, but not identically, colored Integral AQAL "altitudes", with a greater focus on spirituality.
This state of affairs has led to practitioners noting the "lineage" of their approach in publications.
Timeline
The following timeline shows the development of the various Spiral Dynamics factions and the major figures involved in them, as well as the initial work done by Graves. Splits and changes between factions are based on publications or public announcements, or approximated to the nearest year based on well-documented events.
Notable publications are listed below, along with a few other significant events:
1966: Graves: first major publication (in the Harvard Business Review)
1970: Graves: peer reviewed publication in Journal of Humanistic Psychology
1974: Graves: article in The Futurist (Beck first becomes aware of Graves's theory; Cowan a bit later)
1977: Graves abandons manuscript of what would later become The Never Ending Quest
1979: Beck and Cowan found National Values Center, Inc. (NVC)
1981: Beck and Cowan resign from UNT to work with Graves; Beck begins applying theory in South Africa
1986: Death of Clare Graves
1995: Wilber: Sex, Ecology, Spirituality (introduces quadrant model, first mention of Graves's ECLET)
1996: Beck and Cowan: Spiral Dynamics: Mastering Values, Leadership, and Change
1998: Cowan and Todorovic form NVC Consulting (NVCC) as an "outgrowth" of NVC
1998: Cowan files for "Spiral Dynamics" service mark, registered to NVC
1999: Beck (against SD as service mark) and Cowan (against Wilber's Integral theory) cease collaborating
1999: Wilber: The Collected Works of Ken Wilber, Vol. 4: Integral Psychology (first Spiral Dynamics reference)
2000: Cowan and Todorovic: "Spiral Dynamics: The Layers of Human Values in Strategy" in Strategy & Leadership (peer reviewed)
2000: Wilber: A Theory of Everything (integrates SD with AQAL, defines MGM: "Mean Green Meme")
2000: Wilber founds the Integral Institute with Beck as a founding associate around this time
2002: Beck: "SDi: Spiral Dynamics in the Integral Age" (launches SDi as a brand)
2002: Todorovic: "The Mean Green Hypothesis: Fact or Fiction?" (refutes MGM)
2002: Graves; William R. Lee (annot.); Cowan and Todorovic (eds.): Levels of Human Existence, transcription of Graves's 1971 three-day seminar
2004: Beck founds the Center for Human Emergence (CHE)
2005: Beck, Elza S. Maalouf and Said E. Dawlabani found the Center for Human Emergence Middle East
2005: Graves; Cowan and Todorovic (eds.): The Never Ending Quest
2005: Beck and Wilber cease collaborating around this time, disagreeing on Wilber's changes to SDi
2006: Wilber: Integral Spirituality (adds altitudes colored to align with both SDi and chakras)
2009: NVC dissolved as business entity, original SD service mark (officially registered to NVC) canceled
2010: Cowan and Todorovic re-file for SD service mark and trademark, registered to NVC Consulting
2015: Death of Chris Cowan
2017: Wilber: Religion of Tomorrow (further elaborates on the altitude concept and coloring)
2018: Beck et al.: Spiral Dynamics in Action
2022: Death of Don Beck
Cowan and Todorovic's "Spiral Dynamics"
Chris Cowan's decision to trademark "Spiral Dynamics" in the US and form a consulting business with Natasha Todorovic contributed to the split between Beck and him in 1999. Cowan and Todorovic subsequently published an article on Spiral Dynamics in the peer-reviewed journal Strategy & Leadership, edited and published Graves's unfinished manuscript, and generally took the position that the distinction between Spiral Dynamics and Graves's ECLET is primarily one of terminology. Holding this view, they opposed interpretations seen as "heterodox."
In particular, Cowan and Todorovic's view of Spiral Dynamics stands in opposition to that of Ken Wilber. Wilber biographer Frank Visser describes Cowan as a "strong" critic of Wilber and his Integral theory, particularly the concept of a "Mean Green Meme." Todorovic produced a paper arguing that research refutes the existence of the "Mean Green Meme" as Beck and particularly Wilber described it.
Beck's "Spiral Dynamics integral" (SDi)
By early 2000, Don Beck was corresponding with integral philosopher Ken Wilber about Spiral Dynamics and using a "4Q/8L" diagram combining Wilber's four quadrants with the eight known levels of Spiral Dynamics. Beck officially announced SDi as launching on January 1, 2002, aligning Spiral Dynamics with integral theory and additionally citing the influence of John Petersen of the Arlington Institute and Ichak Adizes. By 2006, Wilber had introduced a slightly different color sequence for his AQAL "altitudes", diverging from Beck's SDi and relegating it to the values line, which is one of many lines within AQAL.
Later influences on SDi include the work of Muzafer Sherif and Carolyn Sherif in the fields of realistic conflict and social judgment, specifically their Assimilation Contrast Effect model and Robbers Cave study.
SD/SDi and Ken Wilber's Integral Theory
Ken Wilber briefly referenced Graves in his 1986 book (with Jack Engler and Daniel P. Brown) Transformations of Consciousness, and again in 1995's Sex, Ecology, Spirituality which also introduced his four quadrants model. However, it was not until the "Integral Psychology" section of 1999's Collected Works: Volume 4 that he integrated Gravesian theory, now in the form of Spiral Dynamics. Beck and Wilber began discussing their ideas with each other around this time.
AQAL "altitudes"
By 2006, Wilber was using SDi only for the values line, one of many lines in his All Quadrants, All Levels/Lines (AQAL) framework. In the book Integral Spirituality published that year, he introduced the concept of "altitudes" as an overall "content-free" system to correlate developmental stages across all of the theories on all of the lines integrated by AQAL.
The altitudes used a set of colors that were ordered according to the rainbow, which Wilber explained was necessary to align with color energies in the tantric tradition. This left only Red, Orange, Green, and Turquoise in place, changing all of the other colors to greater or lesser degrees. Furthermore, where Spiral Dynamics theorizes that the 2nd tier would have six stages repeating the themes of the six stages of the 1st tier, in the altitude system the 2nd tier contains only two levels (corresponding to the first two SD 2nd tier levels) followed by a 3rd tier of four spiritually-oriented levels inspired by the work of Sri Aurobindo. Beck and Cowan each consider this 3rd tier to be non-Gravesian.
Wilber critic Frank Visser notes that while Wilber gives a correspondence of his altitude colors to chakras, his correspondence does not actually match any traditional system for coloring chakras, despite Wilber's assertion that using the wrong colors would "backfire badly when any actual energies were used." He goes on to note that Wilber's criticism of the SD colors as "inadequate" ignores that they were not intended to correlate with any system such as chakras. In this context, Visser expresses sympathy for Beck and Cowan's dismay over what Visser describes as "vandalism" regarding the color scheme, concluding that the altitude colors are an "awkward hybrid" of the SD and rainbow/chakra color systems, both lacking the expressiveness of the former and failing to accurately correlate with the latter.
Criticism and limitations
As an extension of Graves's theory, most criticisms of that theory apply to Spiral Dynamics as well. Likewise, to the extent that Spiral Dynamics Integral incorporates Ken Wilber's integral theory, criticism of that theory, and the lack of mainstream academic support for it are also relevant.
In addition, there have been criticisms of various aspects of SD and/or SDi that are specific to those extensions. Nicholas Reitter, writing in the Journal of Conscious Evolution, observes:
On the other hand, the SD authors seem also to have magnified some of the weaknesses in Graves' approach. The occasional messianism, unevenness of presentation and constant business-orientation of Graves' (2005) manuscript is transmuted in the SD authors' book (Beck and Cowan 1996) into a sometimes-bewildering array of references to world history, pop culture and other topics, often made in helter-skelter fashion.
Spiral Dynamics has been criticized by some as appearing to be like a cult, with undue prominence given to the business and intellectual property concerns of its leading advocates.
Metamodernists Daniel Görtz and Emil Friis, writing as Hanzi Freinacht, who created a multi-part system combining aspects of SD with other developmental measurements, dismissed the Turquoise level, saying that while there will eventually be another level, it does not currently exist. They argue that attempts to build Turquoise communities are likely to lead to the development of "abusive cults".
Psychologist Keith Rice, discussing his application of SDi in individual psychotherapy, notes that it encounters limitations in accounting for temperament and the unconscious. However, regarding SDi's "low profile among academics," he notes that it can easily be matched to more well-known models "such as Maslow, Loevinger, Kohlberg, Adorno, etc.," in order to establish trust with clients.
Influence and applications
Spiral Dynamics has influenced management theory, which was the primary focus of the 1996 Spiral Dynamics book. John Mackey and Rajendra Sisodia write that the vision and values of conscious capitalism as they articulate it are consistent with the "2nd tier" VMEMES of Spiral Dynamics. Rica Viljoen's case study of economic development in Ghana demonstrates how understanding the Purple VMEME allows for organizational storytelling that connects with diverse (non-Western) worldviews.
Spiral Dynamics has also been noted as an example of applied memetics. In his chapter, "'Meme Wars': A Brief Overview of Memetics and Some Essential Context" in the peer-reviewed book Memetics and Evolutionary Economics, Michael P. Schlaile includes Spiral Dynamics in the "organizational memetics" section of his list of "enlightening examples of applied memetics." Schlaile also notes Said Dawlabani's SDi-based "MEMEnomics" as an alternative to his own "economemetics" in his chapter examining memetics and economics in the same book. Elza Maalouf argues that SDi provides a "memetic" interpretation of non-Western cultures that Western NGOs often lack, focusing attention on the "indigenous content" of the culture's value system.
One of the main applications of Spiral Dynamics is to inform more nuanced and holistic systems-change strategies. Like the categories in any other framework, the various levels can be seen as memetic lenses through which to look at the world, helping those leading change take a bird's-eye view of the diverse perspectives on a single topic. At best, Spiral Dynamics can help synthesize these perspectives, recognize the strength in a diversity of worldviews, and inform interventions that take into consideration the needs and values of individuals at every level of the spiral.
Spiral Dynamics continues to influence integral philosophy and spirituality, and the developmental branch of metamodern philosophy. Both integralists and metamodernists connect their philosophies to SD's Yellow VMEME. Integralism also identifies with Turquoise and eventually added further stages not found in SD or SDi, while metamodernism dismisses Turquoise as nonexistent.
SDi has also been referenced in the fields of education, urban planning, and cultural analysis.
Notes
Works cited
(Note on page ii: "This study was approved by Indiana University Institutional Review Board (IRB)." Note also that a previous report was published as: Nasser, Ilham (June 2020). "Mapping the Terrain of Education 2018–2019: A Summary Report". Journal of Education in Muslim Societies. Indiana University Press. 1 (2): 3–21. doi:10.2979/jems.1.2.08, but is not freely downloadable.)
Developmental psychology
Decriminalization
Decriminalization or decriminalisation is the legislative process which removes prosecutions against an action so that the action remains illegal but has no criminal penalties or at most some civil fine. This reform is sometimes applied retroactively but otherwise comes into force from either the enactment of the law or from a specified date. In some cases regulated permits or fines may still apply (for contrast, see: legalization), and associated aspects of the original criminalized act may remain or become specifically classified as crimes. The term was coined by anthropologist Jennifer James to express sex workers' movements' "goals of removing laws used to target prostitutes", although it is now commonly applied to drug policies. The reverse process is criminalization.
Decriminalization reflects changing social and moral views. A society may come to the view that an act is not harmful, should no longer be criminalised, or is otherwise not a matter to be addressed by the criminal justice system. Examples of subject matter which have been the subject of changing views on criminality over time in various societies and countries include:
Abortion (see: abortion law and abortion-rights movements)
Breastfeeding in public
Drug possession, and recreational drug use (see: drug liberalization)
Euthanasia (see: legality of euthanasia)
Gambling (see: gambling age)
Homosexuality (see: decriminalization of homosexuality and LGBT rights by country or territory)
Polygamy (see: legality of polygamy)
Prostitution (see: decriminalization of sex work)
Public nudity
Steroid use in sport
Suicide (see: suicide legislation)
In a federal country, acts may be decriminalized by one level of government while still subject to penalties levied by another; for example, possession of a decriminalized drug may still be subject to criminal charges by one level of government, but another may yet impose a monetary fine. This should be contrasted with legalization, which removes all or most legal detriments from a previously illegal act.
Drug-use decriminalization topics
Colorado Amendment 64
Law Enforcement Against Prohibition
Legal history of cannabis in the United States
Legality of cannabis
Marijuana Policy Project
Psilocybin decriminalization in the United States
Responsible drug use
Timeline of cannabis law
War on Drugs
See also
Alcohol prohibition
Decriminalization of sex work
Drug liberalization
Drug policy of the Soviet Union
Legal issues of anabolic steroids
Legalization
Liberalization
Prostitution in Belgium
Prostitution in New Zealand
Public-order crime
Sex worker
Sodomy law
Timeline of LGBT history
Unenforced law
Victimless crime
References
Criminal law legal terminology
Drug policy reform
Justice
Libertarian theory
Public policy
Philosophy of law
Andrology
Andrology (from Greek ἀνήρ, anēr, genitive ἀνδρός, andros, 'man', and -λογία, -logia) is a name for the medical specialty that deals with male health, particularly relating to the problems of the male reproductive system and urological problems that are unique to men. It is the counterpart to gynecology, which deals with medical issues which are specific to female health, especially reproductive and urologic health.
Process
Andrology covers anomalies in the connective tissues pertaining to the genitalia, as well as changes in the volume of cells, such as in genital hypertrophy or macrogenitosomia.
From reproductive and urologic viewpoints, male-specific medical and surgical procedures include vasectomy, vasovasostomy (one of the vasectomy reversal procedures), orchidopexy, circumcision, sperm/semen cryopreservation, surgical sperm retrieval, semen analysis (for fertility or post-vasectomy), and sperm preparation for assisted reproductive technology (ART), as well as intervention to deal with male genitourinary disorders.
History
Unlike gynaecology, which has a plethora of medical board certification programs worldwide, andrology has none. Andrology has only been studied as a distinct specialty since the late 1960s: the first specialist journal on the subject was the German periodical Andrologie (now called Andrologia), published from 1969 onwards. The next specialty journal covering both the basic and clinical andrology was the International Journal of Andrology, established in 1978, which became the official journal of the European Academy of Andrology in 1992. In 1980 the American Society of Andrology launched the Journal of Andrology. In 2012, these two society journals merged into one premier journal in the field, named Andrology, with the first issue published in January 2013.
See also
Men's health
Reproductive health
Urology
References
External links
American Society of Andrology
European Academy of Andrology
British Andrology Society
International Society of Andrology
Andrology Australia
Andrology
In-situ conservation
In-situ conservation is the on-site conservation or the conservation of genetic resources in natural populations of plant or animal species, such as forest genetic resources in natural populations of tree species. This process protects the inhabitants and ensures the sustainability of the environment and ecosystem.
Its converse is ex situ conservation, where threatened species are moved to another location. These can include places like seed libraries, gene banks and more where they are protected through human intervention.
Methods
Nature reserves
Nature reserves (or biosphere reserves) cover very large areas, often more than 5,000 km2. They are used to protect species for a long time. There are three different classifications for these reserves:
Strict Natural Areas
Managed Natural Areas
Wilderness Areas
Strict natural areas are created to protect the state of nature in a given region, not to protect any particular species within their limits. Managed natural areas, by contrast, are made specifically to protect a certain species or community that would be at risk in a strict natural area; they are more controlled environments, designed to provide the optimal habitat for the species concerned to thrive. Finally, a wilderness area serves the dual purpose of protecting the natural region and providing recreational opportunities for visitors (excluding motorized transport).
National parks
A national park is an area dedicated to the conservation of wildlife along with its environment, and to the preservation of scenery and of natural and historical objects. It is usually a small reserve covering an area of about 100 to 500 square kilometers. One or more national parks may also exist within a biosphere reserve.
Wildlife sanctuaries
Wildlife sanctuaries can provide a higher quality of life for animals who are moved there. These animals are placed in specialized habitats that allows for more species-specific behaviors to take place. Wildlife sanctuaries are often used for animals that have been in zoos, circuses, laboratories and more for a long time, and then live the rest of their lives with greater autonomy in these habitats.
Biodiversity hotspots
Several international organizations focus their conservation work on areas designated as biodiversity hotspots.
According to Conservation International, to qualify as a biodiversity hotspot a region must meet two strict criteria:
it must contain at least 1,500 species of vascular plants (> 0.5% of the world's total) as endemics,
it has to have lost at least 70% of its original habitat.
Biodiversity hotspots make up 1.4% of the Earth's land area, yet they contain more than half of our planet's species.
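Expressed as a simple check, Conservation International's two thresholds combine as in the sketch below; the function name and the example inputs are hypothetical and only illustrate how the criteria interact, since real designations rest on detailed botanical and land-cover assessments.

```python
# Hypothetical check of Conservation International's two hotspot criteria.
# Example inputs are made up; real assessments use detailed field and remote-sensing data.

def qualifies_as_hotspot(endemic_vascular_plants: int, original_habitat_lost_pct: float) -> bool:
    """Return True only if both hotspot criteria are met."""
    enough_endemics = endemic_vascular_plants >= 1500         # at least 1,500 endemic vascular plant species
    enough_habitat_loss = original_habitat_lost_pct >= 70.0   # at least 70% of original habitat lost
    return enough_endemics and enough_habitat_loss

# Illustrative regions (made-up figures):
print(qualifies_as_hotspot(endemic_vascular_plants=2100, original_habitat_lost_pct=85.0))  # True
print(qualifies_as_hotspot(endemic_vascular_plants=900, original_habitat_lost_pct=90.0))   # False
```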
Gene sanctuary
A gene sanctuary is an area where plants are conserved. It includes both biosphere reserves and national parks. Biosphere reserves are developed to be both a place for biodiversity conservation and a site of sustainable development. The concept was first developed in the 1970s and includes core, buffer and transition zones, which act together to harmonize the conservation and development aspects of the biosphere reserve.
As of 2004, 30 years after the invention of the biosphere reserve concept, about 459 such conservation areas had been developed in 97 countries.
Benefits
One benefit of in situ conservation is that it maintains recovering populations in the environment where they have developed their distinctive properties. Another benefit is that this strategy helps ensure the ongoing processes of evolution and adaptation within their environments. As a last resort, ex situ conservation may be used on some or all of the population, when in situ conservation is too difficult or impossible. Species also remain adjusted to natural disturbances such as drought, floods and forest fires, and the method is relatively cheap and convenient.
Reserves
Wildlife and livestock conservation involves the protection of wildlife habitats. Sufficiently large reserves must be maintained to enable the target species to exist in large numbers. The population size must be sufficient to enable the necessary genetic diversity to survive, so that it has a good chance of continuing to adapt and evolve over time. This reserve size can be calculated for target species by examining the population density in naturally occurring situations. The reserves must then be protected from intrusion or destruction by man, and against other catastrophes.
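A back-of-the-envelope version of the reserve-size calculation described above might look like the following sketch; the density figure, the minimum viable population and the safety factor are all assumed values used only for illustration, and actual reserve design also weighs habitat quality, connectivity and edge effects.

```python
# Rough sketch: estimate reserve area from natural population density
# and an assumed minimum viable population. All numbers are hypothetical.

def required_reserve_area_km2(minimum_viable_population: int,
                              natural_density_per_km2: float,
                              safety_factor: float = 1.5) -> float:
    """Area needed for the target species to persist at its naturally observed density."""
    core_area = minimum_viable_population / natural_density_per_km2
    return core_area * safety_factor  # margin for disturbance, edge effects, etc.

# Example: a species observed at roughly 0.2 individuals per km2,
# with an assumed minimum viable population of 500 individuals.
print(f"{required_reserve_area_km2(500, 0.2):.0f} km2")  # about 3750 km2
```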
Agriculture
In agriculture, in situ conservation techniques are an effective way to improve, maintain, and use traditional or native varieties of agricultural crops. Such methodologies link the positive output of scientific research with farmers' experience and field work.
First, the accessions of a variety stored at a germplasm bank and those of the same variety multiplied by farmers are jointly tested in the producers' fields and in the laboratory, under different situations and stresses. Thus, the scientific knowledge about the production characteristics of the native varieties is enhanced. Later, the best-tested accessions are crossed, mixed, and multiplied under replicable conditions. Finally, these improved accessions are supplied to the producers. Thus, farmers are able to grow improved selections of their own varieties, instead of being lured to substitute their own varieties with commercial ones or to abandon their crop. This technique of conservation of agricultural biodiversity is more successful in marginal areas, where commercial varieties are not expedient due to climate and soil fertility constraints, or where the taste and cooking characteristics of traditional varieties compensate for their lower yields.
In India
About 4% of the total geographical area of India is used for in situ conservation.
There are 18 biosphere reserves in India, including Nanda Devi in Uttarakhand, Nokrek in Meghalaya, Manas National Park in Assam and Sundarban in West Bengal.
There are 106 national parks in India, including Kaziranga National Park, which conserves the one-horned rhinoceros; Periyar National Park, conserving the tiger and elephant; and Ranthambore National Park, conserving the tiger.
There are 551 wildlife sanctuaries in India.
Biodiversity hotspots include the Himalayas, the Western Ghats, the Indo-Burma region and the Sundaland.
India has set up its first gene sanctuary in the Garo Hills of Meghalaya for wild relatives of citrus. Efforts are also being made to set up gene sanctuaries for banana, sugarcane, rice and mango.
Community reserves were established as a type of protected area in India in the Wildlife Protection Amendment Act 2002, to provide legal support to community or privately owned reserves which cannot be designated as national park or wildlife sanctuary.
Sacred groves are tracts of forest set aside where all the trees and wildlife within are venerated and given total protection.
In China
China has up to 2,538 nature reserves, which cover 15% of the entire country.
The majority of in situ conservation areas are concentrated in the regions of Tibet, Qinghai and Xinjiang. These provinces, all in western China, take up about 56% of the nature reserves in the country.
Eastern and southern China contain 90% of the country's population, and in these areas there are few nature reserves. In these regions, nature reserves actively compete with human development projects to support a growing demand for infrastructure. One consequence of this competing development has been the movement of the South China tiger out of its natural habitat.
In eastern and southern China many natural landscapes that remain undeveloped are fragmented; however nature reserves may provide crucial refuge for key species as well as ecosystem services.
See also
Arid Forest Research Institute
Biodiversity
Food plot – the practice of planting crops specifically to support wildlife
Genetic erosion
Habitat corridor
Habitat fragmentation
Refuge (ecology)
Reintroduction
Regional Red List
Restoration ecology
Wildlife corridor
References
Further reading
External links
In-Situ Conservation, The Convention on Biological Diversity
Ex-Situ Conservation, The Convention on Biological Diversity
IUCN/SSC Re-introduction Specialist Group
IUCN Red List of Threatened Species
The Convention on Biological Diversity
In situ conservation
Guidelines: In vivo conservation of animal genetic resources, Food and Agriculture Organization of the UN
Conservation biology
Ecological restoration
Environmental design
Environmental conservation
Geoinformatics
Geoinformatics is a scientific field primarily within the domains of computer science and technical geography. It focuses on the programming of applications, spatial data structures, and the analysis of objects and space-time phenomena related to the surface and subsurface of Earth and other celestial bodies. The field develops software and web services to model and analyse spatial data, serving the needs of geosciences and related scientific and engineering disciplines. The term is often used interchangeably with geomatics, although the two have distinct focuses; geomatics emphasizes acquiring spatial knowledge and leveraging information systems, not their development. At least one publication has claimed the discipline is pure computer science outside the realm of geography.
Overview
In a general sense, geoinformatics can be understood as "a variety of efforts to promote collaboration between computer scientists and geoscientists to solve complex scientific questions". More technically, geoinformatics has been described as "the science and technology dealing with the structure and character of spatial information, its capture, its classification and qualification, its storage, processing, portrayal and dissemination, including the infrastructure necessary to secure optimal use of this information" or "the art, science or technology dealing with the acquisition, storage, processing production, presentation and dissemination of geoinformation". Along with the thriving of data science and artificial intelligence since the 2010s, the field of geoinformatics has also incorporated the latest methodology and technical progress from the cyberinfrastructure ecosystem.
Geoinformatics has at its core the technologies supporting the processes of acquisition, analysis and visualization of spatial data. Both geomatics and geoinformatics include and rely heavily upon the theory and practical implications of geodesy. Geography and earth science increasingly rely on digital spatial data acquired from remotely sensed images analyzed by geographical information systems (GIS), photo interpretation of aerial photographs, and Web mining. Geoinformatics combines geospatial analysis and modeling, development of geospatial databases, information systems design, human-computer interaction and both wired and wireless networking technologies. Geoinformatics uses geocomputation and geovisualization for analyzing geoinformation.
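As a small, self-contained illustration of the kind of geocomputation involved, the sketch below computes a great-circle distance with the haversine formula; the coordinates are arbitrary sample values, and practical geoinformatics work would normally rely on dedicated geospatial libraries and projected coordinate systems rather than hand-rolled code.

```python
# Minimal geocomputation example: great-circle distance via the haversine formula.
# Coordinates are arbitrary sample values (approximate city locations).
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two (lat, lon) points given in degrees."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: distance between two sample points (roughly Berlin and Paris).
print(f"{haversine_km(52.52, 13.405, 48.8566, 2.3522):.0f} km")  # about 880 km
```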
Areas related to geoinformatics include:
Research
Research in this field is used to support global and local environmental, energy and security programs. The Geographic Information Science and Technology group of Oak Ridge National Laboratory is supported by various government departments and agencies, including the United States Department of Energy. It is currently the only group in the United States Department of Energy national laboratory system to focus on advanced theory and application research in this field. A great deal of interdisciplinary research involves geoinformatics together with fields including computer science, information technology, software engineering, biogeography, geography, conservation, architecture, spatial analysis and reinforcement learning.
Applications
Many fields benefit from geoinformatics, including urban planning and land use management, in-car navigation systems, virtual globes, land surveying, public health, local and national gazetteer management, environmental modeling and analysis, military, transport network planning and management, agriculture, meteorology and climate change, oceanography and coupled ocean and atmosphere modelling, business location planning, architecture and archeological reconstruction, telecommunications, criminology and crime simulation, aviation, biodiversity conservation and maritime transport.
The importance of the spatial dimension in assessing, monitoring and modelling various issues and problems related to sustainable management of natural resources is recognized all over the world.
Geoinformatics has become a very important technology for decision-makers across a wide range of disciplines: industry and the commercial sector, environmental agencies, local and national government, research and academia, national survey and mapping organisations, international organisations, the United Nations, emergency services, public health and epidemiology, crime mapping, transportation and infrastructure, information technology industries, GIS consulting firms, environmental management agencies, the tourist industry, utility companies, market analysis and e-commerce, mineral exploration, seismology and others. Many government and non-governmental agencies have started to use spatial data for managing their day-to-day activities.
See also
Cyberinfrastructure
Data science
Informatics
Geographic information science
Geomathematics
Organizations
Federation of Earth Science Information Partners
Open Geospatial Consortium
American Geophysical Union
European Geosciences Union
Geological Society of America
International Association for Mathematical Geosciences
International Union of Geodesy and Geophysics
References
External links
Geoinformatics Jobs Portal
Earth sciences
Geographical technology
Information science by discipline
Computational fields of study
Geographic data and information fields of study
Gradualism
Gradualism, from the Latin gradus ("step"), is a hypothesis, a theory or a tenet assuming that change comes about gradually or that variation is gradual in nature and happens over time as opposed to in large steps. Uniformitarianism, incrementalism, and reformism are similar concepts.
Gradualism can also refer to desired, controlled change in society, institutions, or policies. For example, social democrats and democratic socialists see the socialist society as achieved through gradualism.
Geology and biology
In the natural sciences, gradualism is the theory which holds that profound change is the cumulative product of slow but continuous processes, often contrasted with catastrophism. The theory was proposed in 1795 by James Hutton, a Scottish geologist, and was later incorporated into Charles Lyell's theory of uniformitarianism. Tenets from both theories were applied to biology and formed the basis of early evolutionary theory.
Charles Darwin was influenced by Lyell's Principles of Geology, which explained both uniformitarian methodology and theory. Using uniformitarianism, which states that one cannot make an appeal to any force or phenomenon which cannot presently be observed (see catastrophism), Darwin theorized that the evolutionary process must occur gradually, not in saltations, since saltations are not presently observed, and extreme deviations from the usual phenotypic variation would be more likely to be selected against.
Gradualism is often confused with the concept of phyletic gradualism, a term coined by Stephen Jay Gould and Niles Eldredge to contrast with their model of punctuated equilibrium, which is itself gradualist but argues that most evolution is marked by long periods of evolutionary stability (called stasis), punctuated by rare instances of branching evolution.
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs.
Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution. While the traditional model of palaeontology, the phylogenetic model, states that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes do not happen over a gradual period but in localized, rare, rapid events of branching speciation. Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states.
Politics and society
In politics, gradualism is the hypothesis that social change can be achieved in small, discrete increments rather than in abrupt strokes such as revolutions or uprisings. Gradualism is one of the defining features of political liberalism and reformism. Machiavellian politics pushes politicians to espouse gradualism.
Gradualism in social change implemented through reformist means is a moral principle to which the Fabian Society is committed. In a more general way, reformism is the assumption that gradual changes through and within existing institutions can ultimately change a society's fundamental economic system and political structures; and that an accumulation of reforms can lead to the emergence of an entirely different economic system and form of society than present-day capitalism. That hypothesis of social change grew out of opposition to revolutionary socialism, which contends that revolution is necessary for fundamental structural changes to occur.
In socialist politics and within the socialist movement, the concept of gradualism is frequently distinguished from reformism, with the former insisting that short-term goals need to be formulated and implemented in such a way that they inevitably lead into long-term goals. It is most commonly associated with the libertarian socialist concept of dual power and is seen as a middle way between reformism and revolutionism.
Martin Luther King Jr. was opposed to the idea of gradualism as a method of eliminating segregation. The United States government wanted to try to integrate African-Americans and European-Americans slowly into the same society, but many believed it was a way for the government to put off actually doing anything about racial segregation.
Conspiracy theories
In the terminology of NWO-related speculations, gradualism refers to the gradual implementation of a totalitarian world government.
Linguistics and language change
In linguistics, language change is seen as gradual, the product of chain reactions and subject to cyclic drift. The view that creole languages are the product of catastrophism is heavily disputed.
Morality
Christianity
Buddhism, Theravada and Yoga
Gradualism is the approach of certain schools of Buddhism and other Eastern philosophies (e.g. Theravada or Yoga), that enlightenment can be achieved step by step, through an arduous practice. The opposite approach, that insight is attained all at once, is called subitism. The debate on the issue was very important to the history of the development of Zen, which rejected gradualism, and to the establishment of the opposite approach within the Tibetan Buddhism, after the Debate of Samye. It was continued in other schools of Indian and Chinese philosophy.
Philosophy
Contradictorial gradualism is the paraconsistent treatment of fuzziness developed by Lorenzo Peña which regards true contradictions as situations wherein a state of affairs enjoys only partial existence.
See also
Evolution
Uniformitarianism
Incrementalism
Normalization (sociology)
Reformism
Catastrophism
Saltation
Punctuated equilibrium
Accelerationism
Boiling frog
References
Geology theories
Rate of evolution
Liberalism
Social democracy
Democratic socialism
Historical linguistics
Social theories
Field trip
A field trip or excursion is a journey by a group of associated peers, such as coworkers or school students, to a place away from their normal environment for the purpose of education or leisure, either within their country or abroad.
When arranged by a school administration for students, it is also known as school trip in the United Kingdom, Australia, Kenya, New Zealand and Bangladesh, and school tour in Ireland.
A 2022 study, which used randomized controlled trial data, found that culturally enriching field trips led students to show a greater interest in arts, greater tolerance for people with different views, and boosted their educational outcomes.
Overview
The purpose of the field trip is usually observation for education, non-experimental research or to provide students with experiences outside their everyday activities, such as going camping with teachers and their classmates. The aim of this research is to observe the subject in its natural state and possibly collect samples. More-advantaged children may have already experienced cultural institutions outside of school, and field trips provide common ground for more-advantaged and less-advantaged children to share the same cultural experiences.
Field trips often involve three steps: preparation, activities and follow-up activity. Preparation applies to both the students and the teachers. Teachers often take the time to learn about the destination and the subject before the trip. Activities on the field trips often include: lectures, tours, worksheets, videos and demonstrations. Follow-up activities are generally discussions in the classroom once the field trip is completed.
In Western culture people first come across this method during school years when classes are taken on school trips to visit a geological or geographical feature of the landscape, for example. Much of the early research into the natural sciences was of this form. Charles Darwin is an important example of someone who has contributed to science through the use of field trips.
Popular field trip sites include zoos, nature centers, community agencies such as fire stations and hospitals, government agencies, local businesses, amusement parks, science museums and factories. Field trips provide alternative educational opportunities for children and can benefit the community if they include some type of community service. Field trips also let students take a break from their normal routine and experience more hands-on learning. Places like zoos and nature centers often have an interactive display that allows children to touch plants or animals.
Today, culturally enriching field trips are in decline. Museums across the United States report a steep drop in school tours. For example, the Field Museum in Chicago at one time welcomed more than 300,000 students every year. Recently, the number is below 200,000. Between 2002 and 2007, Cincinnati arts organizations saw a 30 percent decrease in student attendance. A survey by the American Association of School Administrators found that more than half of schools eliminated planned field trips in 2010–11.
Site school
A variation on the field trip is the "site-based program" or "site-school" model, where a class temporarily relocates to a non-school location for an entire week to take advantage of the resources on the site. As with a multi-day field trip, appropriate overnight camping or lodging arrangements are often made to accommodate the experience. The approach was first developed at the Calgary Zoo in Alberta, Canada in 1993, and "Zoo School" was inaugurated in 1994. The Calgary Board of Education then approached the Glenbow Museum and Archives to create a "Museum School" in 1995, followed by the Calgary Science Centre (1996), the University of Calgary (1996), Canada Olympic Park (1997), the Inglewood Bird Sanctuary (1998), Calgary City Hall (2000), Cross Conservation Area (2000), the Calgary Stampede (2002), the Calgary Aero-Space Museum (2005), and the Fire Training Academy (2008). One of the newer schools in Calgary is the Tinker School and Social Enterprise School at STEM Learning Lab (2018). The model spread across Alberta (with 15 sites in Edmonton alone), throughout Canada and in the United States. Global coordination of the model is through the "Beyond the Classroom Network".
Europe
In Europe, School Trip, a 2002 German-Polish film, describes the German students' trip to Poland during the summer.
School trips in East Asia
In Japan, in addition to the one-day field trip, the school trip, called shūgaku ryokō (修学旅行, literally "learning journey"), has a history dating back to 1886 and is now part of the middle school and high school curriculum, with all students participating in such a program. The trip is usually much longer than a single day, lasting a week or even several weeks. The typical locations visited within Japan are regions of national or historical significance, such as the ancient capitals of Kyoto and Nara; Nagasaki, for its experience of nuclear weapons and its historical significance as the sole international port during the country's 17th–19th century isolationist foreign policy (sakoku, さこく); and Nikkō (日光), a popular onsen spa town renowned for its beauty. Travelling abroad is occasionally chosen as an option by some schools.
In other Asian regions and countries, such as South Korea, Taiwan and Singapore, the school trip, when arranged, tends to be a voluntary part of the school curriculum. When Japan was selected as the destination, the Japanese government waived the entry visa requirement.
See also
School bus
Museum education
Excursion
Grand Tour
Experiential learning
References
Research methods
School terminology
Museum education
Types of travel
Childhood
Field research
High school research
Education by method
Educational environment
Education events
Educational projects
Questioning Collapse
Questioning Collapse: Human Resilience, Ecological Vulnerability, and the Aftermath of Empire is a 2009 non-fiction book compiled by editors Patricia A. McAnany and Norman Yoffee that features a series of eleven essays from fifteen authors discussing how societies have developed and evolved throughout history, whether or not they have collapsed, and how ancient and contemporary societies have advanced to the current global society and the issues being faced in modern times. The collection of essays acts as a direct critique, in both its title and its subject matter, of Jared Diamond's book Collapse and, to a lesser extent, Guns, Germs, and Steel.
Begun as a concept at a 2006 special meeting of the American Anthropological Association, the book was further constructed after individual presentations at an October 2007 meeting of archaeologists, cultural anthropologists, and historians, in order to address each of the societies and locations brought up by Diamond in his books. These authors showcased how each society did not collapse but merely changed culturally, politically, or geographically into a new form that continued chronologically with the same traditions and systems, focusing on how the concept of resilience has kept these cultures together even into the modern day. This is expanded upon by including scientific research and vignettes from living members of the covered indigenous cultures.
Reviews of the book were overwhelmingly positive, with critics noting that the expanded data and discussion of broader context, beyond mere criticism of Diamond, improved the book's message and themes and made it well suited for use in university-level courses on historical societal evolution. Some reviewers wished for additional perspectives beyond resilience, as other representations of societal change have been used to critique Diamond's claims and these were not discussed as fully as they could have been; reviewers also wanted the current issue of climate change to be integrated more thoroughly. A controversy arose between the authors and Jared Diamond when he published a highly negative review of the book for the journal Nature, as a part of its editorial staff, without directly stating that Questioning Collapse was a critique of his own books in particular, prompting the authors, alongside Cambridge University Press, to call him out on his conflict of interest.
Background
The idea for creating Questioning Collapse came about during a 2006 meeting at the American Anthropological Association that was specially organized to determine how to respond to the claims made in Diamond's books, particularly Collapse and Guns, Germs, and Steel, and how to do so while explaining to the general public how society has actually progressed throughout history and led to our current world. The essays that make up the book were written to be presented at the meeting symposium and were also presented at a follow-up week-long advanced seminar in October 2007 at the Amerind Foundation. The main claim being addressed was Diamond's argument that the self-interest of leaders and geographic location were the factors that have determined the survival of past societies. The purpose of Questioning Collapse was to instead suggest that societies do not collapse based on such factors, but that societies are ever-evolving entities that exhibit resilience and adapt into new forms with different names rather than dissolving entirely.
Content
The book begins with an introductory chapter that lays out the focus of the following essays, which themselves are split into a series of case studies in three primary sections titled "Human Resilience and Ecological Vulnerability", "Surviving Collapse: Studies of Societal Regeneration", and "Societies in the Aftermath of Empire". All three sections address three "fundamental questions" in different aspects of society and history, specifically the questions of "why are ancient societies portrayed as either successes or failures in the popular media, how can contemporary society be characterized in the shadow of prior empires, and how are contemporary environmental issues, namely global climate change, similar to those of the past." In addition to criticizing Diamond's claims about human actions, the book also responds to other arguments by Diamond, such as overpopulation and environmental mismanagement, by disputing the factual basis of the claims over longer spans of human societal time. The authors argue that the resilience of societies, even those that last for hundreds of years before collapsing quickly, results in a society that migrates to a new form or location while retaining and adapting its cultural traits.
Part one on resilience relating to ecology discusses environmental issues faced by past civilizations and how they adapted to those challenges, with specific examples examining Rapa Nui, the Norse settlements in Greenland, and China's changes throughout the 19th and 20th centuries. Part two is about the resilience of indigenous communities in Asia and the Americas, particularly those of Mesopotamia, the lowland Maya area, and the rapid social and ecological changes faced by tribes in the American Southwest. Lastly, part three reframes these themes around current environmental problems resulting from European colonialism and how they have affected societies including the Inca and places including Rwanda, Hispaniola, Australia, and New Guinea. The book concludes with a final chapter written by J. R. McNeill that raises the broader question of what sustainability truly means for our future endeavors.
The book also features inset sidebars that give photographic examples of living descendants of the societies and populations being discussed, to reinforce the idea that their cultures only changed and were not destroyed. There are additional vignettes throughout each essay chapter that include work and discussion by indigenous scholars from the peoples being discussed and showcase research on their own cultures' histories. The work as a whole features 91 graphical figures, with 24 maps included.
Critical reception
Pacific Affairs reviewer James L. Flexner praised Questioning Collapse for its critical analysis of Diamond's works and debunking not of minor details, but of broad claims made in his works. Flexner notes that the essays are able to get across the idea that "transformation is likely the one inevitable factor in history" and instead of "tragic catastrophe and destruction", it is the perspective that "these processes, while sometimes accompanied by violent upheaval, usually reflect more of the resilience and adaptability of dynamic human cultures" that matters. In a review for the Journal of Cultural Geography, Ryan D. Bergstrom concluded that, while the book has successfully added to knowledge and understanding on the topic of societal collapses, the "truth of how and why societies collapse is likely found somewhere between the arguments made in this book and those of Diamond’s", but adds that those in the field of cultural geography would "applaud the truth-seeking process" and find the information useful. Writing for Transforming Anthropology, Luis Silva Barros complimented the book's explanation and use of the "process" view of societal development and collapse, as compared to Diamond's "results" view, suggesting that the book would be a "very useful addition to any upper-level undergraduate or graduate course syllabus" if supported with background material and in-class discussion.
Patrick Vinton Kirch in the Journal of Anthropological Research positively stated that the "collection of provocative essays" contained in Questioning Collapse furthers the conversation among scholars about what is the proper way to frame historical events and "whether they even should try to read lessons from the past in order to address contemporary problems". The Journal of World History's Emily Wakild pointed out that while the number of different authors involved makes the separate essays somewhat uneven when read together, the thematic organization of the sections helps to smooth over the general tone issues and they manage to "incisively show the weaknesses of Diamond's narrative(s)". Covering the book in Human Ecology, Joseph Tainter criticized how some of the authors went along with Diamond's "progressivist framework" on societies choosing to succeed or fail and should have more directly debunked Diamond's central claim as several of the other authors in the book did. Tainter concluded that it is a "difficult task" that the fifteen authors have taken on to counter popular science misinformation, a "noble attempt to make an unfortunate situation better", and that they deserve "our respect and admiration" for it.
For the International Journal of Comparative Sociology, Kirk S. Lawrence considered the book perfect for college-level courses and said that it "deserved to be read," though both of Diamond's books need to be read to properly understand the critiques and breakdowns of his arguments found in Questioning Collapse. Science's Krista Lewis praised the book as much more than just "Diamond-bashing" over Diamond's historical and theoretical inaccuracies, noting that it also gives "lively debate, critique, and engagement" on the broader issues brought up by Diamond in the first place, such as how his and other archaeological romanticism of the past has ignored the "cultural and historical perspectives" of the indigenous peoples being talked about. While supporting the book for its focus on the "environmental context of human endeavors" against Diamond's claims, T. J. Wilkinson in American Antiquity wished that additional perspectives and data contradicting Diamond's claims had also been utilized, such as the emerging field of global change archaeology. Wilkinson hoped for a future volume that could tie together all of these other scientific perspectives into a single work for the public, and also more comprehensively integrate discussions of climate change into the historical narrative.
Jared Diamond review controversy
On February 17, 2010, Jared Diamond authored a joint book review of Questioning Collapse and Cynthia W. Shelmerdine's The Cambridge Companion to the Aegean Bronze Age in the journal Nature. In the review, Diamond heavily criticized Questioning Collapse without mentioning that the book was meant to be a direct critique of his own works. The authors released an open letter on March 22, 2010 through Cambridge University Press calling out Diamond for his conflict of interest and for the multiple errors and pieces of misinformation in his Nature review regarding the content of the book. The publicist for Cambridge University Press, Caitlin Graf, stated that the open letter was originally sent to Nature to be published in response to the review, but it was refused. The Press therefore, keeping "with our mission to advance learning, knowledge, and research worldwide", published the letter itself, with Graf extending an invitation for Diamond to respond to the letter and "engage in a conversation". A different response by Patricia A. McAnany and Norman Yoffee was later accepted and published by Nature on April 14, 2010.
References
2009 non-fiction books
2009 in the environment
Cambridge University Press books
Environmental non-fiction books
Societal collapse
Works about the theory of history | 0.793415 | 0.962982 | 0.764044 |
GxP | GxP is a general abbreviation for the "good practice" quality guidelines and regulations. The "x" stands for the various fields, including the pharmaceutical and food industries, for example good agricultural practice, or GAP.
A "c" or "C" is sometimes added to the front of the initialism. The preceding "c" stands for "current." For example, cGMP is an abbreviation for "current good manufacturing practice". The term GxP is frequently used to refer in a general way to a collection of quality guidelines.
Purpose
The purpose of the GxP quality guidelines is to ensure a product is safe and meets its intended use. GxP guides quality manufacture in regulated industries including food, drugs, medical devices, and cosmetics.
The most central aspects of GxP are Good Documentation Practices (GDP), which are expected to be ALCOA:
Attributable: documents are attributable to an individual
Legible: they are readable
Contemporaneously Recorded: not dated in the past or the future, but when the documented task is completed
Original or a True Copy
Accurate: accurately reflecting the activity documented
Permanent
The products that are the subject of GxP are expected to have:
Traceability: the ability to reconstruct the development history of a drug or medical device.
Accountability: the ability to resolve who has contributed what to the development and when.
GxPs require that a Quality System be established, implemented, documented, and maintained.
As explained above, documentation is a critical tool for ensuring GxP adherence. For more information, see good manufacturing practice.
Examples of GxPs
Good agricultural and collection practices, or GACP(s)
Good agricultural practice, or GAP
Good auditing practice, or GAP
Good automated laboratory practice, or GALP
Good automated manufacturing practice, or GAMP
Good business practice, or GBP
Good cell culture practice, or GCCP
Good clinical data management practice, or GCDMP
Good clinical laboratory practice, or GCLP
Good clinical practice, or GCP
Good documentation practice, or GDP, or GDocP (to distinguish from "good distribution practice")
Good distribution practice, or GDP
Good engineering practice, or GEP
Good financial practice, or GFP
Good guidance practice, or GGP
Good hygiene practice, or GHP
Good laboratory practice, or GLP
Good machine learning practice, or GMLP
Good management practice, or GMP
Good manufacturing practice, or GMP
Good microbiological practice, or GMiP
Good participatory practice, or GPP
Good pharmacovigilance practice, or GPvP or even GVP
Good pharmacy practice, or GPP
Good policing practice, or GPP
Good recruitment practice, or GRP
Good research practice, or GRP
Good safety practice, or GSP
Good storage practice, or GSP
Good tissue practice, or GTP
See also
Best practice
European Medicines Agency (EMA)
Food and Drug Administration (FDA)
International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH)
Organisation for Economic Co-operation and Development (OECD)
Validation (drug manufacture)
References
Good | 0.772811 | 0.988647 | 0.764037 |
Problem-posing education | Problem-posing education, coined by the Brazilian educator Paulo Freire in his 1970 book Pedagogy of the Oppressed, is a method of teaching that emphasizes critical thinking for the purpose of liberation. Freire used problem posing as an alternative to the banking model of education.
Origins
Freire's pedagogy emerged from his observations and experiences working as an instructor in literacy programs with peasant laborers in Brazil. During this work, Freire became aware of the economic, political, and social domination resulting from paternalism. Paternalism leads to a culture of silence, which keeps people from confronting their oppression. He turned this philosophy to pedagogy because "the whole education system was one of the major instruments for the maintenance of this culture of silence".
Freire's philosophy of education centers on critical consciousness, whereby the oppressed recognize the causes of their oppression "so that through transforming action they can create a new situation, one which makes possible the pursuit of fuller humanity". Problem-posing education is the path to critical consciousness.
Freire's work has its roots in the constructivist theory of learning, and specifically the work of Jean Piaget and John Dewey. The constructivist theory holds that knowledge is constructed by individuals by using their experiences, which is what Freire drew upon in developing his pedagogy. In Pedagogy of the Oppressed Freire wrote:
Education as the practice of freedom—as opposed to education as the practice of domination—denies that man is abstract, isolated, independent, and unattached to the world; it also denies that the world exists as a reality apart from people. Authentic reflection considers neither abstract man nor the world without people, but people in their relations with the world.
Philosophy
The philosophy of problem-posing education is the foundation of modern critical pedagogy. Problem-posing education solves the student–teacher contradiction by recognizing that knowledge is not deposited from one (the teacher) to another (the student) but is instead formulated through dialogue between the two. Freire's argument concludes that "authentic education is not carried on by 'A' for 'B' or by 'A' about 'B', but rather by 'A' with 'B'". The representation of knowledge rather than the imposition of it leads to liberation.
Method
As a method of teaching, problem-posing involves "listening ..., dialogue ..., and action". Many models for applying problem-posing in the classroom have been formulated since Freire first coined the term.
One of the most influential models is the book Freire for the Classroom: A Sourcebook for Liberatory Teaching, edited by Ira Shor. When teachers implement problem-posing education in the classroom, they approach students as fellow learners and partners in dialogue (or dialoguers), which creates an atmosphere of hope, love, humility, and trust. This is done through six points of reference:
Learners (students/teachers in dialogue) approach their acts of knowing as grounded in individual experience and circumstance.
Learners approach the historical and cultural world as a transformable reality shaped by human ideological representations of reality.
Learners make connections between their own conditions and the conditions produced through the making of reality.
Learners consider the ways that they can shape this reality through their methods of knowing. This new reality is collective, shared, and shifting.
Learners develop literacy skills that put their ideas into print, thus giving potency to the act of knowing.
Learners identify the myths in the dominant discourse and work to destabilize these myths, ending the cycle of oppression.
Examples
The Montessori method, developed by Maria Montessori, is an example of problem-posing education in an early childhood model.
Ira Shor, a professor of Composition and Rhetoric at CUNY, who has worked closely with Freire, also advocates a problem posing model in his use of critical pedagogy. He has published on the use of contract grading, the physical set-up of the classroom, and the political aspects of student and teacher roles.
James D. Kirylo, in his book Paulo Freire: The Man from Recife, reiterated Freire's thought and stated that a problem-posing education is one where human beings are viewed as conscious beings who are unfinished, yet in the process of becoming.
Other advocates of problem-posing critical pedagogy include Henry Giroux, Peter McLaren, and bell hooks.
See also
Inquiry-based learning
Problem-based learning
Problem solving
Unschooling
References
Footnotes
Bibliography
Pedagogy | 0.770173 | 0.992 | 0.764012 |
Amatonormativity | Amatonormativity is the set of societal assumptions that everyone prospers with an exclusive romantic relationship. Elizabeth Brake coined the neologism to capture societal assumptions about romance. Brake wanted to describe the pressure she received by many to prioritize marriage in her own life when she did not want to. Amatonormativity extends beyond social pressures for marriage to include general pressures involving romance.
Etymology
The word amatonormativity comes from amatus, which is the Latin word for "loved", and normativity, referring to societal norms. Another closely related word is amative, which the Merriam-Webster dictionary defines as "strongly moved by love and especially sexual love" and "relating to or indicative of love". Amorous is a closely related word also derived from amatus. Related terms include allonormativity, which means a worldview that assumes all people experience sexual and romantic attraction, and compulsory sexuality, which means social norms and practices that marginalize non-sexuality.
The term was modeled after the term heteronormativity, the belief that heterosexuality is the default for sexual orientation. Normative bias against ethical non-monogamy in particular is instead known as mononormativity.
Examples
Elizabeth Brake describes the term as a pressure or desire for monogamy, romance, and/or marriage.
The desire to find relationships that are romantic, sexual, monogamous, and lifelong has many social consequences. People who are asexual, aromantic, and/or nonmonogamous become social oddities. According to researcher Bella DePaulo, it stigmatizes single people as incomplete and pushes romantic partners to stay in unhealthy relationships because of their fear of being single.
According to Brake, one way in which amatonormativity is institutionally applied is through the law and morality surrounding marriage. Loving friendships, queerplatonic relationships, and other relationships are not given the same legal protections that romantic partners are given through marriage.
In her 2012 book Minimizing Marriage, Brake defines amatonormativity as "the widespread assumption that everyone is better off in an exclusive, romantic, long-term coupled relationship, and that everyone is seeking such a relationship."
See also
Allonormativity
Aromanticism
Criticism of marriage
Discrimination against asexual people
Heteronormativity
Polyamory
Relationship anarchy
References
External links
Anti-LGBTQ sentiment
Aromanticism
Feminist terminology
Gender-related prejudices
Intimate relationships
LGBTQ erasure
Neologisms
Philosophy of love
Romance | 0.770981 | 0.990958 | 0.76401 |
Superorganism | A superorganism, or supraorganism, is a group of synergetically-interacting organisms of the same species. A community of synergetically-interacting organisms of different species is called a holobiont.
Concept
The term superorganism is used most often to describe a social unit of eusocial animals in which division of labour is highly specialised and individuals cannot survive by themselves for extended periods. Ants are the best-known example of such a superorganism. A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective", phenomena being any activity "the hive wants" such as ants collecting food and avoiding predators, or bees choosing a new nest site. In challenging environments, microorganisms collaborate and evolve together to process unlikely sources of nutrients such as methane. This process, called syntrophy ("eating together"), might be linked to the evolution of eukaryote cells and involved in the emergence or maintenance of life forms in challenging environments on Earth and possibly other planets. Superorganisms tend to exhibit homeostasis, power law scaling, persistent disequilibrium and emergent behaviours.
The term was coined in 1789 by James Hutton, the "father of geology", to refer to Earth in the context of geophysiology. The Gaia hypothesis of James Lovelock, and Lynn Margulis as well as the work of Hutton, Vladimir Vernadsky and Guy Murchie, have suggested that the biosphere itself can be considered a superorganism, but that has been disputed. This view relates to systems theory and the dynamics of a complex system.
The concept of a superorganism raises the question of what is to be considered an individual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: "planets, unlike animals, are not products of evolution. Therefore we are entitled to be highly skeptical (or even outright dismissive) about whether to expect something akin to a 'superorganism'". He concludes that "the superorganism analogy is unwarranted".
Some scientists have suggested that individual human beings can be thought of as "superorganisms", as a typical human digestive system contains 10¹³ to 10¹⁴ microorganisms whose collective genome, the microbiome studied by the Human Microbiome Project, contains at least 100 times as many genes as the human genome itself. Salvucci wrote that the superorganism is another level of integration that is observed in nature. These levels include the genomic, the organismal and the ecological levels. The genomic structure of organisms reveals the fundamental role of integration and gene shuffling along evolution.
In social theory
The 19th-century thinker Herbert Spencer coined the term super-organic to focus on social organization (the first chapter of his Principles of Sociology is entitled "Super-organic Evolution"), though this was apparently a distinction between the organic and the social, not an identity: Spencer explored the holistic nature of society as a social organism while distinguishing the ways in which society did not behave like an organism. For Spencer, the super-organic was an emergent property of interacting organisms, that is, human beings. And, as has been argued by D. C. Phillips, there is a "difference between emergence and reductionism".
The economist Carl Menger expanded upon the evolutionary nature of much social growth but never abandoned methodological individualism. Many social institutions arose, Menger argued, not as "the result of socially teleological causes, but the unintended result of innumerable efforts of economic subjects pursuing 'individual' interests".
Both Spencer and Menger argued that because individuals choose and act, any social whole should be considered less than an organism, but Menger emphasized that more strongly. Spencer used the idea to engage in extended analysis of social structure and conceded that it was primarily an analogy. For Spencer, the idea of the super-organic best designated a distinct level of social reality above that of biology and psychology, not a one-to-one identity with an organism. Nevertheless, Spencer maintained that "every organism of appreciable size is a society", which has suggested to some that the issue may be terminological.
The term superorganic was adopted by the anthropologist Alfred L. Kroeber in 1917. Social aspects of the superorganism concept are analysed by Alan Marshall in his 2002 book "The Unity of Nature". Finally, recent work in social psychology has offered the superorganism metaphor as a unifying framework to understand diverse aspects of human sociality, such as religion, conformity, and social identity processes.
In cybernetics
Superorganisms are important in cybernetics, particularly biocybernetics, since they are capable of the so-called "distributed intelligence", a system composed of individual agents that have limited intelligence and information. They can pool resources and so can complete goals that are beyond reach of the individuals on their own. Existence of such behavior in organisms has many implications for military and management applications and is being actively researched.
Superorganisms are also considered dependent upon cybernetic governance and processes. This is based on the idea that a biological system – in order to be effective – needs a sub-system of cybernetic communications and control. This is demonstrated in the way a mole rat colony uses functional synergy and cybernetic processes together.
Joël de Rosnay also introduced a concept called "cybionte" to describe cybernetic superorganism. The notion associates superorganism with chaos theory, multimedia technology, and other new developments.
See also
Collective intelligence
Group mind (science fiction)
Holobiont
Organismic computing
Quorum sensing, collective behaviour of bacteria
Stigmergy
Siphonophorae
Gaia hypothesis
References
Literature
Jürgen Tautz, Helga R. Heilmann: The Buzz about Bees – Biology of a Superorganism, Springer-Verlag 2008.
Bert Hölldobler, E. O. Wilson: "The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies", W.W. Norton, 2008.
External links
People Are Human-Bacteria Hybrid, Wired Magazine, October 11, 2004
First Bees and Ants, and Now People: This Evolutionary Transition Might Be Coming for Humanity, Haaretz Magazine, November 19, 2022
Biocybernetics
Collective intelligence
Cybernetics
Holism
Biological classification
Emergence
Management cybernetics | 0.76921 | 0.993233 | 0.764004 |
Oxidative phosphorylation | Oxidative phosphorylation (UK , US ) or electron transport-linked phosphorylation or terminal oxidation is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis.
The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH2. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy.
In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's plasma membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors.
The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process called electron transport. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme. The ATP synthase is a rotary mechanical motor.
Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities.
Chemiosmosis
Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled. This means one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis. A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, which is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge.
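These two contributions are often combined into a single expression for the proton-motive force (Δp), written here in a commonly used form with ΔΨ denoting the membrane potential and ΔpH the pH difference across the membrane:

Δp = ΔΨ − (2.303 RT/F) ΔpH

where R is the gas constant, T the absolute temperature and F the Faraday constant. At physiological temperature the factor 2.303 RT/F is roughly 60 mV, so each unit of pH difference contributes on the order of 60 mV to the proton-motive force; sign conventions for ΔpH vary between sources.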
ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP.
The two components of the proton-motive force are thermodynamically equivalent: in mitochondria, the largest part of the energy is provided by the potential; in alkaliphilic bacteria the electrical energy even has to compensate for a counteracting inverse pH difference. Inversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum, it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase.
The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP.
Electron and proton transfer molecules
The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space.
Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone.
Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 nm.
Eukaryotic electron transport chains
Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point.
In eukaryotes, the enzymes in this electron transport system use the energy released by the transfer of electrons from NADH to O2 to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome.
NADH-coenzyme Q oxidoreductase (complex I)
NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme with the mammalian complex I having 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion.
The reaction that is catalyzed by this enzyme is the two electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane:
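NADH + Q + 5 H+ (matrix) → NAD+ + QH2 + 4 H+ (intermembrane space)

In this commonly cited stoichiometry, four protons are pumped across the membrane for each NADH oxidized; the equation also accounts for the protons consumed and released in the redox chemistry itself.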
The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2. The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I.
As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2).
Succinate-Q oxidoreductase (complex II)
Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient.
In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis.
Electron transfer flavoprotein-Q oxidoreductase
Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer.
In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness.
Q-cytochrome c oxidoreductase (complex III)
Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, a [2Fe–2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and an oxidized ferric (+3) state as the electrons are transferred through the protein.
The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron.
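Summed over the two steps of the Q cycle described below, the net reaction is commonly written as:

QH2 + 2 oxidized cytochrome c + 2 H+ (matrix) → Q + 2 reduced cytochrome c + 4 H+ (intermembrane space)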
As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates, first, QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q.−, which is the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme.
As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced.
Cytochrome c oxidase (complex IV)
Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc.
This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen:
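4 reduced cytochrome c + O2 + 8 H+ (matrix) → 4 oxidized cytochrome c + 2 H2O + 4 H+ (intermembrane space)

In this commonly cited stoichiometry, four of the matrix protons are consumed in reducing O2 to water while the other four are pumped across the membrane.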
Alternative reductases and oxidases
Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane.
Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen.
The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress.
Organization of complexes
The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model.
Prokaryotic electron transport chains
In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. In the bacteria, oxidative phosphorylation in Escherichia coli is understood in most detail, while archaeal systems are at present poorly understood.
The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents, which are listed below. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials.
As shown above, E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively.
Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH.
Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems.
In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, they switch to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen.
ATP synthase (complex V)
ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions.
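Writing the number of translocated protons per synthesized ATP as n, to reflect this variability, the overall reaction catalyzed by the enzyme can be sketched as:

ADP + Pi + n H+ (intermembrane space) ⇌ ATP + H2O + n H+ (matrix)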
This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run from right to left, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the opposite direction; it proceeds from left to right, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolysing ATP.
ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece are called F1 and are the site of ATP synthesis. The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme.
As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP.
This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site (shown in brown in the diagram). The protein then closes up around the molecules and binds them loosely – the "loose" state (shown in red). The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state (shown in pink) binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle.
In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases.
Oxidative phosphorylation - energetics
The transport of electrons from the redox pair NAD+/NADH to the final redox pair 1/2 O2/H2O can be summarized as
1/2 O2 + NADH + H+ → H2O + NAD+
The potential difference between these two redox pairs is 1.14 volts, which is equivalent to about −52 kcal/mol of NADH oxidized, or −2600 kJ per 6 mol of O2.
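These figures follow from the standard relation between the free-energy change and the redox potential difference, ΔG°′ = −nFΔE°′, where n is the number of electrons transferred and F is the Faraday constant. For the two-electron oxidation of NADH:

ΔG°′ = −2 × 96.485 kJ·V⁻¹·mol⁻¹ × 1.14 V ≈ −220 kJ/mol ≈ −52.6 kcal/mol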
When one NADH is oxidized through the electron transfer chain, three ATPs are produced, which is equivalent to 7.3 kcal/mol x 3 = 21.9 kcal/mol.
The efficiency of energy conservation can be calculated using the following formula:
Efficiency = (21.9 x 100%) / 52 = 42%
So we can conclude that when NADH is oxidized, about 42% of the energy is conserved in the form of three ATPs and the remaining 58% is lost as heat (unless the chemical energy of ATP under physiological conditions was underestimated).
Reactive oxygen species
Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive.
These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging.
The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage" when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential.
To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell.
Oxidative phosphorylation in hypoxic/anoxic conditions
As oxygen is fundamental for oxidative phosphorylation, a shortage in O2 level can alter ATP production rates. Under anoxic conditions, ATP-synthase will commit 'cellular treason' and run in reverse, forcing protons from the matrix back into the inner membrane space, using up ATP in the process. The proton motive force and ATP production can be maintained by intracellular acidosis. Cytosolic protons that have accumulated with ATP hydrolysis and lactic acidosis can freely diffuse across the mitochondrial outer-membrane and acidify the inter-membrane space, hence directly contributing to the proton motive force and ATP production.
Inhibitors
There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use.
Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A inhibit the transfer of electrons from NADH to coenzyme Q.
Carbon monoxide, cyanide, hydrogen sulphide and azide effectively inhibit cytochrome oxidase. Carbon monoxide reacts with the reduced form of the cytochrome, while cyanide and azide react with the oxidised form. The antibiotic antimycin A and British anti-Lewisite, an antidote used against chemical weapons, are two important inhibitors of the site between cytochromes b and c1.
Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress.
History
The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation was coined in 1939.
For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997.
See also
Respirometry
TIM/TOM Complex
Notes
References
Further reading
Introductory
Advanced
General resources
Animated diagrams illustrating oxidative phosphorylation, Wiley and Co, Concepts in Biochemistry
On-line biophysics lectures, Antony Crofts, University of Illinois at Urbana–Champaign
ATP Synthase, Graham Johnson
Structural resources
PDB molecule of the month:
ATP synthase
Cytochrome c
Cytochrome c oxidase
Interactive molecular models at Universidade Fernando Pessoa:
NADH dehydrogenase
succinate dehydrogenase
Coenzyme Q - cytochrome c reductase
cytochrome c oxidase
Cellular respiration
Integral membrane proteins
Metabolism
Redox
Spoon theory
Spoon theory is a metaphor describing the amount of physical or mental energy that a person has available for daily activities and tasks, and how it can become limited. The term was coined in a 2003 essay by American writer Christine Miserandino. In the essay, Miserandino describes her experience with chronic illness, using a handful of spoons as a metaphor for units of energy available to perform everyday actions. The metaphor has since been used to describe a wide range of disabilities, mental health issues, forms of marginalization, and other factors that might place unseen burdens on individuals.
Origin
In her 2003 essay "The Spoon Theory", American writer Christine Miserandino tells a story about a time she told a friend about her experience with lupus. As they were at a restaurant, Miserandino grabbed spoons and gave them to her friend. Miserandino used the spoons to demonstrate that people with chronic illness often start their days off with limited quantities of energy. The number of spoons represented how much energy she had to spend throughout the day. As Miserandino's friend stated the different tasks she completed throughout the day, Miserandino took away a spoon for each activity. The exercise demonstrated how people with chronic illness may plan their actions in advance in order to conserve their energy.
Chronic illness and spoon theory
Those with chronic illness or pain have reported feelings of difference and division between themselves and people without disabilities. This theory and the claiming of the term spoonie are used to build communities for those with chronic illness that can support each other.
Because of this, many people with chronic illness have to plan around and ration their energy and activities throughout the day. Ordinary activities must often be curtailed or avoided, because they carry an invisible cost in terms of spoons available later for other things. This has been described as being a major concern of people with a (fatigue-related) disability or chronic condition/illness/disease because people without these disabilities are not typically concerned with the energy expended during ordinary tasks such as bathing and getting dressed. The theory explains the difference and facilitates discussion between those with limited energy reserves and those with (seemingly) limitless energy reserves.
Other uses
Spoon theory has since spread throughout the disability community and even to marginalized groups to describe the exhaustion that may characterize their specific situations. It is most commonly used to refer to the experience of having an invisible disability, because people with no outward symptoms or symbols of their condition are often perceived as lazy, inconsistent or having poor time management skills by those who have no first-hand knowledge of living with a chronic illness or disability. Naomi Chainey has described how the term also spread to use by some in the wider disability community, and how the non-disabled community eventually tried to appropriate it for other uses, referring to non-chronic forms of fatigue and mental exhaustion, a development she attributes to people with invisible disabilities being a sometimes marginalized group even within the disability community.
Those with mental health issues such as anxiety or depression may similarly find it challenging to go about seemingly simple tasks throughout the day, or to deal with a crisis. Spoon theory could even be used to show the exhaustion of having a newborn baby, as this situation often leads to a chronic lack of sleep on the part of the baby's caregiver(s).
See also
References
Bibliography
Further reading
2003 neologisms
disability
psychological theories
IMRAD
In scientific writing, IMRAD or IMRaD (Introduction, Methods, Results, and Discussion) is a common organizational structure (a document format). IMRaD is the most prominent norm for the structure of a scientific journal article of the original research type.
Overview
Original research articles are typically structured in this basic order:
Introduction – Why was the study undertaken? What was the research question, the tested hypothesis or the purpose of the research?
Methods – When, where, and how was the study done? What materials were used or who was included in the study groups (patients, etc.)?
Results – What answer was found to the research question; what did the study find? Was the tested hypothesis true?
Discussion – What might the answer imply and why does it matter? How does it fit in with what other researchers have found? What are the perspectives for future research?
The plot and flow of the story in the IMRaD style of writing are explained by a 'wine glass model' or hourglass model.
Writing compliant with the IMRaD format typically first presents (a) the subject that positions the study in a wide perspective and (b) an outline of the study; it develops through (c) the study method and (d) the results; and it concludes with (e) an outline and conclusions of the findings for each topic and (f) the meaning of the study from a wide and general point of view. Here, (a) and (b) are covered in the Introduction, (c) and (d) in the Methods and Results sections respectively, and (e) and (f) in the Discussion or Conclusion.
In this sense, to explain how information is arranged in IMRaD writing, the 'wine glass model' (see the pattern diagram shown in Fig. 1) is helpful (see pp. 2–3 of Hilary Glasman-Deal). As mentioned in that textbook, the 'wine glass model' scheme has two characteristics. The first is its "top-bottom symmetric shape", and the second is its "changing width", i.e. "the top is wide and it narrows towards the middle, and then widens again as it goes down toward the bottom".
The first characteristic, the "top-bottom symmetric shape", represents the symmetry of the story development. Note that the shape of the top trapezoid (representing the structure of the Introduction) and the shape of the trapezoid at the bottom are reversed. This expresses that the subjects introduced in the Introduction are taken up again, in a form suitable for the Discussion/Conclusion, in reverse order in those sections. (See the relationship between (a), (b) and (e), (f) mentioned above.)
The second characteristic, the "changing width" of the schema shown in Fig. 1, represents the change in generality of the viewpoint: along the flow of the story development, the diagram is drawn wider where the viewpoint is more general and narrower where it is more specialized and focused.
As the standard format of academic journals
The IMRAD format has been adopted by a steadily increasing number of academic journals since the first half of the 20th century. The IMRAD structure has come to dominate academic writing in the sciences, most notably in empirical biomedicine. The structure of most public health journal articles reflects this trend. Although the IMRAD structure originates in the empirical sciences, it now also regularly appears in academic journals across a wide range of disciplines. Many scientific journals now not only prefer this structure but also use the IMRAD acronym as an instructional device in the instructions to their authors, recommending the use of the four terms as main headings. For example, it is explicitly recommended in the "Uniform Requirements for Manuscripts Submitted to Biomedical Journals" issued by the International Committee of Medical Journal Editors (previously called the Vancouver guidelines): "The text of observational and experimental articles is usually (but not necessarily) divided into the following sections: Introduction, Methods, Results, and Discussion. This so-called "IMRAD" structure is not an arbitrary publication format but rather a direct reflection of the process of scientific discovery. Long articles may need subheadings within some sections (especially Results and Discussion) to clarify their content. Other types of articles, such as case reports, reviews, and editorials, probably need to be formatted differently."
The IMRAD structure is also recommended for empirical studies in the 6th edition of the publication manual of the American Psychological Association (APA style). The APA publication manual is widely used by journals in the social, educational and behavioral sciences.
Benefits
The IMRAD structure has proved successful because it facilitates literature review, allowing readers to navigate articles more quickly to locate material relevant to their purpose. But the neat order of IMRAD rarely corresponds to the actual sequence of events or ideas of the research presented; the IMRAD structure effectively supports a reordering that eliminates unnecessary detail, and allows the reader to assess a well-ordered and noise-free presentation of the relevant and significant information. It allows the most relevant information to be presented clearly and logically to the readership, by summarizing the research process in an ideal sequence and without unnecessary detail.
Caveats
The idealised sequence of the IMRAD structure has on occasion been criticised for being too rigid and simplistic. In a radio talk in 1964 the Nobel laureate Peter Medawar criticised this text structure for not giving a realistic representation of the thought processes of the writing scientist: "… the scientific paper may be a fraud because it misrepresents the processes of thought that accompanied or gave rise to the work that is described in the paper". Medawar's criticism was discussed at the XIXth General Assembly of the World Medical Association in 1965. While respondents may argue that it is too much to ask from such a simple instructional device to carry the burden of representing the entire process of scientific discovery, Medawar's caveat expressed his belief that many students and faculty throughout academia treat the structure as a simple panacea. Medawar and others have given testimony both to the importance and to the limitations of the device.
Abstract considerations
In addition to the scientific article itself a brief abstract is usually required for publication. The abstract should, however, be composed to function as an autonomous text, even if some authors and readers may think of it as an almost integral part of the article. The increasing importance of well-formed autonomous abstracts may well be a consequence of the increasing use of searchable digital abstract archives, where a well-formed abstract will dramatically increase the probability for an article to be found by its optimal readership. Consequently, there is a strong recent trend toward developing formal requirements for abstracts, most often structured on the IMRAD pattern, and often with strict additional specifications of topical content items that should be considered for inclusion in the abstract. Such abstracts are often referred to as structured abstracts. The growing importance of abstracts in the era of computerized literature search and information overload has led some users to modify the IMRAD acronym to AIMRAD, in order to give due emphasis to the abstract.
Heading style variations
Usually, the IMRAD article sections use the IMRAD words as headings. A few variations can occur, as follows:
Many journals have a convention of omitting the "Introduction" heading, based on the idea that the reader who begins reading an article does not need to be told that the beginning of the text is the introduction. This print-era proscription is fading since the advent of the Web era, when having an explicit "Introduction" heading helps with navigation via document maps and collapsible/expandable TOC trees. (The same considerations are true regarding the presence or proscription of an explicit "Abstract" heading.)
In some journals, the "Methods" heading may vary, being "Methods and materials", "Materials and methods", or similar phrases. Some journals mandate that exactly the same wording for this heading be used for all articles without exception; other journals reasonably accept whatever each submitted manuscript contains, as long as it is one of these sensible variants.
The "Discussion" section may subsume any "Summary", "Conclusion", or "Conclusions" section, in which case there may or may not be any explicit "Summary", "Conclusion", or "Conclusions" subheading; or the "Summary"/"Conclusion"/"Conclusions" section may be a separate section, using an explicit heading on the same heading hierarchy level as the "Discussion" heading. Which of these variants to use as the default is a matter of each journal's chosen style, as is the question of whether the default style must be forced onto every article or whether sensible inter-article flexibility will be allowed. Journals which use a "Conclusion" or "Conclusions" section along with a statement about the "Aim" or "Objective" of the study in the "Introduction" are following the newly proposed acronym "IaMRDC", which stands for "Introduction with aim, Materials and Methods, Results, Discussion, and Conclusion."
Other elements that are typical although not part of the acronym
Disclosure statements (see main article at conflicts of interest in academic publishing)
Reader's theme that is the point of this element's existence: "Why should I (the reader) trust or believe what you (the author) say? Are you just making money off of saying it?"
Appear either in opening footnotes or a section of the article body
Subtypes of disclosure:
Disclosure of funding (grants to the project)
Disclosure of conflict of interest (grants to individuals, jobs/salaries, stock or stock options)
Clinical relevance statement
Reader's theme that is the point of this element's existence: "Why should I (the reader) spend my time reading what you say? How is it relevant to my clinical practice? Basic research is nice, other people's cases are nice, but my time is triaged, so make your case for 'why bother'"
Appear either as a display element (sidebar) or a section of the article body
Format: short, a few sentences or bullet points
Ethical compliance statement
Reader's theme that is the point of this element's existence: "Why should I believe that your study methods were ethical?"
"We complied with the Declaration of Helsinki."
"We got our study design approved by our local institutional review board before proceeding."
"We got our study design approved by our local ethics committee before proceeding."
"We treated our animals in accordance with our local Institutional Animal Care and Use Committee."
Diversity, equity, and inclusion statement
Reader's theme that is the point of this element's existence: "Why should I believe that your study methods consciously included people?" (for example, avoided inadvertently underrepresenting some people—participants or researchers—by race, ethnicity, sex, gender, or other factors)
"We worked to ensure that people of color and transgender people were not underrepresented among the study population."
"One or more of the authors of this paper self-identifies as living with a disability."
"One or more of the authors of this paper self-identifies as transgender."
Additional standardization (reporting guidelines)
In the late 20th century and early 21st, the scientific communities found that the communicative value of journal articles was still much less than it could be if best practices were developed, promoted, and enforced. Thus reporting guidelines (guidelines for how best to report information) arose. The general theme has been to create templates and checklists with the message to the user being, "your article is not complete until you have done all of these things." In the 1970s, the ICMJE (International Committee of Medical Journal Editors) released the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (Uniform Requirements or URM). Other such standards, mostly developed in the 1990s through 2010s, are listed below. The academic medicine community is working hard on trying to raise compliance with good reporting standards, but there is still much to be done; for example, a 2016 review of instructions for authors in 27 emergency medicine journals found insufficient mention of reporting standards, and a 2018 study found that even when journals' instructions for authors mention reporting standards, there is a difference between a mention or badge and enforcing the requirements that the mention or badge represents.
The advent of a need for best practices in data sharing has expanded the scope of these efforts beyond merely the pages of the journal article itself. In fact, from the most rigorous versions of the evidence-based perspective, the distance to go is still quite formidable. FORCE11 is an international coalition that has been developing standards for how to share research data sets properly and most effectively.
Most researchers cannot be familiar with all of the many reporting standards that now exist, but it is enough to know which ones must be followed in one's own work, and to know where to look for details when needed. Several organizations provide help with this task of checking one's own compliance with the latest standards:
The EQUATOR Network
The BioSharing collaboration (biosharing.org)
Several important webpages on this topic are:
NLM's list at Research Reporting Guidelines and Initiatives: By Organization
The EQUATOR Network's list at Reporting guidelines and journals: fact & fiction
TRANSPOSE (Transparency in Scholarly Publishing for Open Scholarship Evolution), "a grassroots initiative to build a crowdsourced database of journal policies," allowing faster and easier lookup and comparison, and potentially spurring harmonization
Relatedly, SHERPA provides compliance-checking tools, and AllTrials provides a rallying point, for efforts to enforce openness and completeness of clinical trial reporting. These efforts stand against publication bias and against excessive corporate influence on scientific integrity.
See also
Case report
Case series
Eight-legged essay
Five paragraph essay
IRAC
Journal Article Tag Suite (JATS)
Literature review
Meta-analyses
Schaffer paragraph
References
Writing
Academic publishing
Scientific documents
Technical communication
Style guides for technical and scientific writing
Academic terminology
Medical publishing
Olduvai theory
The Olduvai theory states that the current industrial civilization would have a maximum duration of one hundred years, counted from 1930. From 2030 onwards, humankind would gradually return to levels of civilization comparable to those previously experienced, culminating in about a thousand years (3000 AD) in a hunting-based culture, such as existed on Earth three million years ago, when the Oldowan industry developed; hence the name of this theory, put forward by Richard C. Duncan based on his experience in handling energy sources and his love of archaeology.
Originally, the theory was proposed in 1989 under the name "pulse-transient theory". Subsequently, in 1996, its current name was adopted, inspired by the famous archaeological site, but the theory does not rely in any way on data collected at that site. Richard C. Duncan has published several versions since the appearance of his first paper with different parameters and predictions, which has been a source of criticism and controversy.
In 2007, Duncan defined five postulates based on the observation of data on:
The world energy production per capita.
Earth carrying capacity.
The return to the use of coal as a primary source and the peak oil production.
Migratory movements.
The stages of energy utilization in the United States.
In 2009, he published a further update, restating the postulate about world energy consumption per capita in terms of the OECD countries, whereas previously he had compared only with the United States, downplaying the role of emerging economies.
Different people, such as Pedro A. Prieto, based on this and other theories of catastrophic collapse or die-off, have formulated probable scenarios with various dates and social events. On the other hand, there is a group of people, such as Richard Heinberg or Jared Diamond, who also believe in social collapse, but still visualize the possibility of more benevolent scenarios where degrowth can occur with continued welfare.
This theory has been criticized for the way in which the problem of migratory movements is posed and for the ideological orientation of the publishing house that published its articles, the Social Contract Press, which is an advocate of anti-immigration measures and birth control. There are major criticisms of each of its argumentative bases, and different ideologies contrary to such approaches, such as the cornucopians, the advocates of a natural resource-based economy, environmentalist positions and the positions of various nations, also fail to establish a consistent basis for such claims.
History
Richard C. Duncan first proposed the Olduvai theory in 1989 under the title "The pulse-transient theory of industrial civilization." The theory was later supplemented in 1993 with the article "The life-expectancy of industrial civilization: The decline to global equilibrium."
In June 1996, Duncan presented a paper titled "The Olduvai Theory: Falling Towards a post-industrial stone-age Era", adopting the term "Olduvai theory" in place of "pulse-transient theory" used in earlier work. Duncan published a more updated version of his theory under the title "The Peak of World Oil Production and the Road to the Olduvai Gorge" at the 2000 Symposium Summit of the Geological Society of America on November 13, 2000. In 2005, Duncan extended the data set within his theory to 2003 in the article "The Olduvai Theory: Energy, Population, and Industrial Civilization."
Description
The Olduvai theory is a model that is mainly based on the peak oil theory and the per capita energy yield of oil. In the face of a foreseeable depletion, it establishes that the rate of energy consumption and world population growth cannot be the same as that of the 20th century.
Put differently, the Olduvai theory is defined by the rise and fall of the material quality of life (MQOL), the ratio of the production, use and consumption of energy sources (E) to the world population (P): MQOL = E/P. From 1954 to 1979 that ratio grew by about 2.8% annually; from then until 2000 it increased erratically by about 0.2% per year; and from 2000 to 2007 it again grew at an exponential rate due to the development of emerging economies.
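The MQOL ratio itself is simple to compute from energy and population series. Below is a minimal sketch in Python; the yearly figures are hypothetical placeholders chosen only to mimic the three growth regimes just described, not Duncan's actual data.

# Illustrative sketch of the Olduvai material quality of life ratio, MQOL = E / P.
# The figures below are stylized placeholders, not Duncan's data series.
energy = {1954: 1.35e10, 1979: 4.40e10, 2000: 6.34e10, 2007: 7.92e10}  # BOE per year (hypothetical)
population = {1954: 2.7e9, 1979: 4.4e9, 2000: 6.1e9, 2007: 6.6e9}      # people (hypothetical)

def mqol(year):
    """Material quality of life: energy production per capita in a given year."""
    return energy[year] / population[year]

def annual_growth(y0, y1):
    """Compound annual growth rate of MQOL between two years."""
    return (mqol(y1) / mqol(y0)) ** (1.0 / (y1 - y0)) - 1.0

for a, b in [(1954, 1979), (1979, 2000), (2000, 2007)]:
    print(f"{a}-{b}: MQOL growth of about {annual_growth(a, b) * 100:.1f}% per year")

With these placeholder numbers the three periods come out at roughly 2.8%, 0.2% and 2% per year, matching the regimes described above.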
In works before 2000, Richard C. Duncan considered the peak of per capita energy consumption in 1979 as the peak of civilization. Currently, due to the growth since 2000 of the emerging economies, he considers 2010 as the likely date of peak energy per capita. But despite that adjustment, he continues to claim that in 2030 that rate of energy production per capita would be similar to that of 1930, considering that date as the end of the current civilization.
The theory argues that the first reliable signs of collapse are likely to consist of a series of widespread blackouts in the developed world. With the lack of electrical power and fossil fuels, there will be a transition from today's civilization to a situation close to that of the pre-industrial era. He goes on to argue that, in the period following that collapse, the technological level is expected to eventually move from Dark Ages-like levels to those observed in the Stone Age within approximately three thousand years.
Duncan takes as a basis for the formulation of his theory data consisting of the following facts:
Data obtained on world energy production per capita.
The development of population from 1850 to 2005.
The carrying capacity of the Earth in the absence of oil.
Energy utilization stages and their levels of growth in the United States anticipate global ones, due to US dominance.
Estimation of the year 2007 as the time of peak oil.
Migratory movements or attractiveness principle.
According to Duncan, the theory has five postulates:
The exponential growth of world energy production ended in 1970.
The intervals of growth, stagnation, and final decline of energy production per capita in the United States anticipate the intervals of energy production per capita in the rest of the world. In such intervals, there is a shift from oil to coal as the primary energy source.
The final decline of industrial civilization will begin around 2008-2012.
Partial and total blackouts will be reliable indicators of terminal or final decline.
The world population will decline in line with world energy production per capita.
Bases for the formulation of the theory
Carrying capacity limit and demographic explosion
He stipulates that the real long-run capacity of the Earth without oil is between 500 million and 2 billion people, and that this has been exceeded by a factor of three thanks to an artificial welfare bubble due to cheap oil. He argues that since the homeostatic balance of the Earth is at most around 2 billion people, as oil runs out at least 4 billion people will not be able to be supported by the system, resulting in a large mortality rate.
Prior to 1800 the world population was doubling only every 500 to 1,000 years, and by that date the world's living human population was just under 1 billion. With the first industrial revolution and colonialism, the population in the Western world began to double in just over 100 years, with the rest of the world following soon after, reaching 1.55 billion inhabitants by 1900. With the second industrial revolution the world population began to double in less than 100 years, and with maximum oil extraction and the digital revolution it doubled in about 50 years, from 2.4 billion people in 1950 to 6.07 billion people in 2000.
The theory not only predicts that the Earth's net carrying capacity does not allow for such a rate of growth, but also that the population already exceeded that capacity after 1925. It thus describes an apocalyptic scenario in which population growth would slow down in 2012 due to a sudden global economic decline and peak in 2015 at around 6.9 billion (see critiques section), never in history growing to these levels again, with as many deaths as births at any given time (1:1) roughly around the year 2017. Thereafter the number of deaths would exceed the number of births (>1:1) and the world population would begin to contract dramatically, with approximately 6.8 billion people remaining by the end of 2020, 6.5 billion by 2025, 5.26 billion by 2027 and 4.6 billion by 2030 (a reduction of between 1.8 and 2 billion people in five years), until the number of humans stabilizes at between 500 million and 2 billion inhabitants at some point between the years 2050 and 2100.
Duncan compares the forecast of his theory with that of Dennis Meadows in his book The Limits to Growth (1972). While Duncan expects the peak population in 2015 to be around 6.9 billion, Meadows expects the peak in 2027 to be around 7.47 billion. In addition, Duncan forecasts only 2 billion inhabitants by 2050, while Meadows estimates 6.45 billion inhabitants by 2050.
Other estimates similar to the Olduvai theory predict that the population will reach a zenith around 2025–2030, at between 7.1 and 8 billion inhabitants, and that it will thereafter decrease at the same rate at which it grew before the zenith, describing a symmetric Gaussian bell curve.
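As a purely illustrative way of writing such a symmetric rise and fall (the formula is not taken from these estimates themselves), a bell-shaped population path around a zenith year t_p can be sketched in LaTeX notation as

P(t) \approx P_{\mathrm{peak}} \, \exp\!\left(-\frac{(t - t_p)^{2}}{2\sigma^{2}}\right)

where P_peak is the zenith population (7.1 to 8 billion in these estimates) and \sigma controls how quickly the curve rises before, and falls after, the zenith.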
Scholars, such as Paul Chefurka, point out that the Earth's carrying capacity will be defined by factors such as the level of damage caused to ecosystems during the industrial period (pollution, alterations and even depletion of ecosystems, highly polluting and long-lasting waste, and destruction of resources due to possible competition for them), the development of alternative technologies or oil substitutes, and the existence of knowledge that would allow the survival of the remaining population in a sustainable manner (such as the rescue of traditional ways of life prior to the industrial revolution).
Principle of attractiveness
The formulation of this basis, supported by Jay Forrester's work on the dynamics of complex social systems, proposes that the variables of per capita natural resources and material standard of living are subordinate to the per capita energy yield of oil. This principle holds that attractiveness is the difference in material standard of living between nations. Thus the US material standard of living in 2005 was 57.7 barrels of oil equivalent (BOE) per capita while the material standard of living of the rest of the world was 9.8 BOE per capita, a difference in consumption of 47.9 BOE per capita. Put another way, the huge difference in lifestyle and consumption becomes attractive to immigrants.
The new immigrant, upon arriving in that society, adopts the same consumerist lifestyle, further overloading the system. Duncan argues that the greater the immigration, the larger the population becomes, and the difference in the material standard of living of the attracting country diminishes in an equalizing process until that country reaches the world average material standard of living.
This proposition has already been criticized in several parts of the world, because although Duncan insinuates that borders should be closed, he does not stop to consider that the main cause of resource depletion is the consumerist and predatory lifestyle of these attractive countries (see critiques section).
Return to the use of coal as a primary source
The theory proposes that, due to the predominance of one nation, the rest of the world will follow the same sequence in the adoption of a resource as a primary source. It thus comparatively analyzes a chronology of resource utilization as a primary source in the United States and in the rest of the world:
Utilization of biomass as a primary source.
In the United States until 1886.
In the rest of the world until 1900.
Use of coal as primary source.
In the United States from 1886 to 1951.
In the rest of the world from 1900 to 1963.
Use of oil as primary source.
In the United States from 1951 to 1986.
In the rest of the world from 1963 to 2005.
Return to the use of coal as primary source.
In the United States since 1986.
In the rest of the world from 2005.
According to Duncan, from 2000 to 2005 while world coal production increased by 4.8% per year, oil increased by just 1.6%.
According to Duncan, the return to coal as a primary source, another taboo subject due to its high level of pollution, has, like the Earth's carrying capacity, been muted in the media for obvious political reasons.
Energy consumption of the population
Just as the shift from oil to coal as a primary source in the U.S. is marking global changes in advance, the indicator of the level of per capita energy consumption and production over time in the U.S. is also marking that of the rest of the world. Thus, Duncan distinguishes three stages in U.S. consumption that were subsequently reflected in world consumption:
Growth
1945-1970: U.S. growth stage, average growth of 1.4% per capita energy production per year is observed during the period.
1954-1979: World growth stage, an average growth of 2.8% per capita energy production per year is observed during the period.
Stagnation
1970-1998: U.S. stagnation stage, average decline of 0.6% p.a. of energy production per capita during the period.
1979-2008: A period of global stagnation, an average growth of 0.2% per capita energy production per year is observed during the period, after 2000 an upturn is observed due to the growth of emerging economies.
Final decline or decay
1998 onwards: US final decline stage, an average decline of 1.8% per year of energy production per capita is observed during the period 1998-2005.
2008-2012 onwards: Probable stage of final global decline. The development of emerging economies and the huge coal utilization in China may slow down this process until 2012.
Theory updates
2009 update
After criticism of the discrepancy between the United States per capita energy consumption curve, which tends to decrease, and the world curve, which increased extraordinarily after 2000, Duncan published a 2009 update of his theory in which he compares a curve for the OECD members (30 countries) with the curve for the rest of the non-OECD world (165 countries), which includes Brazil, India, and China.
In this new paper on the various peaks of per capita energy consumption in the world, Duncan concludes the following:
1973: Peak per capita energy in the United States.
2005: Peak energy per capita in OECD countries at around 4.75 tonnes of oil equivalent (toe) per capita.
2008: After the per capita consumption of non-OECD countries had increased by 28% from 2000 to 2007, the composite leading indicator of China, India and Brazil declined sharply in 2008, leading him to conclude that the average standard of living in non-OECD countries has already begun to fall. However, a February 2010 OECD report appears to contradict this claim (see critiques section).
2010: Most likely date of peak energy per capita globally.
In this new scenario, he forecasts that the United States average standard of living, or energy per capita, would fall by 90% between 2008 and 2030, OECD levels would fall by 86%, and the level of non-OECD countries would fall by 60%. The average standard of living in the OECD would converge with the average level of the rest of the world by 2030, standing at 3.53 barrels of oil equivalent per capita.
Societal scenarios according to the theory
Pedro A. Prieto, one of the Spanish-language specialists on the subject, has gone so far as to outline a probable scenario of societal collapse based on aspects of this theory.
Crisis of the Nation-State
Wealthy nations would suffer increased insecurity, and what had been democratic societies would become totalitarian and ultraconservative societies in which the population itself would demand outside resources and increased security. It is possible that, before the great final die-off, large developed nations would fight over scarce resources in a sort of World War III, without ruling out scenarios similar to the final solution or nuclear war. Others argue that such a war, if it were to happen, would be an intercapitalist war involving three blocs of civilizations: the first constituted by Western civilization, the second by the Orthodox and Sinic civilizations, and a third bloc formed by the Islamic civilization. Japan and India would play a major role in such a war as they define their positions.
If some nations survived, lack of resources could trigger famines in large urban centers, forcing widespread looting, and governments would issue decrees and martial law restricting social freedoms and eliminating property rights to keep the starving population at bay. In the face of permanent shortages, governments would impose rationing that would fall short of the required minimums, which would lead the very forces imposing order to plunder for their own profit; this would be the first symptom of the fading of states.
In a major economic crisis, the value of fiat money could plummet, and people could end up in a situation where a necessity like a loaf of bread might be worth as much as something far more extravagant. The dominant minorities and military forces would plunder for themselves, forming small dictatorships and kingdoms within what were once great nations. The "great masses of the disinherited", on the other hand, would form highly unstable, disorganized groups acting violently and chaotically to take scarce resources. Between the two, conflict would be inevitable, and in the end both would succumb like the rest of the population.
Survivor's profile
It is estimated that cities with more than twenty thousand inhabitants would be very unstable. The best life expectancy would belong, in the first place, to hunter-gatherer societies in the Amazon, the Central African jungles and Southeast Asia, the Bushmen, and the Aboriginal peoples of Australia. In second place would be fairly homogeneous nuclei of three hundred to two thousand inhabitants with an agricultural lifestyle, close to uncontaminated water resources, inaccessible and hundreds of kilometers away from the large cities, from the hordes of starving people those cities would expel, and from the decaying military forces that would engage in looting.
In the end there could also be a huge number of small agricultural villages vying for the few privileged places, with only as many villages surviving as the land's carrying capacity would allow.
Other visions
Pedro A. Prieto himself speculates that war scenarios similar to World War III or other types of destructive war conflicts would be less likely to occur if the social collapse is rapid, such as the one predicted by Olduvai's theory. The difference between scenarios is that the majority of the population, contained in the cities, dies of famine in the rapid collapse, while in the slow collapse the war would spread to the safest areas, ranging from large cities to small, isolated rural communities.
The conjectures of those who opine on the possibility of a post-industrial era are spread across a spectrum ranging from scenarios of rapid and catastrophic social collapse to scenarios of slow and benevolent collapse, and even scenarios where they still envision degrowths with continued welfare.
Catastrophic collapse or die-off
The first group, the pessimists, includes Duncan's Olduvai theory itself and other works such as the die-off or catastrophic collapse proposed by David Price, Reg Morrison and Jay Hanson. They usually invoke several determinisms, such as strong, genetic, and energetic determinism (Leslie A. White's Basic Law of Evolution), to announce an inevitable collapse that will lead to the decomposition of civilized life, ruling out the possibility of a peaceful decline.
Smooth decline or "prosperous way down"
Among those who predict slow and benevolent collapse scenarios, with the option of degrowth and continuity of welfare, we can mention the "prosperous way down" of Elisabeth and Howard T. Odum, the end of suburbanization and the return to ruralization proposed by James Kunstler, the societies that can still choose to save themselves or fail described by Jared Diamond, and Richard Heinberg's "powerdown" option.
Heinberg, in his book "Powerdown: Options and Actions for a Post-Carbon World", proposes four possible paths that nations could take in the face of coal and oil depletion:
"Last one and we're out" or "last one standing": Scenario where there is fierce global competition for the remaining resources.
"Gradual shutdown": Where there is global cooperation in reducing energy use, conservation, sound water management, and global population reduction.
"Denial": Posture in the hopes that some unforeseen element or serendipity will solve the problem (see also black swan theory).
"Life-saving community": Sustainably preparing local areas if the global economic project collapses.
The Renaissance of utopias
These are visions in which collapse is both an outcome and an objective. Just as romanticism and the utopian movements arose in the 19th century, at the beginning of the industrial era, a new hatching of utopian visions is now registered in the face of the predicted collapse of the industrial era. This renaissance advances in the opposite direction to the decline of sociological theories, which can no longer provide adequate solutions to the situation of overshoot.
For Joseph Tainter, a collapsing complex society is suddenly smaller, simpler, less stratified, and with fewer social differences. This situation, according to Theodore Roszak, evokes the utopian dogma of the old environmentalist program of reducing, slowing down, democratizing, and decentralizing.
According to Ernest Garcia, many of these proponents are scientists working in areas ranging from ecology to geology, computer science, biochemistry, and evolutionary genetics, far removed from the study of the social sciences. Among the most palpable recent utopian movements are anarcho-primitivism, deep ecology, and techno-utopias such as transhumanism.
Critiques and positions on the theory
Criticism of the basis of the argument
Criticism of the limit of carrying capacity and population explosion
This forecast also differs from that of a 2004 United Nations report in which estimates of world population development from 1800 to 2300 were calculated, the worst-case scenario being one in which the world population reaches a peak of 7.5 billion between 2035 and 2040, subsequently falling to 7 billion by 2065, 6 billion by 2090 and approximately 5.5 billion by the year 2100.
A report issued in 2011 by the United Nations Population Division states that the world's population officially reached 7 billion on October 31, 2011, and in 2019 the total population was estimated at 7.8 billion people, all contradicting Duncan's estimate that by 2015 there would be around 6.9 billion living humans in the world. Recent years have, however, seen a decline in population growth, although this is due to the increasingly common decision to have fewer children or to forgo parenthood for cultural and social reasons, rather than to the deaths from famine and disease envisaged in the theory. Because of these factors China abolished its one-child policy, and in several places around the world governments offer incentives to have children.
Criticism of the principle of attractiveness
Among the critics who object to particular points of the theory, those who criticize the xenophobic and racist cultural biases reflected above all in the principle of attractiveness stand out. Pedro A. Prieto criticizes the proposal to close borders to immigrants while not closing them to the entry of plundered resources that end up serving high US consumption. Nevertheless, he concludes that the more general tenets of the theory, such as peak oil, the Earth's carrying capacity, and a return to coal as a primary source, are feasible to some degree.
Many of Richard C. Duncan's works have been published by the Social Contract Press, an American publishing house founded by John Tanton and directed by Wayne Lutton. This publishing house is an advocate of birth control and the reduction of immigration, and it emphasizes issues such as culture and the environment, covering everything from the point of view of the political right. Among its most controversial publications is the book The Camp of the Saints by French author Jean Raspail, which has led the publisher to be described by the Southern Poverty Law Center as a "hate group" that "publishes a series of racist works."
Critics of the peak oil estimate
Positions range from the claim that the peak oil theory may be a hoax, as argued by Lindsey Williams (2006), to those of different governments, social organizations or private companies that predict the peak at dates ranging from two years before to forty years after the date proposed by Duncan, and with very different behaviors of the production curve.
The abiogenic petroleum origin theory, proposed since the 19th century, holds that natural petroleum formed in deep carbon deposits, perhaps dating back to the formation of the Earth. This would therefore mean that fossil fuel reserves are more numerous, according to geophysicist Alexander Goncharov of the Carnegie Institution in Washington, who in 2009 simulated the conditions of the mantle with a diamond anvil cell and a laser, creating from methane other molecules such as ethane, propane, butane, molecular hydrogen and graphite. Goncharov says all estimates of the peak to date have been wrong, so belief in peak oil is unreliable, and asserts that oil companies could look for new abiotic deposits.
Criticism of the return to coal
Other observable data also do not correspond with the prediction that coal replaced oil in 2005. According to reports such as the EDRO website, in 2006 oil still represented 35.27% of consumption as a source, while coal represented 28.02%, although the same page admits the increasing use of coal relative to oil. Similarly, BP Global's energy graphing tool shows that within the year 2007 oil consumption had a slight decrease, from 3939.4 Mtoe to 3927.9 Mtoe, while coal consumption during the same period rose from 3194.5 Mtoe to 3303.7 Mtoe.
Critiques on per capita energy consumption
Duncan's articles assume that peak per capita energy was 11.15 BOE per capita per year in 1979, but other data from the U.S. Department of Energy (EIA) show that since that date the figure has risen, reaching 12.12 BOE per capita per year after 2004. This contradicts the theory's postulate that energy per capita does not grow exponentially from 1979 to 2008.
The TheOilDrum.com page argues that a true peak in per capita energy consumption, of around 12.50 BOE per capita per year, was observed between 2004 and 2005, based on data from the United Nations, British Petroleum and the International Energy Agency. These proponents mention that Duncan relied primarily on the per capita energy consumption of oil, with notable omissions of the growth in per capita coal consumption since 2000, attributed to the emergence of the Asian economies, and of the uninterrupted growth of natural gas since 1965.
They point out that the civilizational peak was not in 1979 but at a date after 2004, and they place the duration of industrial civilization between 1950 and 2044. They also add that if other resources are not so dependent on the behavior of oil consumption, the duration of civilization will probably be much longer than a hundred years.
After the reliability of the postulate that the rest of the world was following in the footsteps of the United States in its per capita energy consumption dynamics was challenged, in 2009 he published a new article called "Olduvai's Theory: Towards the Re-Equalization of the World Standard of Living", in which he compared the behavior of world per capita consumption with that of the most developed countries (OECD). In that article, based on a March 2009 OECD report of the composite leading indicator for China, India, and Brazil, he claims that world per capita energy consumption would start to decline; however, a new OECD composite leading indicator report in February 2010 showed a strong recovery, which contradicts Duncan's assertion.
Political and ideological criticisms
Environmentalist criticism
Social ecologists and international organizations such as Greenpeace are more optimistic, pinning their hopes on the alternative energies that neo-Malthusians dismiss, such as geothermal, solar, wind and other energies with low or no pollution, though they reject fusion energy, which they consider potentially polluting. They argue that data such as population growth are presented without taking into account the scenarios opened up by a large number of social and technological changes, such as alternative energies and radical lifestyle changes, which could reduce the effects that the theory predicts. Free-market environmentalists, in contrast, claim that such changes will come about by being forced on consumers through the laws of supply and demand.
Meanwhile, anarcho-primitivists and deep ecologists see this catastrophist scenario as a painful path to which civilization is leading us. Thus, they tend to see civilizational collapse as an inevitable outcome as much as a goal to be reached.
Left-wing criticism
Some libertarians, anarchists, and socialists think that these types of theories are lies or exaggerations that benefit economic speculation, and that their purpose is to sell depleted or scarce resources at higher prices and in easily controllable ways, in order to perpetuate the free market game and the ruling classes.
Jacque Fresco maintains not only that current energy resources are inappropriate, but also that there are other very abundant energy sources that the social elites could not easily control because they cannot be speculated on, since their reserves would be virtually inexhaustible for no less than 4,000 years at the current rate of consumption, and that is counting only geothermal energy.
He has also created The Venus Project in supposed opposition to the current capitalist economic model based on monetary gain.
Some time ago a broad movement arose on the web to scrutinize the movement and, above all, the figure of Jacque Fresco. From its results, a possible fraud in Fresco's actions can be inferred.
In the meantime, authors such as Peter Lindemann and Jeane Manning add that there are a number of alternatives for obtaining and distributing energy freely which, if employed, would end the capitalist model of hoarding its procurement and distribution. This has led them to formulate a conspiracy theory about the suppression of free energy. Prominent among such forms of free energy generation and distribution is the wireless power transfer devised by Nikola Tesla.
In turn, the authors of such arguments about alleged conspiracies see the formulations of peak oil, warmongering ideas, catastrophism, and neo-Malthusianism as part of an elitist agenda.
Right-wing criticism
Cornucopians are libertarians who argue that claims about population growth, resource scarcity and their polluting potential, such as peak oil or the devastating environmental effect of coal, are exaggerations or lies. They argue that the laws of the market would solve such problems if they were real.
The main theses defended by cornucopians are usually optimistic and pragmatic. Meanwhile, others consider them conservative, moralistic, and exclusionary. These theses consist of the following points:
Technological progress equals environmental progress. Environmental deterioration is minimized as technologies appear that use resources cleanly and efficiently.
Anti-environmentalism. They criticize catastrophist positions, such as Olduvai's theory, for being based on inadequate models that produce precarious scenarios that do not portray economic dynamics in their historical perspective. They reject the idea of degrowth because it goes against technological and, in turn, environmental progress.
Technological optimism. Technological progress continually invents energy substitutes before a resource is exhausted. In this way, man since the Neolithic has continually exceeded the previous limits of the Earth's carrying capacity by moving from one technology or energy source to another. Also, the availability and efficiency of land for food production increases with the use of new and efficient technologies such as better agrochemicals, pesticides and genetic manipulation.
Growth is green. Economic growth solves all problems, i.e., it is poverty and not wealth that degrades and misuses the environment.
Reliance on the free market. The creation of new forms of ownership and new markets exerts pressures to switch from one technology or energy source to another through the use of economic speculation. For this reason, Cornucopians do not approve of State intervention.
Abolition of birth control. They argue that for every new mouth that demands resources for its nourishment, a brain and a pair of hands are also born, contributing to technological progress. In other words, contrary to what neo-Malthusians think, population is seen as a resource that far from causing problems solves them.
Defense of resources for their anthropocentric aesthetic value rather than for their future value.
Criticism and national positions
Conservatives, traditionalists and nationalists focus their positions only on short-term benefit from an ethnocentric or anthropocentric point of view, without accounting for adverse effects on the environment; they do not usually deny peak oil or the Olduvai theory outright, but tend to omit some points or all of the theory as a form of institutional denial. This is an easy position to take, and in fact the theory itself predicts that most countries in the world will take this line and move from oil to coal or nuclear power, like the United States or China, without caring about the social or ecological consequences.
An argument in favor of the positions of the various countries, especially China and the United States, is that, while there is a shift from oil to coal, coal is beginning to be used in a non-polluting way through the integrated gasification combined cycle, although its rate of energy return may be lower than using it in a polluting way.
Another argument in favor is the cooperation of China, India, Japan, the United States and Europe in the ITER project to demonstrate the scientific and technological feasibility of nuclear fusion, although participation from some countries has been intermittent.
If fusion energy were possible, the energy potential of the deuterium contained in all the planet's seas, rivers and lakes would be equivalent to approximately 1.068 × 10⁹ times the world's oil reserves in 2009, i.e., each cubic meter of water on land would be equivalent to 150 tonnes of oil in energy content.
At the world consumption rate of 2007 this would equate to an approximate duration of 17.5 billion years of modern industrial civilization before this resource could be exhausted, assuming a constant, non-growing population of 6.5 billion people and no economic growth. In reality the current system is based on economic, productive, demographic, material or energy growth, and this growth rate is usually measured on an annualized basis. For example, at a growth rate of 2% per year energy consumption would double every 34.65 years and, at the end of some 1,220 years, as much energy would have been consumed as is available in all the seas in the form of deuterium for nuclear fusion. At a growth rate of 5% per year all the deuterium would be used up in 488 years, and at a growth rate of 11.4% per year in only 214 years.
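The doubling-time and exhaustion-horizon arithmetic behind such figures can be sketched in a few lines. The following is a generic illustration in Python, assuming discrete annual compounding and a reserve expressed as a multiple of current annual consumption; the exact year counts quoted above depend on the precise assumptions used.

import math

def doubling_time(rate):
    """Years for consumption to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

def exhaustion_horizon(reserve_years, rate):
    """Years until a reserve lasting `reserve_years` at today's consumption
    is used up when consumption instead grows by `rate` each year."""
    return math.log(1 + rate * reserve_years) / math.log(1 + rate)

print(doubling_time(0.02))  # about 35 years at 2% annual growth
for r in (0.02, 0.05, 0.114):
    # A reserve equal to 17.5 billion years of current consumption, as in the text.
    print(r, exhaustion_horizon(17.5e9, r))  # about 990, 420 and 200 years under these assumptions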
Some commentators and several developed countries have opted for the view that global warming is not anthropogenic but of solar origin, seeing the environmentalist warnings as an exaggeration. Other countries, particularly Third World countries, see the depletion theories and the international environmental agreements as measures imposed by the First World to curb their development.
See also
Malthusian catastrophe
Climate change
Doomsday argument
Societal collapse
Notes
References
Bibliography
Futurism
Peak oil
World population
Energy consumption
Energy sources
Electroejaculation
Electroejaculation is a procedure used to obtain semen samples from sexually mature male mammals. The procedure is used for breeding programs and research purposes in various species, as well as in the treatment of ejaculatory dysfunction in human males. This procedure is used frequently with large mammals, particularly bulls and some domestic animals, as well as humans who have certain types of anejaculation. Electroejaculation has also been used for the cryoconservation of animal genetic resources, where semen is stored at low temperatures with the intent of conserving genetic material for future revival.
In the practice of veterinary medicine and animal science, it is common to collect semen from domestic ruminants using electro-ejaculation without sedation or anesthesia. Only in goats is mild sedation sometimes used. Because of the significant skeletal muscle contractions it causes, electroejaculation is not used in stallions — except in rare cases, under general anesthesia.
In humans, electroejaculation is usually carried out under a general anesthetic. An electric probe is inserted into the rectum adjacent to the prostate gland. The probe delivers an AC voltage, usually 12–24 volts sine wave at a frequency of 60 Hz, with a current limited to usually 500 mA, although some devices can generate currents of up to 1 A. The probe is activated for 1–2 seconds, referred to as a stimulus cycle. Ejaculation usually occurs after 2–3 stimulus cycles. Care must be taken when using currents greater than 500 mA, as tissue burns may result due to heating of the probe. The electric current stimulates nearby nerves, resulting in contraction of the pelvic muscles and ejaculation.
Variant names
Rectal electroejaculation (REE)
Trans-rectal electro-ejaculation (TREE)
Application to endangered species conservation
The procedure has been adopted and modified as an assisted reproduction technique for managing endangered species, to ensure the production of offspring from incompatible pairs of animals where artificial insemination is feasible.
Other uses
Electroejaculation may also be used for posthumous sperm retrieval in brain-dead humans.
See also
Artificial vagina#Veterinary use
Cryoconservation of animal genetic resources
Erotic electrostimulation
Frozen bovine semen#How semen is collected
Horse breeding
Semen collection
Vibroejaculation
JoGayle Howard, pioneer in animal electroejaculation techniques
References
External links
Semen collection techniques: the artificial vagina, digital manipulation, and electroejaculation.
Andrology
Reproduction in mammals
Animal breeding
Ejaculation
Artificial insemination
Ejaculation inducing devices
Penile-vaginal intercourse
Penile-vaginal intercourse or vaginal intercourse is a form of penetrative sexual intercourse in human sexuality, in which an erect penis is inserted into a vagina. Synonyms are: vaginal sex, cohabitation, coitus (Latin: coitus per vaginam), (in elegant colloquial language) intimacy, or (poetic) lovemaking. (Some of the synonyms are used for other variants of sexual intercourse as well.) It corresponds to mating or copulation in non-human animals.
Various sex positions can be used. Following insertion, additional stimulation is often achieved through rhythmic pelvic thrusting or a gyration of the hips, among other techniques. The biological imperative is to achieve male ejaculation so that sperm can enter the female reproductive tract and fertilize the egg, thus beginning the next stage in human reproduction, pregnancy.
Biological function
For humans, the desire for sensual pleasure is usually the main motivation, sometimes accompanied by the wish to have a baby or more children. The biological function of vaginal intercourse is human reproduction. During coitus without a condom, sperm enter the vagina, first with the pre-ejaculate and then in a larger amount through male ejaculation.
Sperm swim through the cervix and the uterus into the fallopian tubes of the woman. If they meet a fertilisable egg cell during or after an ovulation, or if an ovulation occurs hours or days later, one sperm can fertilize it. The resulting zygote develops through the early embryonic stages and, in the meantime, migrates from the fallopian tube into the uterus. The implantation (nidation) of the embryo, at this stage of development called a blastocyst, together with the beginning of hCG production, marks the beginning of a pregnancy. Without contraceptives, during a woman's fertile days there is a relatively high probability that conception will follow.
For people who do not want (another) child, contraception has made it possible to separate vaginal intercourse from its biological function of procreation. Worldwide, about 57 per cent of couples with women of reproductive age use modern methods of contraception.
Since there is no mating season (estrus) in humans, the partners can have penile-vaginal intercourse distributed over the menstrual cycle regardless of the time of ovulation, even when the woman is already pregnant and after the menopause. (Desmond Morris: The Naked Ape: A Zoologist's Study of the Human Animal.)
Following the principles of safer sex rules out the reproductive function. Couples who wish to have offspring can avail themselves of the tests for sexually transmitted infections recommended by the WHO, so that after ruling out or treating any detected infection they can have penile-vaginal intercourse without using a condom. (Mein Baby, mein Kids to go: Gesundheits-Check-up bei Kinderwunsch, 2023; Federal Ministry of Health (Germany): Pregnancy check-up and chlamydia screening; WHO: Planning pregnancy and having safe sex.)
Of the current world population of about 8 billion, more than 7.5 billion people were conceived in this way by their biological parents. The growing proportion of people conceived through intrauterine insemination and in vitro fertilisation is still comparatively small.
Legal situation
Vaginal intercourse between private individuals is part of their private sphere. Sexual intercourse between an adult and a young person is generally only permitted after the age of consent in the respective country has been reached, though some countries/jurisdictions have special exceptions to this rule. These exceptions may include when the minor is legally married to the adult or within no more than a specified age gap with the adult. Nowadays intimate intercourse between unmarried teenagers is permitted and common in many countries, but not in Muslim culture. Lack of sexual education about contraception often leads to teenage pregnancy. In many countries, after marriage, the first cohabitation is considered a (sexual) "consummation of marriage". In countries with Sharia, the religious regulations from the Quran, which prohibit any sexual activity with a person to whom one is not married, are a part of the legislation. In every country of the world vaginal intercourse performed without the consent of the other person constitutes rape.
Psychological aspects
A desire for pleasure is a natural motivation for sex in general. Human intimacy favours a pleasurable experience. For people who prefer non-committal sex, emotional closeness plays a lesser role (Jessie Sage: "If you require an emotional connection to feel any sexual connection, you are not alone", pghcitypaper.com, 14 August 2019). In studies, consensual vaginal intercourse has been associated with signs of better physiological and psychological functioning. In women, regular orgasms during vaginal sex correlate positively with passion, love and relationship quality.
In experimental studies in which the hormone levels of men and women were examined, with one group having vaginal intercourse and the other masturbating to orgasm, it was found that in both sexes the increase in prolactin was 400% higher after vaginal intercourse than after masturbation. This is interpreted to mean that vaginal intercourse is physiologically more satisfying. In satisfying relationships, positive effects on health and well-being have been demonstrated. One study (2012) showed a stress-reducing effect for both partners in satisfying relationships, but not in unsatisfactory ones (BKK-web TV Gesundheitsmagazin: Sex im Alter, 26 February 2010). As the precursor to the natural procreation of a new human being, penile-vaginal intercourse carries, for many women, meanings beyond the reproductive function, ranging from psychological to sacramental aspects. Sometimes sex is also driven by motives such as the desire to degrade or punish, or to overcome loneliness and boredom.
In 2006, the WHO reported a worldwide prevalence of between 8% and 21.1% of painful vaginal sex for women.
In a U.S. study, about 30% of women and 7% of men reported pain, for most only mild and of short duration. This study found that a large percentage of Americans do not talk about the pain with their partner.
In a Swedish study of young women aged 18 to 22, as many as 47% reported pain, but they said they did not want to interrupt the sex act. Some pretended to enjoy it instead of giving the man any feedback. The most common reason was that they put the man's pleasure above their own and tended toward submissiveness during sex.
Data from an online survey in the United States suggest that a proportion of men engage in sexual behaviours described as dominant and purposeful, in which they mimic behaviors seen in porn. Unless a woman's pain has a physical cause, it is often related to impatient partner action or lack of open communication. The prevention of sexual disappointment and dyspareunia caused by the behaviour of the male partner is summarized by Betty Dodson in the following words:
“It’s a pleasure to be with a man who is self-assured, confident in his ability to get erect and maintain his erection long enough to enjoy the dance of erotic love. If he’s not a cocksman, he has mastered oral and manual skills. He has a sensitive touch and never hesitates to ask how I like my clitoris touched. He is never in a hurry. Before touching my clitoris, he always applies some kind of lubrication. When entering my vagina, he savours slow penetration.”
Description
In all mammals including humans, penile penetration of the vagina is an instinctual behaviour serving the continuation of the species (Desmond Morris: The Naked Ape). In humans, learned behaviour also plays an important role (sexual scripts). Shere Hite suggested defining the "onset" of intercourse not by penetration but by the covering of the vulva by the penis.
Preparation for vaginal coitus usually involves foreplay in the form of caresses, petting, manual sex and/or oral sex. For the woman, physical sexual arousal and clitoral erection resulting from the foreplay are the prerequisites for the reaction of the intravaginal G-spot.
The sex positions, the pelvic movements of the woman and the man, the speed with which they are performed and the depth of penetration all influence the two arousal curves. Duration can be influenced by positions, by gentler or stronger movements, and by touching erogenous zones with the hands. The need for movement differs individually for both women and men. For women, pelvic floor training and active movements of their pelvis during vaginal intercourse increase the chance of orgasm.
A man's arousal curve usually rises faster, while women need more time. According to a 2005 study, the time from penile insertion to male orgasm, the intravaginal ejaculation latency time (IELT), varies between 0.55 and 44.1 minutes. Sexually experienced men use delaying techniques to give their female partner the time she needs. Most men can learn to intentionally delay their own arousal and orgasm by practising this during masturbation. The reasons why men are faster by nature:
In a man, the glans penis is constantly enveloped by the vagina and continuously stimulated, making it likely that moving in and out will bring him to orgasm relatively soon.
In women, the clitoral glans lies at a distance from the vaginal entrance, often without physical contact during penetration.
The clitoral glans has an essential function in triggering sexual arousal and then orgasm.
For many women, the movement of the penis in the vagina causes only a limited increase in their arousal. Many women reach orgasm when both the extravaginally located parts of the clitoris and the erogenous zones inside the vagina are continuously stimulated simultaneously for long enough. Sexual arousal can increase to the point where one or both partners experience an orgasm, either in succession or simultaneously. The hypothesis of two modes of female orgasm, "vaginal" or "clitoral", is not tenable. Rather, orgasm is a complex reaction in which all organ systems of the human body are involved. Without clitoral stimulation, 23.3% of women reach orgasm during vaginal intercourse; with simultaneous clitoral stimulation, 74% do.
When the man is sitting upright with the woman sitting on his lap, she can rub her clitoris against his pubic bone.
In lateral coital position there are also possibilities for clitoral stimulation while the penis is moving inside. In the Flanquette position, the man can give some pressure with his thigh to her mons pubis and the clitoral glans.
Another variation of vaginal sex is with lesbians who use a single or double-sided dildo.
Injury risks
In a woman with an intact, moist vaginal mucosa, friction by the penis is painless. In case of insufficient vaginal lubrication or excessively prolonged intercourse, the mucous membranes may become sore due to mechanical irritation. If sand gets into the vagina on a beach or in an unclean dwelling, small abrasions occur in the vagina and on the glans penis. A vaginal douche has physiological disadvantages; the sand is expelled by the natural self-cleaning of the mucous membrane.
The length of the stretched vagina varies from person to person. The mean value is 13 cm (±3 cm), which corresponds to the average length of the human penis. At rest, the vagina is considerably shorter: in a study from 1993 the mean value was given as 9.2 cm, and in a study from 2006 only 6.27 cm, with lengths varying between 4.1 and 9.5 cm.
If the woman is not sufficiently aroused, deep penetration can cause the penis to bump against the cervix, causing pain. If the stretching capacity of the vagina is exceeded by an overly large penis, pain and inflammation will result. The same problem can occur with a relatively short vagina. The remedy in both situations is to allow enough time for clitoral stimulation during foreplay and to avoid penetrating too deeply.
A comparative study between women who had consensual vaginal sex and victims of rape found that in consensual sex, 6.9 percent of women had genital injuries. Among women who were raped, 22.8 percent suffered genital injuries.
In men, there is a risk of penile rupture if the penis is bent while erect. This is a medical emergency. According to studies from 2017 and 2022, accidents in which the man suffers a penile fracture occur predominantly in the doggy style position, but a careless movement by the woman on top can also inflict such a serious injury on the man.
One of the causes is the penis slipping out of the vagina and, during the next thrusting movement, forcefully hitting an area of the vulva beneath which lie the pubic bone and the pubic symphysis, causing the penis to bend suddenly downwards. In the missionary position such accidents are rare. Unsuitable angles and changes of position by one or both partners can also lead to severe strain on the penile corpus cavernosum and thus to a penile rupture.
Partnering techniques preferred by women
In 2021, a study of 3017 American women identified the ways women have discovered to make vaginal sex with a male partner more pleasurable and arousing for themselves.
"Angling": 87.5% of women find it pleasurable to circle their pelvis or lift and lower it to control where the penis pushes or rubs and how it feels.
"Rocking": 76% of women find it sexually arousing to have the penis constantly deep inside the vagina without any long in and out movements and to rub their clitoral glans against the base of the penis.
"Shallowing": 84% of women enjoy and respond to "shallow" penetration, i.e. when the tip of the penis moves only in the front part of the vagina (G-spot), but not on the outside or deep inside.
"Pairing": 69.7% of women are most likely to reach orgasm during vaginal intercourse when they or their partner simultaneously stimulate the clitoris with a finger, a vibrator or a Hitachi Magic Wand.
Knowledge of such techniques enables women to communicate their preferences to their partners. Pairing has been used successfully since the 1970s by Betty Dodson in her coaching of women suffering from anorgasmia, using a dildo to penetrate the vagina and a vibrator placed next to the clitoral glans. The other techniques were also part of her coaching for women who wished to experience orgasm during vaginal intercourse as much as their partner.
Physical conditions
Menstrual cramps or hygienic or cultural reasons may lead to abstinence during menstruation. A prerequisite for painless intimate intercourse is vaginal lubrication. In women with vaginal aplasia, a neovagina can be surgically created by vaginoplasty. In men, the prerequisite is a painless penis and the ability to have an erection (Encyclopedia.com: Penetration, 30 March 2023).
A 2002 investigation by the Charité Berlin found that, for women, the partner's smell was the foremost factor in stimulating or inhibiting pleasure, followed by mood, personal hygiene, clitoral stimulation and safety from disease. Attractiveness and penis length played a subordinate role. Women generally respond more to olfactory perception, men more to visual perception.
A variety of factors can lead to discomfort or pain (see dyspareunia). Specialists in gynaecology are responsible for treatment in women; specialists in urology and dermatology are responsible for treatment in men.
For people with physical impairments (disability), sex positions that do not cause discomfort are usually possible. In a study of patients with chronic lumbar spine pain, 81 per cent complained of sexual problems, and 66 per cent never talked about the issue with their physician.
See also
Sexual and reproductive health
Sex education
Safe sex
References
Further reading
William H. Masters, Virginia E. Johnson, Robert C. Kolodny: Heterosexuality. New York; London: HarperCollins, 1994. ISBN 978-0-7225-3027-6.
Sexual acts
Sexology
Sexual intercourse
Sex
Effective altruism
Effective altruism (EA) is a 21st-century philosophical and social movement that advocates impartially calculating benefits and prioritizing causes to provide the greatest good. It is motivated by "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". People who pursue the goals of effective altruism, who are sometimes called effective altruists, follow a variety of approaches proposed by the movement, such as donating to selected charities and choosing careers with the aim of maximizing positive impact. The movement has achieved significant popularity outside of academia, spurring the creation of university-based institutes, research centers, advisory organizations and charities, which, collectively, have donated several hundreds of millions of dollars.
Effective altruists emphasize impartiality and the global equal consideration of interests when choosing beneficiaries. Popular cause priorities within effective altruism include global health and development, social and economic inequality, animal welfare, and risks to the survival of humanity over the long-term future. EA has an especially influential status within animal advocacy.
The movement developed during the 2000s, and the name was coined in 2011. Philosophers influential to the movement include Peter Singer, Toby Ord, and William MacAskill. What began as a set of evaluation techniques advocated by a diffuse coalition evolved into an identity. Effective altruism has strong ties to the elite universities in the United States and Britain, and Silicon Valley has become a key centre for the "longtermist" submovement, with a tight subculture there.
The movement received mainstream attention and criticism with the bankruptcy of the cryptocurrency exchange FTX as founder Sam Bankman-Fried was a major funder of effective altruism causes prior to late 2022. Some in the San Francisco Bay Area criticized what they described as a culture of sexual misconduct.
History
Beginning in the latter half of the 2000s, several communities centered around altruist, rationalist, and futurological concerns started to converge, such as:
The evidence-based charity community centered around GiveWell, including Open Philanthropy, which originally came out of GiveWell Labs but then became independent.
The community around pledging and career selection for effective giving, centered around the Giving What We Can and 80,000 Hours organisations.
The Singularity Institute (now MIRI) for studying the safety of artificial intelligence, the Future of Humanity Institute studying topics such as existential risk, and the LessWrong discussion forum, which focuses on rationalism.
In 2011, Giving What We Can and 80,000 Hours decided to incorporate into an umbrella organization and held a vote for their new name; the "Centre for Effective Altruism" was selected. The Effective Altruism Global conference has been held since 2013. As the movement formed, it attracted individuals who were not part of a specific community, but who had been following the Australian moral philosopher Peter Singer's work on applied ethics, particularly "Famine, Affluence, and Morality" (1972), Animal Liberation (1975), and The Life You Can Save (2009). Singer himself used the term in 2013, in a TED talk titled "The Why and How of Effective Altruism".
Notable philanthropists
An estimated $416 million was donated to effective charities identified by the movement in 2019, representing a 37% annual growth rate since 2015. Two of the largest donors in the effective altruism community, Dustin Moskovitz, who had become wealthy through co-founding Facebook, and his wife Cari Tuna, hope to donate most of their net worth of over $11 billion for effective altruist causes through the private foundation Good Ventures. Others influenced by effective altruism include Sam Bankman-Fried, as well as professional poker players Dan Smith and Liv Boeree. Jaan Tallinn, the Estonian billionaire founder of Skype, is known for donating to some effective altruist causes. Sam Bankman-Fried launched a philanthropic organization called the FTX Foundation in February 2021, and it made contributions to a number of effective altruist organizations, but it was shut down in November 2022 when FTX collapsed.
Notable publications and media
A number of books and articles related to effective altruism have been published that have codified, criticized, and brought more attention to the movement. In 2015, philosopher Peter Singer published The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically. The same year, the Scottish philosopher and ethicist William MacAskill published Doing Good Better: How Effective Altruism Can Help You Make a Difference.
In 2018, American news website Vox launched its Future Perfect section, led by journalist Dylan Matthews, which publishes articles and podcasts on "finding the best ways to do good".
In 2019, Oxford University Press published the volume Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer.
More recent books have emphasized concerns for future generations. In 2020, the Australian moral philosopher Toby Ord published The Precipice: Existential Risk and the Future of Humanity, while MacAskill published What We Owe the Future in 2022.
In 2023, Oxford University Press published the volume The Good it Promises, The Harm it Does: Critical Essays on Effective Altruism, edited by Carol J. Adams, Alice Crary, and Lori Gruen.
Philosophy
Effective altruists focus on the many philosophical questions related to the most effective ways to benefit others. Such philosophical questions shift the starting point of reasoning from "what to do" to "why" and "how". There is no consensus on the answers, and there are also differences between effective altruists who believe that they should do the most good they possibly can with all of their resources and those who only try to do the most good they can within a defined budget.
According to MacAskill, the view of effective altruism as doing the most good one can within a defined budget can be compatible with a wide variety of views on morality and meta-ethics, as well as traditional religious teachings on altruism such as in Christianity. Effective altruism can also be in tension with religion where religion emphasizes spending resources on worship and evangelism instead of causes that do the most good.
Other than Peter Singer and William MacAskill, philosophers associated with effective altruism include Nick Bostrom, Toby Ord, Hilary Greaves, and Derek Parfit. Economist Yew-Kwang Ng conducted similar research in welfare economics and moral philosophy.
The Centre for Effective Altruism lists the following four principles that unite effective altruism: prioritization, impartial altruism, open truthseeking, and a collaborative spirit. To support people's ability to act altruistically on the basis of impartial reasoning, the effective altruism movement promotes values and actions such as a collaborative spirit, honesty, transparency, and publicly pledging to donate a certain percentage of income or other resources.
Impartiality
Effective altruism aims to emphasize impartial reasoning in that everyone's well-being counts equally. Singer, in his 1972 essay "Famine, Affluence, and Morality", wrote:
It makes no moral difference whether the person I can help is a neighbor's child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away ... The moral point of view requires us to look beyond the interests of our own society.
Impartiality combined with seeking to do the most good leads to prioritizing benefits to those who are in a worse state, because anyone who happens to be worse off will benefit more from an improvement in their state, all other things being equal.
Scope of moral consideration
One issue related to moral impartiality is the question of which beings are deserving of moral consideration. Some effective altruists consider the well-being of non-human animals in addition to humans, and advocate for animal welfare issues such as ending factory farming. Those who subscribe to longtermism include future generations as possible beneficiaries and try to improve the moral value of the long-term future by, for example, reducing existential risks.
Criticism of impartiality
The drowning child analogy in Singer's essay provoked philosophical debate. In response to a version of Singer's drowning child analogy, philosopher Kwame Anthony Appiah in 2006 asked whether the most effective action of a man in an expensive suit, confronted with a drowning child, would not be to save the child and ruin his suit—but rather, sell the suit and donate the proceeds to charity. Appiah believed that he "should save the drowning child and ruin my suit". In a 2015 debate, when presented with a similar scenario of either saving a child from a burning building or saving a Picasso painting to sell and donate the proceeds to charity, MacAskill responded that the effective altruist should save and sell the Picasso. Psychologist Alan Jern called MacAskill's choice "unnatural, even distasteful, to many people", although Jern concluded that effective altruism raises questions "worth asking". MacAskill later endorsed a "qualified definition of effective altruism" in which effective altruists try to do the most good "without violating constraints" such as any obligations that someone might have to help those nearby.
William Schambra has criticized the impartial logic of effective altruism, arguing that benevolence arising from reciprocity and face-to-face interactions is stronger and more prevalent than charity based on impartial, detached altruism. Such community-based charitable giving, he wrote, is foundational to civil society and, in turn, democracy. Larissa MacFarquhar said that people have diverse moral emotions, and she suggested that some effective altruists are not unemotional and detached but feel as much empathy for distant strangers as for people nearby. Richard Pettigrew concurred that many effective altruists "feel more profound dismay at the suffering of people unknown to them than many people feel", and he argued that impartiality in EA need not be dispassionate and "is not obviously in tension with much in care ethics" as some philosophers have argued. Ross Douthat of The New York Times criticized the movement's "'telescopic philanthropy' aimed at distant populations" and envisioned "effective altruists sitting around in a San Francisco skyscraper calculating how to relieve suffering halfway around the world while the city decays beneath them", while he also praised the movement for providing "useful rebukes to the solipsism and anti-human pessimism that haunts the developed world today".
Cause prioritization
A key component of effective altruism is "cause prioritization". Cause prioritization is based on the principle of cause neutrality, the idea that resources should be distributed to causes based on what will do the most good, irrespective of the identity of the beneficiary and the way in which they are helped. By contrast, many non-profits emphasize effectiveness and evidence with respect to a single cause such as education or climate change.
One tool that EA-based organizations may use to prioritize cause areas is the importance, tractability and neglectedness (ITN) framework. Importance is the amount of value that would be created if a problem were solved, tractability is the fraction of a problem that would be solved if additional resources were devoted to it, and neglectedness is the quantity of resources already committed to a cause.
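As an illustration only, the three factors can be combined into a rough priority score, with more important, more tractable and more neglected (less-resourced) causes scoring higher. The multiplicative weighting and the figures in the sketch below are assumptions made for this example, not a scoring rule prescribed by any particular organization.

```python
from dataclasses import dataclass

@dataclass
class Cause:
    name: str
    importance: float    # value created if the problem were fully solved (arbitrary units)
    tractability: float  # fraction of the problem solved per doubling of resources
    resources: float     # resources already committed, e.g. millions of dollars per year

    def priority_score(self) -> float:
        # Higher importance and tractability raise the score;
        # more existing resources (a less neglected cause) lower it.
        return self.importance * self.tractability / self.resources

# Hypothetical figures, purely for illustration.
causes = [
    Cause("Cause A", importance=1000, tractability=0.05, resources=500),
    Cause("Cause B", importance=200, tractability=0.30, resources=20),
]
for cause in sorted(causes, key=Cause.priority_score, reverse=True):
    print(f"{cause.name}: priority score {cause.priority_score():.2f}")
```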
The information required for cause prioritization may involve data analysis, comparing possible outcomes with what would have happened under other conditions (counterfactual reasoning), and identifying uncertainty. The difficulty of these tasks has led to the creation of organizations that specialize in researching the relative prioritization of causes.
Criticism of cause prioritization
This practice of "weighing causes and beneficiaries against one another" was criticized by Ken Berger and Robert Penna of Charity Navigator for being "moralistic, in the worst sense of the word" and "elitist". William MacAskill responded to Berger and Penna, defending the rationale for comparing one beneficiary's interests against another and concluding that such comparison is difficult and sometimes impossible but often necessary. MacAskill argued that the more pernicious form of elitism was that of donating to art galleries (and like institutions) instead of charity. Ian David Moss suggested that the criticism of cause prioritization could be resolved by what he called "domain-specific effective altruism", which would encourage "that principles of effective altruism be followed within an area of philanthropic focus, such as a specific cause or geography" and could resolve the conflict between local and global perspectives for some donors.
Cost-effectiveness
Some charities are considered to be far more effective than others, as charities may spend different amounts of money to achieve the same goal, and some charities may not achieve the goal at all. Effective altruists seek to identify interventions that are highly cost-effective in expectation. Many interventions have uncertain benefits, and the expected value of one intervention can be higher than that of another if its benefits are larger, even if it has a smaller chance of succeeding. One metric effective altruists use to choose between health interventions is the estimated number of quality-adjusted life years (QALY) added per dollar.
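The expected-value comparison described above can be made concrete with a toy calculation; all figures in the following sketch are invented for illustration and do not correspond to real interventions or estimates.

```python
def expected_qalys_per_dollar(prob_success, qalys_if_success, cost):
    """Probability-weighted benefit (QALYs) divided by cost in dollars."""
    return prob_success * qalys_if_success / cost

# Hypothetical interventions with the same cost: the long shot has the higher
# expected value despite its lower chance of success.
interventions = {
    "reliable intervention": expected_qalys_per_dollar(0.90, 1_000, 100_000),
    "long-shot intervention": expected_qalys_per_dollar(0.05, 50_000, 100_000),
}
for name, value in sorted(interventions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {value:.4f} expected QALYs per dollar")
```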
Some effective altruist organizations prefer randomized controlled trials as a primary form of evidence, as they are commonly considered the highest level of evidence in healthcare research. Others have argued that requiring this stringent level of evidence unnecessarily narrows the focus to issues where the evidence can be developed. Kelsey Piper argues that uncertainty is not a good reason for effective altruists to avoid acting on their best understanding of the world, because most interventions have mixed evidence regarding their effectiveness.
Pascal-Emmanuel Gobry and others have warned about the "measurement problem", with issues such as medical research or government reform worked on "one grinding step at a time" and results that are hard to measure with controlled experiments. Gobry also argues that such interventions risk being undervalued by the effective altruism movement. Because effective altruism emphasizes a data-centric approach, critics say that principles which do not lend themselves to quantification, such as justice, fairness and equality, are left on the sidelines.
Counterfactual reasoning
Counterfactual reasoning involves considering the possible outcomes of alternative choices. It has been employed by effective altruists in a number of contexts, including career choice. Many people assume that the best way to help others is through direct methods, such as working for a charity or providing social services. However, since there is a high supply of candidates for such positions, it makes sense to compare the amount of good one candidate does to how much good the next-best candidate would do. According to this reasoning, the marginal impact of a career is likely to be smaller than the gross impact.
Differences from utilitarianism
Although EA, like utilitarianism, aims at maximizing the good, EA differs from utilitarianism in a few ways; for example, EA does not claim that people should always maximize the good regardless of the means, and EA does not claim that the good is the sum total of well-being. Toby Ord has described utilitarians as "number-crunching", compared with most effective altruists, whom he called "guided by conventional wisdom tempered by an eye to the numbers". Other philosophers have argued that EA still retains some core ethical commitments that are essential and distinctive to utilitarianism, such as the principle of impartiality, welfarism and good-maximization.
MacAskill has argued that one shouldn't be absolutely certain about which ethical view is correct, and that "when we are morally uncertain, we should act in a way that serves as a best compromise between different moral views". He also wrote that even from a purely consequentialist perspective, "naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct".
Cause priorities
The principles and goals of effective altruism are wide enough to support furthering any cause that allows people to do the most good, while taking into account cause neutrality. Many people in the effective altruism movement have prioritized global health and development, animal welfare, and mitigating risks that threaten the future of humanity.
Global health and development
The alleviation of global poverty and neglected tropical diseases has been a focus of some of the earliest and most prominent organizations associated with effective altruism. Charity evaluator GiveWell was founded by Holden Karnofsky and Elie Hassenfeld in 2007 to address poverty, where they believe additional donations to be the most impactful. GiveWell's leading recommendations include: malaria prevention charities Against Malaria Foundation and Malaria Consortium, deworming charities Schistosomiasis Control Initiative and Deworm the World Initiative, and GiveDirectly for direct cash transfers to beneficiaries. The organization The Life You Can Save, which originated from Singer's book of the same name, works to alleviate global poverty by promoting evidence-backed charities, conducting philanthropy education, and changing the culture of giving in affluent countries.
Animal welfare
Improving animal welfare has been a focus of many effective altruists. Singer and Animal Charity Evaluators (ACE) have argued that effective altruists should prioritize changes to factory farming over pet welfare. 60 billion land animals are slaughtered and between 1 and 2.7 trillion individual fish are killed each year for human consumption.
A number of non-profit organizations have been established that adopt an effective altruist approach toward animal welfare. ACE evaluates animal charities based on their cost-effectiveness and transparency, particularly those tackling factory farming. Faunalytics focuses on animal welfare research. Other animal initiatives affiliated with effective altruism include Animal Ethics' and Wild Animal Initiative's work on wild animal suffering, addressing farm animal suffering with cultured meat, and increasing concern for all kinds of animals. The Sentience Institute is a think tank founded to expand the moral circle to other sentient beings.
Long-term future and global catastrophic risks
The ethical stance of longtermism, emphasizing the importance of positively influencing the long-term future, developed closely in relation to effective altruism. Longtermism argues that "distance in time is like distance in space", suggesting that the welfare of future individuals matters as much as the welfare of currently existing individuals. Given the potentially extremely high number of individuals that could exist in the future, longtermists seek to decrease the probability that an existential catastrophe irreversibly ruins it. Toby Ord has stated that "the people of the future may be even more powerless to protect themselves from the risks we impose than the dispossessed of our own time".
Existential risks, such as dangers associated with biotechnology and advanced artificial intelligence, are often highlighted and the subject of active research. Existential risks have such huge impacts that achieving a very small change in such a risk—say a 0.0001-percent reduction—"might be worth more than saving a billion people today", reported Gideon Lewis-Kraus in 2022, but he added that nobody in the EA community openly endorses such an extreme conclusion.
Organizations that work actively on research and advocacy for improving the long-term future, and have connections with the effective altruism community, are the Future of Humanity Institute at the University of Oxford, the Centre for the Study of Existential Risk at the University of Cambridge, and the Future of Life Institute. In addition, the Machine Intelligence Research Institute is focused on the more narrow mission of managing advanced artificial intelligence.
Approaches
Effective altruists pursue different approaches to doing good, such as donating to effective charitable organizations, using their career to make more money for donations or directly contributing their labor, and starting new non-profit or for-profit ventures.
Donation
Financial donation
Many effective altruists engage in charitable donation. Some believe it is a moral duty to alleviate suffering through donations if other possible uses of those funds do not offer comparable benefits to oneself. Some lead a frugal lifestyle in order to donate more.
Giving What We Can (GWWC) is an organization whose members pledge to donate at least 10% of their future income to the causes that they believe are the most effective. GWWC was founded in 2009 by Toby Ord, who lives on £18,000 ($27,000) per year and donates the balance of his income. In 2020, Ord said that people had donated over $100 million to date through the GWWC pledge.
Founders Pledge is a similar initiative, founded out of the non-profit Founders Forum for Good, whereby entrepreneurs make a legally binding commitment to donate a percentage of their personal proceeds to charity in the event that they sell their business. As of April 2024, nearly 1,900 entrepreneurs had pledged around $10 billion and nearly $1.1 billion had been donated.
Organ donation
EA has been used to argue that humans should donate organs, whilst alive or after death, and some effective altruists do.
Career choice
Effective altruists often consider using their career to do good, both by direct service and indirectly through their consumption, investment, and donation decisions. 80,000 Hours is an organization that conducts research and gives advice on which careers have the largest positive impact.
Earning to give
Founding effective organizations
Some effective altruists start non-profit or for-profit organizations to implement cost-effective ways of doing good. On the non-profit side, for example, Michael Kremer and Rachel Glennerster conducted randomized controlled trials in Kenya to find out the best way to improve students' test scores. They tried new textbooks and flip charts, as well as smaller class sizes, but found that the only intervention that raised school attendance was treating intestinal worms in children. Based on their findings, they started the Deworm the World Initiative. From 2013 to August 2022, GiveWell designated Deworm the World (now run by nonprofit Evidence Action) as a top charity based on their assessment that mass deworming is "generally highly cost-effective"; however, there is substantial uncertainty about the benefits of mass deworming programs, with some studies finding long-term effects and others not. The Happier Lives Institute conducts research on the effectiveness of cognitive behavioral therapy (CBT) in developing countries; Canopie develops an app that provides cognitive behavioural therapy to women who are expecting or postpartum; Giving Green analyzes and ranks climate interventions for effectiveness; the Fish Welfare Initiative works on improving animal welfare in fishing and aquaculture; and the Lead Exposure Elimination Project works on reducing lead poisoning in developing countries.
Incremental versus systemic change
While much of the initial focus of effective altruism was on direct strategies such as health interventions and cash transfers, more systematic social, economic, and political reforms have also attracted attention. Mathew Snow in Jacobin wrote that effective altruism "implores individuals to use their money to procure necessities for those who desperately need them, but says nothing about the system that determines how those necessities are produced and distributed in the first place". Philosopher Amia Srinivasan criticized William MacAskill's Doing Good Better for a perceived lack of coverage of global inequality and oppression, while noting that effective altruism is in principle open to whichever means of doing good is most effective, including political advocacy aimed at systemic change. Srinivasan said, "Effective altruism has so far been a rather homogeneous movement of middle-class white men fighting poverty through largely conventional means, but it is at least in theory a broad church." Judith Lichtenberg in The New Republic said that effective altruists "neglect the kind of structural and political change that is ultimately necessary". An article in The Ecologist published in 2016 argued that effective altruism is an apolitical attempt to solve political problems, describing the concept as "pseudo-scientific". The Ethiopian-American AI scientist Timnit Gebru has condemned effective altruists "for acting as though their concerns are above structural issues as racism and colonialism", as Gideon Lewis-Kraus summarized her views in 2022.
Philosophers such as Susan Dwyer, Joshua Stein, and Olúfẹ́mi O. Táíwò have criticized effective altruism for furthering the disproportionate influence of wealthy individuals in domains that should be the responsibility of democratic governments and organizations.
Arguments have been made that movements focused on systemic or institutional change, for example democratization, are compatible with effective altruism. Philosopher Elizabeth Ashford posits that people are obligated to both donate to effective aid charities and to reform the structures that are responsible for poverty. Open Philanthropy has given grants for progressive advocacy work in areas such as criminal justice, economic stabilization, and housing reform, despite pegging the success of political reform as being "highly uncertain".
Psychological research
Researchers in psychology and related fields have identified psychological barriers to effective altruism that can cause people to choose less effective options when they engage in altruistic activities such as charitable giving.
Other criticism and controversies
While originally the movement leaders were associated with frugal lifestyles, the arrival of big donors, including Bankman-Fried, led to more spending and opulence, which seemed incongruous to the movement's espoused values. In 2022, Effective Ventures Foundation purchased the estate of Wytham Abbey for the purpose of running workshops, but put it up for sale in 2024.
Timnit Gebru claimed that effective altruism has acted to overrule any other concerns regarding AI ethics (e.g. deepfake porn, algorithmic bias), in the name of either preventing or controlling artificial general intelligence. She and Émile P. Torres further assert that the movement belongs to a network of interconnected movements they've termed TESCREAL, which they contend serves as intellectual justification for wealthy donors to shape humanity's future.
Sam Bankman-Fried
Sam Bankman-Fried, the eventual founder of the cryptocurrency exchange FTX, had a seminal lunch with philosopher William MacAskill in 2012 while he was an undergraduate at MIT in which MacAskill encouraged him to go earn money and donate it, rather than volunteering his time for causes. Bankman-Fried went on to a career in investing and around 2019 became more publicly associated with the effective altruism movement, announcing that his goal was to "donate as much as [he] can". Bankman-Fried founded the FTX Future Fund, which brought on MacAskill as one of its advisers, and which made a $13.9 million grant to the Centre for Effective Altruism where MacAskill holds a board role.
After the collapse of FTX in late 2022, the movement underwent additional public scrutiny. Bankman-Fried's relationship with effective altruism has been called into question as a public relations strategy, while the movement's embrace of him proved damaging to its reputation. Some journalists asked whether the effective altruist movement was "complicit" in FTX's collapse, because it was convenient for leaders to overlook specific warnings about Bankman-Fried's behavior or questionable ethics at the trading firm Alameda. Fortune's crypto editor Jeff John Roberts said that "Bankman-Fried and his cronies professed devotion to 'EA,' but all their high-minded words turned out to be flimflam to justify robbing people".
However, several leaders of the effective altruism movement, including William MacAskill and Robert Wiblin, condemned FTX's actions. MacAskill reemphasized that bringing about good consequences does not justify violating rights or sacrificing integrity.
Philosopher Leif Wenar argued that Bankman-Fried's conduct typified much of the movement by focusing on positive impacts and expected value without adequately weighing risk and harm from philanthropy. He argued that the FTX case is not separable, as some in the EA community maintained, from the assumptions and reasoning that molded effective altruism as a philosophy in the first place and that Wenar considered to be simplistic. Philosophers Richard Pettigrew and Richard Yetter Chappell were among those who responded to Wenar with defenses of EA.
Sexual misconduct accusations
Critiques arose not only in relation to Bankman-Fried's role and his close association with William MacAskill, but also concerning issues of exclusion and sexual harassment. A 2023 Bloomberg article featured some members of the effective altruism community who alleged that the philosophy masked a culture of predatory behavior. In a 2023 Time magazine article, seven women reported misconduct and controversy in the effective altruism movement. They accused men within the movement, typically in the Bay Area, of using their power to groom younger women for polyamorous sexual relationships. The accusers argued that the majority male demographic and the polyamorous subculture combined to create an environment where sexual misconduct was tolerated, excused or rationalized away. In response to the accusations, the Centre for Effective Altruism told Time that some of the alleged perpetrators had already been banned from the organization and said it would investigate new claims. The organization also argued that it is challenging to discern to what extent sexual misconduct issues were specific to the effective altruism community or reflective of broader societal misogyny.
Other prominent people
Businessman Elon Musk spoke at an effective altruism conference in 2015. He described MacAskill's 2022 book What We Owe the Future as "a close match for my philosophy", but has not officially joined the movement. An article in The Chronicle of Philanthropy argued that the record of Musk's substantive alignment with effective altruism was "choppy", and Bloomberg News noted that his 2021 charitable contributions showed "few obvious signs that effective altruism... impacted Musk's giving."
Actor Joseph Gordon-Levitt has publicly stated he would like to bring the ideas of effective altruism to a broader audience.
Sam Altman, the CEO of OpenAI, has called effective altruism an "incredibly flawed movement" that shows "very weird emergent behavior". Effective altruist concerns about AI risk were present among the OpenAI board members who fired Altman in November 2023; he has been reinstated as CEO and the Board membership has changed.
See also
"The Gospel of Wealth"Article written by Andrew Carnegie
Notes and references
Further reading
An article based on the preface and first chapter of Singer's book The Most Good You Can Do was published in the Boston Review on July 1, 2015, with a forum of responses by other writers and a final response by Singer.
External links
EffectiveAltruism.org, an online introduction and resource compilation on effective altruism
Multiple choice
Multiple choice (MC), objective response or MCQ (for multiple choice question) is a form of objective assessment in which respondents are asked to select only the correct answer from the choices offered as a list. The multiple choice format is most frequently used in educational testing, in market research, and in elections, when a person chooses between multiple candidates, parties, or policies.
Although E. L. Thorndike developed an early scientific approach to testing students, it was his assistant Benjamin D. Wood who developed the multiple-choice test. Multiple-choice testing increased in popularity in the mid-20th century when scanners and data-processing machines were developed to check the results. Christopher P. Sole created the first multiple-choice examinations for computers on a Sharp MZ 80 computer in 1982. They were developed to help people with dyslexia cope with agricultural subjects, as Latin plant names can be difficult to understand and write.
Nomenclature
Single Best Answer (SBA or One Best Answer) is a written examination form of MCQ used extensively in medical education. This form, from which the candidate must choose the best answer, has been distinguished from Single Correct Answer forms, which can produce confusion where more than one of the possible answers has some validity. The SBA form makes it explicit that more than one answer may have elements that are correct, but that one answer will be superior.
Structure
Multiple choice items consist of a stem and several alternative answers. The stem is the opening—a problem to be solved, a question asked, or an incomplete statement to be completed. The options are the possible answers that the examinee can choose from, with the correct answer called the key and the incorrect answers called distractors. Only one answer may be keyed as correct. This contrasts with multiple response items in which more than one answer may be keyed as correct.
Usually, a correct answer earns a set number of points toward the total mark, and an incorrect answer earns nothing. However, tests may also award partial credit for unanswered questions or penalize students for incorrect answers, to discourage guessing. For example, the SAT Subject tests remove a quarter point from the test taker's score for an incorrect answer.
For advanced items, such as an applied knowledge item, the stem can consist of multiple parts. The stem can include extended or ancillary material such as a vignette, a case study, a graph, a table, or a detailed description which has multiple elements to it. Anything may be included as long as it is necessary to ensure the utmost validity and authenticity of the item. The stem ends with a lead-in question explaining how the respondent must answer. In a medical multiple choice item, a lead-in question may ask "What is the most likely diagnosis?" or "What pathogen is the most likely cause?" in reference to a case study that was previously presented.
The items of a multiple choice test are often colloquially referred to as "questions," but this is a misnomer because many items are not phrased as questions. For example, they can be presented as incomplete statements, analogies, or mathematical equations. Thus, the more general term "item" is a more appropriate label. Items are stored in an item bank.
Examples
Ideally, the multiple choice question (MCQ) should be asked as a "stem", with plausible options, for example:
(The correct answers are B, C and A respectively.)
A well written multiple-choice question avoids obviously wrong or implausible distractors (such as the non-Indian city of Detroit being included in the third example), so that the question makes sense when read with each of the distractors as well as with the correct answer.
A more difficult and well-written multiple choice question is as follows:
Advantages
There are several advantages to multiple choice tests. If item writers are well trained and items are quality assured, it can be a very effective assessment technique. If students are instructed on the way in which the item format works and myths surrounding the tests are corrected, they will perform better on the test. On many assessments, reliability has been shown to improve with larger numbers of items on a test, and with good sampling and care over case specificity, overall test reliability can be further increased.
Multiple choice tests often require less time to administer for a given amount of material than would tests requiring written responses.
Multiple choice questions lend themselves to the development of objective assessment items, but without author training, questions can be subjective in nature. Because this style of test does not require a teacher to interpret answers, test-takers are graded purely on their selections, creating a lower likelihood of teacher bias in the results. Factors irrelevant to the assessed material (such as handwriting and clarity of presentation) do not come into play in a multiple-choice assessment, and so the candidate is graded purely on their knowledge of the topic. Finally, if test-takers are aware of how to use answer sheets or online examination tick boxes, their responses can be relied upon with clarity. Overall, multiple choice tests are the strongest predictors of overall student performance compared with other forms of evaluations, such as in-class participation, case exams, written assignments, and simulation games.
Prior to the widespread introduction of SBAs into medical education, the typical form of examination was true-false questions. But during the 2000s, educators found that SBAs would be superior.
Disadvantages
The most serious disadvantage is the limited types of knowledge that can be assessed by multiple choice tests. Multiple choice tests are best adapted for testing well-defined or lower-order skills. Problem-solving and higher-order reasoning skills are better assessed through short-answer and essay tests. However, multiple choice tests are often chosen, not because of the type of knowledge being assessed, but because they are more affordable for testing a large number of students. This is especially true in the United States, where multiple choice tests are the preferred form of high-stakes testing, and in India, where the number of test-takers is very large.
Another disadvantage of multiple choice tests is possible ambiguity in the examinee's interpretation of the item. Failing to interpret information as the test maker intended can result in an "incorrect" response, even if the taker's response is potentially valid. The term "multiple guess" has been used to describe this scenario because test-takers may attempt to guess rather than determine the correct answer. A free response test allows the test taker to make an argument for their viewpoint and potentially receive credit.
In addition, even if students have some knowledge of a question, they receive no credit for that knowledge if they select the wrong answer and the item is scored dichotomously. However, free response questions may allow an examinee to demonstrate partial understanding of the subject and receive partial credit. Additionally, if more questions on a particular subject area or topic are asked to create a larger sample, then statistically the examinee's level of knowledge for that topic will be reflected more accurately in the number of correct answers and in the final results.
Another disadvantage of multiple choice examinations is that a student who is incapable of answering a particular question can simply select a random answer and still have a chance of receiving a mark for it. A student who guesses randomly on a four-choice question still has a 25 percent chance of getting it correct. It is common practice for students with no time left to give all remaining questions random answers in the hope that they will get at least some of them right. Many exams, such as the Australian Mathematics Competition and the SAT, have systems in place to negate this, by making it no more beneficial to choose a random answer than to give none.
Another system of negating the effects of random selection is formula scoring, in which a score is proportionally reduced based on the number of incorrect responses and the number of possible choices. In this method, the raw score is reduced by w/(c – 1), where w is the number of wrong responses on the test and c is the average number of possible choices for all questions on the test; that is, the penalty is the number of wrong answers divided by one less than the average number of choices per item. All exams scored with the three-parameter model of item response theory also account for guessing. Guessing is usually not a great issue in any case, since the odds of a student receiving significant marks purely by guessing are very low when four or more selections are available.
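As a rough illustration (a hypothetical sketch, not the official scoring code of any particular exam, and assuming every item has the same number of choices), formula scoring can be computed as follows; note that under this correction a student who guesses blindly gains, on average, nothing.

```python
def formula_score(num_correct, num_wrong, choices_per_item):
    """Corrected score R - W/(C - 1); unanswered items neither add nor subtract."""
    return num_correct - num_wrong / (choices_per_item - 1)

# Example: 60 right, 20 wrong, 20 blank on a 100-item, four-choice test.
print(formula_score(60, 20, 4))  # 60 - 20/3 = 53.33...

# Expected value of blind guessing on one four-choice item:
#   expected gain = 1 * (1/4)         = 0.25
#   expected loss = (1/(4-1)) * (3/4) = 0.25
# so the expected net change in score from guessing is zero.
```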
Additionally, questions phrased ambiguously may confuse test-takers. It is generally accepted that multiple choice questions allow for only one answer, where that one answer may encapsulate a collection of the other options (such as "all of the above"). However, some test creators are unaware of this and might expect the student to select multiple answers without being given explicit permission to do so, or without providing a trailing option that encapsulates the intended combination.
Critics such as the philosopher and education proponent Jacques Derrida have said that while the demand for dispensing and checking basic knowledge is valid, there are other means of responding to this need than resorting to crib sheets.
Despite all the shortcomings, the format remains popular because MCQs are easy to create, score and analyse.
Changing answers
The theory that students should trust their first instinct and stay with their initial answer on a multiple choice test is a myth worth dispelling. Researchers have found that although some people believe that changing answers is bad, it generally results in a higher test score. The data across twenty separate studies indicate that the percentage of "right to wrong" changes is 20.2%, whereas the percentage of "wrong to right" changes is 57.8%, nearly triple. Changing from "right to wrong" may be more painful and memorable (Von Restorff effect), but it is probably a good idea to change an answer after additional reflection indicates that a better choice could be made. In fact, a person's initial attraction to a particular answer choice could well derive from the surface plausibility that the test writer has intentionally built into a distractor (or incorrect answer choice). Test item writers are instructed to make their distractors plausible yet clearly incorrect. A test taker's first-instinct attraction to a distractor is thus often a reaction that probably should be revised in light of a careful consideration of each of the answer choices. Some test takers for some examination subjects might have accurate first instincts about a particular test item, but that does not mean that all test takers should trust their first instinct.
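A rough expected-value reading of the figures quoted above (counting each wrong-to-right change as +1 mark, each right-to-wrong change as −1, and the remaining wrong-to-wrong changes as 0) suggests why revising answers tends to pay off on average:

$$\mathbb{E}[\Delta\,\text{score per change}] \approx 0.578\times(+1) + 0.202\times(-1) \approx +0.38\ \text{marks}.$$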
Notable multiple-choice examinations
ACT
AIEEE in India
AP
ASVAB
AMC
Australian Mathematics Competition
CFA
CISSP
CLEP
COMLEX
CLAT
Hong Kong Diploma of Secondary Education
F = ma, leading up to the United States Physics Olympiad
FE
GCE Ordinary Level
GED
GRE
GATE
IB Diploma Programme science subject exams
IIT-JEE
Indonesian National Exam
LSAT
MCAT
Multistate Bar Examination
NCLEX
PLAB for non-EEA medical graduates to practice in the UK
PSAT
SAT
Test of English as a Foreign Language
TOEIC
USMLE
NTSE
NEET(UG) in India
UGC NET in India
UPSC CSE Preliminary in India
UTME University Admission Exam in Nigeria
See also
Concept inventory
Extended matching items
Objective test
Test (student assessment)
Closed-ended question
References
Questionnaire construction
Standardized tests
Cognitive skill
Cognitive skills are skills of the mind, as opposed to other types of skills such as motor skills or social skills. Some examples of cognitive skills are literacy, self-reflection, logical reasoning, abstract thinking, critical thinking, introspection and mental arithmetic. Cognitive skills vary in processing complexity, and can range from more fundamental processes such as perception and various memory functions, to more sophisticated processes such as decision making, problem solving and metacognition.
Specialisation of functions
Cognitive science has provided theories of how the brain works, and these have been of great interest to researchers who work in the empirical fields of brain science. A fundamental question is whether cognitive functions, for example visual processing and language, are autonomous modules, or to what extent the functions depend on each other. Research evidence points towards a middle position, and it is now generally accepted that there is a degree of modularity in aspects of brain organisation. In other words, cognitive skills or functions are specialised, but they also overlap or interact with each other. Deductive reasoning, for example, has been shown to rely on either visual or linguistic processing, depending on the task, although it also has aspects that are distinct from both. All in all, research evidence does not provide strong support for classical models of cognitive psychology.
Cognitive functioning
Cognitive functioning refers to a person's ability to process thoughts. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem-solving. Examples include the verbal, spatial, psychomotor, and processing-speed ability." Cognition mainly refers to things like memory, speech, and the ability to learn new information. The brain is usually capable of learning new skills in the aforementioned areas, typically in early childhood, and of developing personal thoughts and beliefs about the world. Old age and disease may affect cognitive functioning, causing memory loss and trouble thinking of the right words while speaking or writing ("drawing a blank"). Multiple sclerosis (MS), for example, can eventually cause memory loss, an inability to grasp new concepts or information, and depleted verbal fluency.
Humans generally have a high capacity for cognitive functioning once born, so almost every person is capable of learning or remembering. Intelligence is tested with IQ tests and others, although these have issues with accuracy and completeness. In such tests, patients may be asked a series of questions, or to perform tasks, with each measuring a cognitive skill, such as level of consciousness, memory, awareness, problem-solving, motor skills, analytical abilities, or other similar concepts. Early childhood is when the brain is most malleable to orientate to tasks that are relevant in the person's environment.
See also
Adaptive behavior
Adaptive functioning
Intelligence Quotient (IQ)
Cognition
Cognitive Abilities Test
Jungian cognitive functions
Notes
References
NCME - Glossary of Important Assessment and Measurement Terms [cognitive ability]
Cognition
Skills
Cultural sustainability
Cultural sustainability as it relates to sustainable development (or to sustainability), has to do with maintaining cultural beliefs, cultural practices, heritage conservation, culture as its own entity, and the question of whether or not any given cultures will exist in the future. From cultural heritage to cultural and creative industries, culture is both an enabler and a driver of the economic, social, and environmental dimensions of sustainable development. Culture is defined as a set of beliefs, morals, methods, institutions and a collection of human knowledge that is dependent on the transmission of these characteristics to younger generations. Cultural sustainability has been categorized under the social pillar of the three pillars of sustainability, but some argue that cultural sustainability should be its own pillar, due to its growing importance within social, political, environmental, and economic spheres. The importance of cultural sustainability lies within its influential power over the people, as decisions that are made within the context of society are heavily weighed by the beliefs of that society.
Cultural sustainability can be regarded as a fundamental issue, even a precondition to be met on the path towards sustainable development. However, the theoretical and conceptual understanding of cultural sustainability within the general frames of sustainable development remains vague, and consequently the role of culture is poorly reflected in environmental, political, and social policy. The impact of cultural sustainability is determined by investigating the concept of culture in the context of sustainable development, through multidisciplinary approaches and analyses. This means examining the best practices for bringing culture into political and social policy as well as practical domains, and developing means and indicators for assessing the impacts of culture on sustainable development.
Sociopolitical landscapes
Culture has an overwhelming effect on social, economic and political planning, but has not yet been incorporated into social and political policy on a grand scale. However, certain culture-related measures have been implemented through conventions that operate on a global scale. Culture is found everywhere within a society, from the relics of previous generations to the accumulated values of a society. Culture within society can be divided into two equally important subtopics that aid in the description of culturally specific characteristics. These categories, as defined by the United Nations Educational, Scientific and Cultural Organization (UNESCO), are "Material" and "Immaterial". Material objects such as shrines, paintings, buildings, landscapes and other humanistic formations act as a physical representation of the culture in that area. Although they have little social and political utility, they serve as physical landmarks and culturally dependent objects whose meaning is created and maintained within the context of that society. The accumulation of these cultural characteristics is what measures a society's cultural integrity, and these characteristics are inherently capable of transforming political, social and environmental landscapes through the influence that these values and historical remains have on the population.
Little success has come with the implementations of cultural policy within the context of politics due to a lack of empirical information regarding the topic of cultural sustainability. The Immaterial category contains more socially and politically applicable characteristics such as practices, traditions, aesthetics, knowledge, expressions etc. These characteristics embody social and political utility through education of people, housing, social justice, human rights, employment and more. These values contribute to the well-being of a society through the use of collective thinking and ideals i.e. culture. Culture also presents more room for expansion on its effects on a society. Specifically, creativity, respect, empathy, and other practices are being used to create social integration and also to create a sense of "self" in the world.
Convention implementation
Implementation of policy on a global scale has had little success, but enough to show an increasing interest in the topic of cultural sustainability. The conventions that have been implemented have done so on a large scale, involving multiple countries across most continents. UNESCO has been responsible for the vast majority of these conventions, maintaining that cultural sustainability and cultural heritage are a strong cornerstone of society. One of the more relevant conventions, created in 2003, is the Convention for the Safeguarding of the Intangible Cultural Heritage, which proclaims that culture must be protected against all adversarial combatants. This safeguard was implemented with the understanding that culture guarantees sustainability. Implementing policy based on cultural history is becoming a widely discussed subject, and such policy holds that cultures will be able to thrive in the context of both present and future. Conventions made by UNESCO regarding cultural preservation and sustainability centre on the promotion of cultural diversity, meaning multiple cultures and ideals within one grand culture.
Cultural heritage
Cultural memorabilia and artifacts from a culture's history maintain an important role in modern society, as they are kept as relics and shrines in order to remember the stories, knowledge, skills and methods of ancestors and to learn invaluable lessons from the past. Today, cultures use libraries, art exhibits and museums as placeholders for these important objects and other culturally significant artifacts. Not only are these objects revered, but the buildings themselves are oftentimes a symbol of cultural integrity to the community to which they belong. Linking with the other pillars of sustainability, the biggest barrier to cultural sustainability is funding. Economic sustainability relies on a number of systems that aim to ensure economic prosperity by eliminating spending where it is not needed. Cultural buildings such as museums oftentimes fail to receive the funding they need to continue the preservation of culturally significant artifacts.
Human-centered design and cultural collaboration have been popular frameworks for sustainable development in marginalized communities. These frameworks involve open dialogue which entails sharing, debating, and discussing, as well as holistic evaluation of the site of development.
Sustainable tourism
Tourism is a form of travel through which people can venture to different areas of the globe, experience new ways of living, and explore landscapes not native to their country of origin. Tourism is constantly criticized for its impact on social, political and environmental landscapes due to its high volume of mass consumers. Within the realm of tourism, however, there exist more sustainable practices and ideals that are aligned with the idea of cultural sustainability.
Geotourism
Geotourism is a form of tourism which relies heavily upon the sustainability, or even the improvement, of a selected geological location. Serving as an alternative to mass tourism, geotourism was created with the purpose of aiding the sustainable development movement. It is a method which focuses on sustaining culture, ecological preservation and restoration, the welfare of the local populace, and the wildlife in the immediate area. The link between geotourism and cultural sustainability lies in their shared role in maintaining the natural state of the environment, including the social and cultural environment. Preservation of the local culture has been a key element of geotourism from its inception, and through this form of tourism travelers are able to experience the true local culture, lifestyles, and practices of the people native to that region. The scope of geotourism covers many geological features, from wider areas such as mountains or coasts to smaller rock formations. This form of tourism educates travelers about the destinations they visit through ethnographic methods, and also calls upon them to become aware of the footprint they leave on the environment, as well as of social changes that may be harmful to indigenous peoples. Responsibility plays an important role in geotourism by informing travelers of their duty to respect and preserve the local culture. Many countries have adopted this method of tourism, going as far as to establish geotourism sites equipped with guides who discuss matters of importance in that area, such as environmental or cultural concerns. Such countries include:
Many states within the U.S. including California and Arizona
Romania
Norway
Honduras
Geotourism in Honduras includes The Bay Islands, a Caribbean archipelago made up of three main islands, Utila, Roatan, and Guanaja and a few lesser islands and cays located off the north coast of Honduras. These islands have been blessed with stunning natural scenery, highlighted by idyllic beaches, tropical hillsides, and mangrove forests.
Mexico
Geotourism in Mexico includes Puerto Peñasco. This region includes the protected Sea of Cortez and the Pinacate Crater, and offers barren deserts, sacred tribal and Indian lands, fishing zones, estuaries, oyster beds, and vibrant farmland and wine country. Over the past decade, Puerto Peñasco, a former modest fishing village in Sonora situated just 65 kilometers from the US border, has transformed into one of the most rapidly expanding urban areas in Mexico.
Canada
Geotourism in Canada includes Nova Scotia. Visited by explorers and geologists from around the world for centuries, Nova Scotia's geological sites are now recognized for their beauty as much as for their rich history. Nova Scotia's sites include its iconic lighthouses, which are built on rocky precipices that reach out into the sea.
Portugal
Geotourism in Portugal includes the Geopark of Arouca, which in 2010 was officially recognized and joined the Global Network of Geoparks under the auspices of UNESCO. The park is known for its natural, gastronomical, and cultural heritage. This mountainous area, with rivers, natural parks, steep slopes and lush vegetation, covers the entire municipality of Arouca. The granite used to build so many religious and historical monuments, Romanesque chapels and Baroque churches in the region was extracted from its mountains – Freita and Montemuro.
As the practice of sustainability in all its forms (environmental, social, and economic) becomes a more widely discussed topic and gains traction within political spheres, sociologists suggest refining the practices of tourism to fit a mold that is more conducive to sustainability models.
Tazim et al. suggest that the key to sustainable tourism lies within the responsible practice of travelers, but also within the direct participation of locals in tourism practices. Although geotourism appears to be a viable alternative to mass tourism, reducing the footprint of travellers on different parts of the world, criticisms have been made regarding its fairness to the local population. Issues of fair pay and the rights of the local people are the basis of the ethical dilemmas this kind of tourism faces. Tourism has a direct effect on the culture of the local populace, and as such, the focus of sociologists has been how to maintain the local environment (physically, socioculturally, and economically) while at the same time introducing people to a new culture.
See also
Circles of Sustainability
Geotourism
Human rights
References
Sustainability
Sociology of culture
Educational perennialism
Educational perennialism is a normative educational philosophy. Perennialists believe that the priority of education should be to teach principles that have persisted for centuries, not facts. Since people are human, one should teach first about humans, rather than machines or techniques, and about liberal, rather than vocational, topics.
Perennialism appears similar to essentialism but focuses first on personal development, while essentialism focuses first on essential skills. Essentialist curricula tend to be more vocational and fact-based, and far less liberal and principle-based. Both philosophies are typically considered to be teacher-centered, as opposed to student-centered philosophies of education such as progressivism. Teachers associated with perennialism are authors of the Western masterpieces and are open to student criticism through the associated Socratic method.
Secular perennialism
The word "perennial" in secular perennialism suggests something that lasts an indefinite amount of time, recurs again and again, or is self-renewing. Robert Hutchins and Mortimer Adler promoted a universal curriculum based upon the common and essential nature of all human beings and encompassing humanist and scientific traditions. Hutchins and Adler implemented these ideas with great success at the University of Chicago, where they still strongly influence the Undergraduate Common Core. Other notable figures in the movement include Stringfellow Barr and Scott Buchanan (who together initiated the Great Books program at St. John's College in Annapolis, Maryland), Mark Van Doren, Alexander Meiklejohn, and Sir Richard Livingstone, an English classicist with an American following. Inspired by Adler's lectures, Sister Miriam Joseph wrote a textbook on the scholastic trivium and taught it as the Freshman seminar at Saint Mary's College.
Secular perennialists espouse the idea that education should focus on the historical development of a continually advancing common orienting base of human knowledge and art, the timeless value of classic thought on central human issues by landmark thinkers, and revolutionary ideas critical to historical paradigm shifts or changes in world view. A program of studies which is highly general, nonspecialized, and nonvocational is advocated. They firmly believe that exposure of all people to the development of thought by those most responsible for the evolution of the occidental oriented tradition is integral to the survival of the freedoms, human rights, and responsibilities inherent to a true democracy.
Adler states: ... our political democracy depends upon the reconstitution of our schools. Our schools are not turning out young people prepared for the high office and the duties of citizenship in a democratic republic. Our political institutions cannot thrive, they may not even survive, if we do not produce a greater number of thinking citizens, from whom some statesmen of the type we had in the 18th century might eventually emerge. We are, indeed, a nation at risk, and nothing but radical reform of our schools can save us from impending disaster... Whatever the price... the price we will pay for not doing it will be much greater.
Hutchins writes in the same vein: The business of saying ... that people are not capable of achieving a good education is too strongly reminiscent of the opposition of every extension of democracy. This opposition has always rested on the allegation that the people were incapable of exercising the power they demanded. Always the historic statement has been verified: you cannot expect the slave to show the virtues of the free man unless you first set him free. When the slave has been set free, he has, in the passage of time, become indistinguishable from those who have always been free ... There appears to be an innate human tendency to underestimate the capacity of those who do not belong to "our" group. Those who do not share our background cannot have our ability. Foreigners, people who are in a different economic status, and the young seem invariably to be regarded as intellectually backward ...
As with the essentialists, perennialists are educationally conservative in the requirement of a curriculum focused upon fundamental subject areas, but they stress that the overall aim should be exposure to history's finest thinkers as models for discovery. The student should be taught such basic subjects as English, languages, history, mathematics, natural science, philosophy, and fine arts. Adler states: "The three R's, which always signified the formal disciplines, are the essence of liberal or general education."
Secular perennialists agree with progressivists that memorization of vast amounts of factual information and a focus on second-hand information in textbooks and lectures does not develop rational thought. They advocate learning through the development of meaningful conceptual thinking and judgement by means of a directed reading list of the profound, aesthetic, and meaningful great books of the Western canon. These books, secular perennialists argue, are written by the world's finest thinkers, and cumulatively comprise the "Great Conversation" of humanity with regard to the central human questions. Their basic argument for the use of original works (abridged translations being acceptable as well) is that these are the products of "genius". Hutchins remarks:
Great books are great teachers; they are showing us every day what ordinary people are capable of. These books come out of ignorant, inquiring humanity. They are usually the first announcements for success in learning. Most of them were written for, and addressed to, ordinary people.
The Great Conversation is not static but, along with the set of related great books, changes as the representative thought of man changes or progresses. In this way, it seeks to represent an evolution of thought not based upon the latest cultural fads. Hutchins clarifies this:
In the course of history... new books have been written that have won their place in the list. Books once thought entitled to belong to it have been superseded; and this process of change will continue as long as men can think and write. It is the task of every generation to reassess the tradition in which it lives, to discard what it cannot use, and to bring into context with the distant and intermediate past the most recent contributions to the Great Conversation. ...the West needs to recapture and reemphasize and bring to bear upon its present problems the wisdom that lies in the works of its greatest thinkers and in the name of love
Perennialism was proposed in response to what many considered a failing educational system. Again Hutchins writes:
The products of American high schools are illiterate; and a degree from a famous college or university is no guarantee that the graduate is in any better case. One of the most remarkable features of American society is that the difference between the "uneducated" and the "educated" is so slight.
In this regard John Dewey and Hutchins were in agreement. Hutchins's book The Higher Learning in America deplored the "plight of higher learning" that had turned away from cultivation of the intellect and toward anti-intellectual practicality due, in part, to a lust for money. In a highly negative review of the book, Dewey wrote a series of articles in The Social Frontier which began by applauding Hutchins' attack on "the aimlessness of our present educational scheme".
Perennialists believe in reading being supplemented by mutual investigations involving both teacher and student and minimally-directed discussions through the Socratic method in order to develop a historically oriented understanding of concepts. They argue that accurate, independent reasoning distinguishes the developed or educated mind and stress the development of this faculty. A skilled teacher keeps discussions on topic, corrects errors in reasoning, and accurately formulates problems within the scope of texts being studied but lets the class reach their own conclusions.
Perennialists argue that many of the historical debates and the development of ideas presented by the great books are relevant to any society at any time, making them suitable for instructional use regardless of their age. They acknowledge disagreement between various great books but believe that the student must learn to recognize these disagreements, think about them, and reach a reasoned, defensible conclusion. This is a major goal of the Socratic discussions.
Religious perennialism
Perennialism was originally religious in nature, developed first by Thomas Aquinas in the thirteenth century in his work De magistro (On the Teacher).
In the nineteenth century, John Henry Newman presented a defense of religious perennialism in The Idea of a University. Discourse 5 of that work, "Knowledge Its Own End", is a recent statement of a Christian educational perennialism.
There are several epistemological options, which affect the pedagogical options. The possibilities may be surveyed by considering four extreme positions - idealistic rationalism, idealistic fideism, realistic rationalism and realistic fideism.
Teaching pupils to think critically and rationally is the main objective of perennialist educators. A perennialist classroom seeks to be a highly structured and disciplined setting that fosters in pupils a never-ending search for the truth.
Colleges exemplifying this philosophy
Reed College in Portland, Oregon is a well-known secular liberal arts college which requires a year-long humanities course covering ancient Greek and Roman literature, history, art, religion, and philosophy. Students may pursue an optional extension to this core curriculum in later years.
St. John's College (Annapolis/Santa Fe) in Annapolis, Maryland and Santa Fe, New Mexico is a secular liberal arts college with an undergraduate program described as "an all-required course of study based on the great books of the Western tradition".
The Core Curriculum of Columbia College of Columbia University, is another well-known example of educational perennialism.
The University of Chicago's Common Core, established by Mortimer Adler and Robert Maynard Hutchins is another well-known example of educational perennialism. Similar to Columbia College of Columbia University, it is an uncommon example of an educational perennialistic college within a large research institution.
The Integral Program at Saint Mary's College of California is a Great Books major at the Lasallian Catholic liberal arts college in Moraga, California. The program was designed with the assistance of faculty from St. John's College, U.S.
Thomas Aquinas College in Santa Paula, California is a Catholic Christian college with a Great Books curriculum. The college was founded by a group of graduates and professors of the Integral Program at Saint Mary's College of California, who were discouraged by the liberalism that became commonplace among the faculty and administration on Saint Mary's campus shortly after Vatican II.
Gutenberg College in Eugene, Oregon provides "a broad-based liberal arts education in a Protestant Christian environment", with a "great books" curriculum emphasizing "the development of basic learning skills (reading, writing, mathematics, and critical thinking) and the application of these skills to profound writings of the past".
Shimer College in Chicago grants a Bachelor of Arts to students who complete a program composed of humanities, social sciences, natural sciences, integrative studies and a capstone senior thesis.
The Torrey Honors Institute at Biola University is a Christian Great Books program.
George Wythe University in Cedar City, Utah, is an unaccredited liberal arts school.
Thomas More College in Merrimack, New Hampshire is a Catholic college with an integrated liberal arts curriculum. The program includes poetry, folk art, and wood guilds. The college also offers a Rome Semester, when students have the chance to study Ancient and Medieval Art & Architecture.
The Great Books Program at Benedictine College is an example of perennialism, teaching ancient, medieval, renaissance, and modern works from the Western canon with an emphasis on Catholicism.
See also
Philosophy of Education
Education reform
Aristotelianism
Thomism
Paideia Proposal, a reform plan initiated by Adler for public schools
References
External links
Searle, John. "The Storm Over the University". The New York Review of Books. December 6, 1990.
Philosophy of education
Liberal arts education
Controversy
Controversy is a state of prolonged public dispute or debate, usually concerning a matter of conflicting opinion or point of view. The word was coined from the Latin controversia, as a composite of controversus – "turned in an opposite direction".
Legal
In the theory of law, a controversy differs from a legal case; while legal cases include all suits, criminal as well as civil, a controversy is a purely civil proceeding.
For example, the Case or Controversy Clause of Article Three of the United States Constitution (Section 2, Clause 1) states that "the judicial Power shall extend ... to Controversies to which the United States shall be a Party". This clause has been deemed to impose a requirement that United States federal courts may not hear cases that do not pose an actual controversy—that is, an actual dispute between adverse parties which is capable of being resolved by the court. In addition to setting out the scope of the jurisdiction of the federal judiciary, it also prohibits courts from issuing advisory opinions, or from hearing cases that are either unripe, meaning that the controversy has not arisen yet, or moot, meaning that the controversy has already been resolved.
Benford's law
Benford's law of controversy, as expressed by the astrophysicist and science fiction author Gregory Benford in 1980, states: Passion is inversely proportional to the amount of real information available. In other words, it claims that the less factual information is available on a topic, the more controversy can arise around that topic – and the more facts are available, the less controversy can arise. Thus, for example, controversies in physics would be limited to subject areas where experiments cannot be carried out yet, whereas controversies would be inherent to politics, where communities must frequently decide on courses of action based on insufficient information.
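Stated symbolically (an informal rendering of the aphorism rather than a quantitative model):

$$\text{Passion} \;\propto\; \frac{1}{\text{Available real information}}$$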
Psychological bases
Controversies are frequently thought to be a result of a lack of confidence on the part of the disputants – as implied by Benford's law of controversy, which only talks about lack of information ("passion is inversely proportional to the amount of real information available"). For example, in analyses of the political controversy over anthropogenic climate change, which is exceptionally virulent in the United States, it has been proposed that those who are opposed to the scientific consensus do so because they don't have enough information about the topic. A study of 1540 US adults found instead that levels of scientific literacy correlated with the strength of opinion on climate change, but not with which side of the debate the respondents stood on.
The puzzling phenomenon of two individuals being able to reach different conclusions after being exposed to the same facts has been frequently explained (particularly by Daniel Kahneman) by reference to a 'bounded rationality' – in other words, that most judgments are made using fast acting heuristics that work well in every day situations, but are not amenable to decision-making about complex subjects such as climate change. Anchoring has been particularly identified as relevant in climate change controversies as individuals are found to be more positively inclined to believe in climate change if the outside temperature is higher, if they have been primed to think about heat, and if they are primed with higher temperatures when thinking about the future temperature increases from climate change.
In other controversies – such as that around the HPV vaccine, the same evidence seemed to license inference to radically different conclusions. Kahan et al. explained this by the cognitive biases of biased assimilation and a credibility heuristic.
Similar effects on reasoning are also seen in non-scientific controversies, for example in the gun control debate in the United States. As with other controversies, it has been suggested that exposure to empirical facts would be sufficient to resolve the debate once and for all. In computer simulations of cultural communities, beliefs were found to polarize within isolated sub-groups, based on the mistaken belief that the community had unhindered access to ground truth. Such confidence in the group to find the ground truth is explicable through the success of wisdom-of-the-crowd based inferences. However, if there is no access to the ground truth, as there was not in this model, the method will fail.
Bayesian decision theory allows these failures of rationality to be described as part of a statistically optimized system for decision making. Experiments and computational models in multisensory integration have shown that sensory input from different senses is integrated in a statistically optimal way. In addition, it appears that the kind of inference used to attribute multiple sensory inputs to a single source relies on a Bayesian inference about the causal origin of the sensory stimuli. As such, it appears neurobiologically plausible that the brain implements decision-making procedures that are close to optimal for Bayesian inference.
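A standard textbook formulation of such statistically optimal integration (a general illustration, not necessarily the exact model used in the cited studies) combines two noisy, independent cues $s_1$ and $s_2$ with noise variances $\sigma_1^2$ and $\sigma_2^2$ by inverse-variance weighting, which both pulls the estimate toward the more reliable cue and reduces the overall uncertainty:

$$\hat{s} \;=\; \frac{\sigma_2^{2}\,s_1 + \sigma_1^{2}\,s_2}{\sigma_1^{2} + \sigma_2^{2}}, \qquad \sigma_{\hat{s}}^{2} \;=\; \frac{\sigma_1^{2}\,\sigma_2^{2}}{\sigma_1^{2} + \sigma_2^{2}} \;\le\; \min\!\left(\sigma_1^{2},\, \sigma_2^{2}\right).$$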
Brocas and Carrillo propose a model for making decisions based on noisy sensory inputs: beliefs about the state of the world are modified by Bayesian updating, and decisions are then made once beliefs pass a threshold. They show that this model, when optimized for single-step decision making, produces belief anchoring and polarization of opinions – exactly as described in the global warming controversy context – in that, despite identical evidence being presented, the pre-existing beliefs (or the evidence presented first) have an overwhelming effect on the beliefs formed. In addition, the preferences of the agent (the particular rewards that they value) also cause the beliefs formed to change – this explains the biased assimilation (also known as confirmation bias) shown above. This model allows the production of controversy to be seen as a consequence of a decision maker optimized for single-step decision making, rather than as a result of limited reasoning in the bounded rationality of Daniel Kahneman.
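A minimal sketch of this kind of mechanism (an illustrative toy model, not Brocas and Carrillo's actual formulation): an agent holds a belief that a binary hypothesis is true, updates it with Bayes' rule as noisy signals arrive, and commits to a decision as soon as the belief crosses a threshold. Because the agent stops updating once committed, the evidence seen first can dominate, so two agents given the same total evidence in different orders can reach opposite, entrenched conclusions.

```python
# Toy model: Bayesian belief updating with a decision threshold.
# Signals are 'H' (supports the hypothesis) or 'L' (opposes it); each signal is
# assumed to be correct with probability p. The agent stops once its posterior
# leaves the interval (lo, hi) and keeps that conclusion thereafter.

def decide(signals, prior=0.5, p=0.7, lo=0.1, hi=0.9):
    belief = prior
    for s in signals:
        likelihood_true = p if s == 'H' else 1 - p
        likelihood_false = 1 - p if s == 'H' else p
        belief = (likelihood_true * belief) / (
            likelihood_true * belief + likelihood_false * (1 - belief))
        if belief >= hi:
            return "accept", belief
        if belief <= lo:
            return "reject", belief
    return "undecided", belief

# Identical evidence overall (3 supporting, 3 opposing signals), different order:
print(decide(['H', 'H', 'H', 'L', 'L', 'L']))  # crosses the upper threshold early -> accept
print(decide(['L', 'L', 'L', 'H', 'H', 'H']))  # crosses the lower threshold early -> reject
```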
See also
Argument
Bipartisanship
Dialectic
Misinformation
ProCon.org
Scandal
Third rail (politics)
References
External links
Brian Martin, The Controversy Manual (Sparsnäs, Sweden: Irene Publishing, 2014).
Controversial topics based on machine learning on Wikipedia data
Controversial Today
English words
City
A city is a human settlement of a substantial size. The term "city" has different meanings around the world and in some places the settlement can be very small. Even where the term is limited to larger settlements, there is no universally agreed definition of the lower boundary for their size. In a narrower sense, a city can be defined as a permanent and densely populated place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution.
Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have some significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources.
Meaning
A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory.
National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000, and St Davids, with a population of 1,841.) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas.
The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations.
The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison".
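The quoted definition lends itself to a simple grid-based computation. The sketch below is an illustrative simplification (the official Degree of Urbanisation methodology includes additional rules such as gap filling and smoothing, and the function and test grid here are hypothetical): it finds contiguous 1 km² cells denser than 1,500 inhabitants per km² whose combined population reaches 50,000, and treats each such cluster as an "urban centre".

```python
from collections import deque

def urban_centres(grid, density_threshold=1500, min_population=50000):
    """grid: 2-D list of population counts per 1 km^2 cell (so count == density)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centres = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or grid[r][c] <= density_threshold:
                continue
            # Flood-fill the contiguous cluster of dense cells (4-connectivity).
            queue, cluster, population = deque([(r, c)]), [], 0
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                cluster.append((y, x))
                population += grid[y][x]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and grid[ny][nx] > density_threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if population >= min_population:
                centres.append({"cells": cluster, "population": population})
    return centres

# Tiny toy grid: a dense core surrounded by sparse cells.
toy = [[100,  200,   100,  50],
       [200,  9000,  8000, 100],
       [150,  7000, 30000, 120],
       [100,  9000,  2000,  80]]
print(urban_centres(toy))  # one cluster of six cells, population 65,000 -> an "urban centre"
```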
Etymology
The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis.
In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ásty 'city or town' and ónoma 'name').
Geography
Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth.
Site
Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river.
Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases, such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside that feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal mutually reachable locations.
Center
The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district.
Public space
Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces help mitigate carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks.
Internal structure
The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning.
In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible.
A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley Civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean.
Urban areas
The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary.
Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.)
History
The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization.
The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation.
Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.
Ancient times
Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period.
In the fourth and third millennium BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations.
Among the early Old World cities, Mohenjo-daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms.
The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes.
In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings, and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz.
In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities, and with them brought its principles of urban architecture, design, and society.
In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan; while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; while others like Lima are located nearby ancient Peruvian sites such as Pachacamac.
From 1600 BC, Dhar Tichitt, in the south of present-day Mauritania, presented characteristics suggestive of an incipient form of urbanism. The second place to show urban characteristics in West Africa was Dia, in present-day Mali, from 800 BC. Both Dhar Tichitt and Dia were founded by the same people: the Soninke, who would later also found the Ghana Empire.
Another ancient site, Jenné-Jeno, in what is today Mali, has been dated to the third century BCE. According to Roderick and Susan McIntosh, Jenné-Jeno did not fit into traditional Western conceptions of urbanity as it lacked monumental architecture and a distinctive elite social class, but it should indeed be considered a city based on a functional redefinition of urban development. In particular, Jenné-Jeno featured settlement mounds arranged according to a horizontal, rather than vertical, power hierarchy, and served as a center of specialized production and exhibited functional interdependence with the surrounding hinterland.
More recently, scholars have concluded that the civilization of Djenne-Djenno was likely established by the Mande progenitors of the Bozo people. Their habitation of the site spanned the period from 3rd century BCE to 13th century CE. Archaeological evidence from Jenné-Jeno, specifically the presence of non-West African glass beads dated from the third century BCE to the fourth century CE, indicates that pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa.
Additionally, other early urban centers in West Africa, dated to around 500 CE, include Awdaghust, Kumbi Saleh, the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao.
Middle Ages
In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453.
In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zürich, and Nijmegen became a privileged elite among towns, having won self-governance from their local lord or been granted it by the emperor, under whose immediate protection they were placed. By 1480, those of these cities still part of the empire had become part of the Imperial Estates, governing the empire together with the emperor through the Imperial Diet.
By the 13th and 14th centuries, some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the commercial cities of the Low Countries: Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan.
In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km2 and possibly supporting up to one million people.
West Africa already had cities before the Common Era, but the consolidation of Trans-Saharan trade in the Middle Ages multiplied the number of cities in the region, as well as making some of them very populous, notably Gao (72,000 inhabitants in 800 AD), Oyo-Ile (50,000 inhabitants in 1400 AD, and may have reached up to 140,000 inhabitants in the 18th century), Ile-Ifẹ̀ (70,000 to 105,000 inhabitants in the 14th and 15th centuries), Niani (50,000 inhabitants in 1400 AD) and Timbuktu (100,000 inhabitants in 1450 AD).
Early modern
In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small.
During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound by several laws regarding administration, finances, and urbanism.
Industrial age
The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas.
Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.
Post-industrial age
In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer.
Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites.
Urbanization
Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions, the urban population began its unprecedented growth, both through migration and through demographic expansion. In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents.
Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time, more than half of the world population lives in cities.
Latin America is the most urbanized region, with four-fifths of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia; Mogadishu, Somalia; Xiamen, China; and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South", but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa.
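To give a rough sense of what annual growth rates of 5–8% mean in practice, the short sketch below computes the implied population doubling time. The specific rates and the code itself are illustrative assumptions, not figures taken from the sources cited above.

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years needed for a population to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Illustrative rates only: the fast-growing cities mentioned above are reported
# to grow at roughly 5-8% per year.
for rate in (0.05, 0.08):
    print(f"{rate:.0%} annual growth -> doubles in about {doubling_time(rate):.1f} years")
# 5% -> ~14.2 years, 8% -> ~9.0 years
```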
Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions.
Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground.
Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels.
Government
The local government of cities takes different forms, including prominently the municipality (especially in England, the United States, India, and other former British colonies; legally, the municipal corporation; municipio in Spain and Portugal and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy).
The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city.
Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many other contexts.
Municipal officials may be appointed from a higher level of government or elected locally.
Municipal services
Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968.
Finance
The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial instruments and related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits, beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on the future tax revenues it is expected to yield. Under these circumstances, creditors and consequently city governments place great importance on city credit ratings.
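As a rough illustration of the tax increment financing mechanism described above, the sketch below checks whether the projected growth in property-tax revenue from a redevelopment district could cover the annual payment on a bond issued to fund it. All figures are hypothetical assumptions chosen only to make the mechanism concrete.

```python
# Hypothetical tax increment financing (TIF) sketch: does the projected tax
# "increment" (new revenue above the pre-project baseline) cover bond payments?
base_assessed_value = 100_000_000       # district value before redevelopment (assumed)
projected_assessed_value = 160_000_000  # value expected after the project (assumed)
property_tax_rate = 0.015               # 1.5% annual property tax (assumed)

bond_principal = 8_000_000              # borrowed to finance the project (assumed)
bond_rate = 0.04                        # annual interest rate (assumed)
bond_years = 20

# Standard loan amortization: fixed annual payment on the bond.
annual_payment = bond_principal * bond_rate / (1 - (1 + bond_rate) ** -bond_years)

# The "increment": tax collected on the increase in assessed value.
annual_increment = (projected_assessed_value - base_assessed_value) * property_tax_rate

print(f"annual bond payment:  {annual_payment:,.0f}")
print(f"annual tax increment: {annual_increment:,.0f}")
print("increment covers debt service:", annual_increment >= annual_payment)
```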
Governance
Governance includes government but refers to a wider domain of social control functions implemented by many actors, including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide have led to a shift in perspective on urban governance, away from the "urban regime theory", in which a coalition of local interests functionally governs, toward a theory of outside economic control, widely associated in academia with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners.
The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations.
Urban planning
Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions.
Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation.
The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems.
Society
Social structure
Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods.
Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production.
Economics
Historically, cities have relied on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market.
As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism.
In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables the sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects.
Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration.
According to a scientific model of cities by Professor Geoffrey West, each time a city's population doubles, salaries per capita generally increase by about 15%.
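Taken at face value, this "roughly 15% per doubling" rule describes a superlinear scaling law. The short sketch below works out the exponents such a rule implies; it is a minimal illustration of the arithmetic under that stated assumption, not a reproduction of West's actual model or data.

```python
import math

# If a per-capita quantity (e.g., average salary) rises by ~15% with each
# doubling of city population, it scales as N**delta with delta = log2(1.15).
gain_per_doubling = 1.15                      # assumed 15% gain per doubling
delta = math.log2(gain_per_doubling)          # ~0.20
print(f"per-capita exponent delta ~= {delta:.2f}")

# Equivalently, the total quantity scales superlinearly as N**(1 + delta).
print(f"total-quantity exponent   ~= {1 + delta:.2f}")

# Example: a city 8x larger (three doublings) under this rule.
print(f"8x larger city -> per-capita gain ~= {gain_per_doubling ** 3:.2f}x")  # ~1.52x
```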
Culture and communications
Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change.
Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful.
Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; attract businesses, investors, residents, and tourists; and to create shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city.
Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, was the site of the most recent Summer Olympics, in 2024.
Warfare
Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities.
Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside.
During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space.
Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "counter-value" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces.
Climate change
Infrastructure
Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private.
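The economies-of-scale argument in the preceding paragraph can be made concrete with a toy average-cost calculation: with a large fixed cost and a small marginal cost, the cost per user keeps falling as more users share the network, which is why a single provider can serve the market more cheaply than several. The numbers below are purely illustrative assumptions.

```python
# Toy natural-monopoly arithmetic: average cost falls as more users share a
# network with high fixed cost and low marginal cost (illustrative figures).
fixed_cost = 50_000_000      # building the network (assumed)
marginal_cost = 10           # cost of serving one additional user (assumed)

def average_cost(users: int) -> float:
    return fixed_cost / users + marginal_cost

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9,} users -> average cost per user: {average_cost(users):,.2f}")
# Splitting the same users between two duplicate networks would roughly double
# the fixed cost borne per user, which is the natural-monopoly argument.
```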
Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already.
Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance.
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives.
Utilities
Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace.
Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide.
Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications.
Transportation
Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas.
City streets historically were the domain of horses and their riders and of pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, and especially streetcars, enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown.
Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks.
The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. The economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia.
Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic.
Housing
The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity.
Homeownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home.
Ecology
Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions.
Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. However, in North America, large predators such as coyotes and other large animals like white-tailed deer persist.
Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in the comparable wilderness.
Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C, and at times differences of 5–10 °C have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside.
Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it).
One of the main methods of improving urban ecology is including more urban green spaces in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the city. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city.
A study published in Scientific Reports in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and 59 percent more likely to be in good health than those who had no exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure and applied to men and women of all ages, across different ethnicities and socioeconomic statuses, and even to those with long-term illnesses and disabilities. People who did not get at least two hours – even if they surpassed an hour per week – did not get the benefits. The study adds to a growing body of evidence for the health benefits of nature, and many doctors already give nature prescriptions to their patients. The study did not count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit."
World city system
As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media.
Global city
A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity.
Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems.
Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities.
Large cities feature a great divide between the populations at the two ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including overtime work, low wages, and a lack of workplace safety.
Transnational activity
Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels.
New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes.
Global governance
Cities participate in global governance by various means, including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, the Asian Network of Major Cities 21, the Federation of Canadian Municipalities, the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance.
Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network.
Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest.
South Africa has one of the highest rates of protest in the world. In Pretoria, a South African city, five thousand people took part in a rally to advocate for wage increases that would keep pace with living costs.
United Nations System
The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization.
The Habitat I conference in 1976 adopted the "Vancouver Declaration on Human Settlements" which identifies urban management as a fundamental aspect of development and establishes various principles for maintaining urban habitats.
Citing the Vancouver Declaration, the UN General Assembly in December 1977 authorized the United Nations Commission on Human Settlements and the HABITAT Centre for Human Settlements, intended to coordinate UN activities related to housing and settlements.
The 1992 Earth Summit in Rio de Janeiro resulted in a set of international agreements including Agenda 21 which establishes principles and plans for sustainable development.
The Habitat II conference in 1996 called for cities to play a leading role in this program, which subsequently advanced the Millennium Development Goals and Sustainable Development Goals.
In January 2002 the UN Commission on Human Settlements became an umbrella agency called the United Nations Human Settlements Programme or UN-Habitat, a member of the United Nations Development Group.
The Habitat III conference of 2016 focused on implementing these goals under the banner of a "New Urban Agenda". The four mechanisms envisioned for effecting the New Urban Agenda are (1) national policies promoting integrated sustainable development, (2) stronger urban governance, (3) long-term integrated urban and territorial planning, and (4) effective financing frameworks. Just before this conference, the European Union concurrently approved an "Urban Agenda for the European Union" known as the Pact of Amsterdam.
UN-Habitat coordinates the U.N. urban agenda, working with the UN Environmental Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank.
The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks, including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding.
Representation in culture
Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk.
Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies.
Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967).
Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.
See also
Lists of cities
List of adjectivals and demonyms for cities
Compact city
Eco-cities
Lost city
Megacity
Metropolis
Settlement hierarchy
Urbanization
Walking city
Further reading
Chandler, T. Four Thousand Years of Urban Growth: An Historical Census. Lewiston, NY: Edwin Mellen Press, 1987.
Geddes, Patrick, City Development (1904)
Robson, W.A., and Regan, D.E., ed., Great Cities of the World, (3d ed., 2 vol., 1972)
Thernstrom, S., and Sennett, R., ed., Nineteenth-Century Cities (1969)
Toynbee, Arnold J. (ed), Cities of Destiny, New York: McGraw-Hill, 1967. Pan historical/geographical essays, many images. Starts with "Athens", ends with "The Coming World City-Ecumenopolis".
Weber, Max, The City, 1921. (tr. 1958)
External links
World Urbanization Prospects, Website of the United Nations Population Division (archived 10 July 2017)
Urban population (% of total) – World Bank website based on UN data.
Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data.
Cities
Populated places by type
Types of populated places
Urban geography
Ecophagy
Ecophagy is a term coined by Robert Freitas that means the consumption of an ecosystem. It derives from Greek roots meaning "house" or "habitat" (oikos) and "to eat" (phagein).
Freitas used the term to describe a scenario involving molecular nanotechnology gone awry. In this situation (called the grey goo scenario) out-of-control self-replicating nanorobots consume entire ecosystems, resulting in global ecophagy.
However, the word "ecophagy" is now applied more generally in reference to any event—nuclear war, the spread of monoculture, massive species extinctions—that might fundamentally alter the planet. Scholars suggest that these events might result in ecocide in that they would undermine the capacity of the Earth's biological population to repair itself. Others suggest that more mundane and less spectacular events—the unrelenting growth of the human population, the steady transformation of the natural world by human beings—will eventually result in a planet that is considerably less vibrant, and one that is, apart from humans, essentially lifeless. These people believe that the current human trajectory puts us on a path that will eventually lead to ecophagy.
In the paper in which Freitas coined the term he wrote:
Perhaps the earliest-recognized and best-known danger of molecular nanotechnology is the risk that self-replicating nanorobots capable of functioning autonomously in the natural environment could quickly convert that natural environment (e.g., "biomass") into replicas of themselves (e.g., "nanomass") on a global basis, a scenario usually referred to as the "grey goo problem" but perhaps more properly termed "global ecophagy".
See also
Ecocide
Grey goo
Molecular assembler
References
Philip Ball, The Robot Within, New Scientist, 15 March 2003.
External links
Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations
critical review of the Freitas article in biosafety group
Green Goo - Life In The Era Of Humane Genocide by Nick Szabo
Human Global Ecophagy (Or, How Quickly Can Humans Consume the Earth?)
"Intentional Ecophagy" references
"Nanotechnology Daily News"
Environmental ethics
Environmental disasters
2000 neologisms
Ideal (ethics)
An ideal is a principle or value that one actively pursues as a goal, usually in the context of ethics; one's prioritization of ideals can serve to indicate the extent of one's dedication to each. The belief in ideals is called ethical idealism, and the history of ethical idealism includes a variety of philosophers. In some theories of applied ethics, such as that of Rushworth Kidder, importance is given to such orderings of ideals as a way to resolve disputes. In law, for instance, a judge is sometimes called on to resolve the balance between the ideal of truth, which would advise hearing out all evidence, and the ideal of fairness. Given the complexity of putting ideals into practice, and of resolving conflicts between them, it is not uncommon to see them reduced to dogma. One way to avoid this, according to Bernard Crick, is to have ideals that are themselves descriptive of a process rather than an outcome. His political virtues try to raise the practical habits useful in resolving disputes into ideals of their own. A virtue, in general, is an ideal that one can make a habit.
See also
Dominant culture
Euthyphro dilemma
History of ethical idealism
Self-sufficiency
Social justice
References
External links
Philosophy of life
Concepts in ethics
Biocybernetics
Biocybernetics is the application of cybernetics to biological science disciplines such as neurology and the study of multicellular systems. It plays a major role in systems biology, which seeks to integrate different levels of information to understand how biological systems function. The field of cybernetics itself has origins in biological disciplines such as neurophysiology. Biocybernetics is an abstract science and a fundamental part of theoretical biology, based upon the principles of systemics. As a psychological study, it aims to understand how the human body functions as a biological system and performs complex mental functions such as thought processing, motion, and maintaining homeostasis (PsychologyDictionary.org). Within the field, distinctions are drawn between different kinds of cybernetic groups, such as human societies and social insects like beehives and ant colonies. Worker bees follow the commands of the queen bee (Seeley, 1989), whereas humans, although they often work together, can also separate from the group and think for themselves (Gackenbach, J. 2007). An example from human society is the colonization period, when Great Britain established colonies in North America and Australia. The colonies inherited many traits of the mother country, such as language, customs, and technologies, while also developing characteristics unique to their own regions. This form of societal branching most closely resembles the vegetative reproduction of plants such as vines and grasses, in which the parent plant produces offshoots that spread ever further from the core; once a shoot has produced its own roots and becomes separated from the mother plant, it survives independently as a new plant. In this respect the growth of society is more like that of plants than that of the higher animals we are most familiar with: there is no clear distinction between a parent and its offspring. (Humans themselves are a K-selected species, typically having fewer offspring that are nurtured for longer periods than those of r-selected species.) Superorganisms are also capable of so-called "distributed intelligence", in which individual agents with limited intelligence and information pool their resources to accomplish goals beyond the reach of any individual. This is related to the concept of game theory (Durlauf, S.N., Blume, L.E., 2010), in which individuals and organisms make choices based on the behavior of other players, seeking the most profitable outcome for themselves as individuals rather than for the group.
Terminology
Biocybernetics is a conjoined word from bio (Greek: βίο / life) and cybernetics (Greek: κυβερνητική / controlling-governing). Although the extended form of the word is biological cybernetics, the field is most commonly referred to as biocybernetics in scientific papers.
Early proponents
Early proponents of biocybernetics include Ross Ashby, Hans Drischel, and Norbert Wiener among others. Popular papers published by each scientist are listed below.
Ross Ashby, "Introduction to Cybernetics", 1956
Hans Drischel, "Einführung in die Biokybernetik." 1972
Norbert Wiener, "Cybernetics or Control and Communication in the Animal and the Machine", 1948
Similar fields
Papers and research that delve into topics involving biocybernetics may be found under a multitude of similar names, including molecular cybernetics, neurocybernetics, and cellular cybernetics. Such fields involve disciplines that specify certain aspects of the study of the living organism (for example, neurocybernetics focuses on the study neurological models in organisms).
Categories
Biocybernetics – the study of an entire living organism
Neurocybernetics – cybernetics dealing with neurological models. (Psycho-Cybernetics was the title of a self-help book, and is not a scientific discipline)
Molecular cybernetics – cybernetics dealing with molecular systems (e.g. molecular biology cybernetics)
Cellular cybernetics – cybernetics dealing with cellular systems (e.g. information technology/cell phones or biological cells)
Evolutionary cybernetics – study of the evolution of informational systems (See also evolutionary programming, evolutionary algorithm)
See also
Bioinformatics
Biosemiotics
Computational biology
Computational biomodeling
Medical cybernetics
List of biomedical cybernetics software
References
External links
Max Planck Institute for Biological Cybernetics
Journal "Biological Cybernetics"
Scientific portal on biological cybernetics
UCLA Biocybernetics Laboratory
Cybernetics
Branches of biology | 0.794254 | 0.961773 | 0.763892 |
Economic problem | Economic systems as a type of social system must confront and solve the three fundamental economic problems:
What kinds and quantities of goods shall be produced, "how much and which of alternative goods and services shall be produced?"
How shall goods be produced? "...by whom and with what resources (using what technology)?"
For whom are the goods or services produced? Who benefits? Samuelson rephrased this question as "how is the total of the national product to be distributed among different individuals and families?"
Economic systems solve these problems in several ways: "... by custom and instinct; by command and centralized control (in planned economies)"; and in mixed economies that "use both market signals and government directives to allocate goods and resources." The latter is variously defined as an economic system blending elements of a market economy with elements of a planned economy, free markets with state interventionism, or private enterprise with public enterprise.
Samuelson wrote in Economics, a "canonical textbook" of mainstream economic thought, that "the price mechanism, working through supply and demand in competitive markets, operates to (simultaneously) answer the three fundamental problems in a mixed private enterprise system..." At competitive equilibrium, the value society places on a good is equivalent to the value of the resources given up to produce it (marginal benefit equals marginal cost). This ensures allocative efficiency: the additional value society places on another unit of the good is equal to what society must give up in resources to produce it.
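As a minimal illustration of this price-mechanism argument, the sketch below assumes simple linear marginal-benefit (demand) and marginal-cost (supply) schedules with invented coefficients and solves for the quantity at which the two are equal; it is a toy model, not an empirical one.

```python
# Illustrative linear schedules; the coefficients are assumptions, not data.
def marginal_benefit(q):   # demand side: value society places on one more unit
    return 100 - 2 * q

def marginal_cost(q):      # supply side: resources given up to produce one more unit
    return 20 + 2 * q

# At competitive equilibrium, marginal benefit equals marginal cost:
# 100 - 2q = 20 + 2q  ->  q = 20, price = 60
q_star = (100 - 20) / (2 + 2)
p_star = marginal_benefit(q_star)
print(f"equilibrium quantity = {q_star}, equilibrium price = {p_star}")
assert abs(marginal_benefit(q_star) - marginal_cost(q_star)) < 1e-9
```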
The solution to these problems is important because of the "fundamental fact of economic institutional life" that ...
"The economic problem, "the struggle for subsistence", always has been hitherto primary, most pressing problem of the human race- not only of the human race, but of the whole of the biological kingdom from the beginnings of life in its most primitive forms." -Samuelson, Economics, 11th ed., 1980
Parts of the problem
The economic problem can be divided into three different parts, which are given below.
Problem of allocation of resources
The problem of allocation of resources arises due to the scarcity of resources, and refers to the question of which wants should be satisfied and which should be left unsatisfied. In other words, what to produce and how much to produce. More production of a good implies more resources required for the production of that good, and resources are scarce. These two facts together mean that, if a society decides to increase the production of some good, it has to withdraw some resources from the production of other goods. In other words, more production of a desired commodity can be made possible only by reducing the quantity of resources used in the production of other goods.
The problem of allocation deals with the question of whether to produce capital goods or consumer goods. If the community decides to produce capital goods, resources must be withdrawn from the production of consumer goods. In the long run, however, investment in capital goods augments the production of consumer goods. Thus, both capital and consumer goods are important. The problem is determining the optimal production ratio between the two.
Resources are scarce and it is important to use them as efficiently as possible. Thus, it is essential to know if the production and distribution of national product made by an economy is maximally efficient. The production becomes efficient only if the productive resources are utilized in such a way that any reallocation does not produce more of one good without reducing the output of any other good. In other words, efficient distribution means that redistributing goods cannot make anyone better off without making someone else worse off. (See Pareto efficiency.)
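The trade-off and efficiency ideas above can be sketched with an invented production possibility frontier: any point on the frontier is productively efficient, while a point inside it could be reallocated to produce more of one good without producing less of the other. The frontier shape below is an arbitrary assumption chosen only for illustration.

```python
import math

# Invented production possibility frontier: combinations of capital goods (k)
# and consumer goods (c) that use all resources, c = sqrt(100 - k^2).
def max_consumer_goods(capital_goods):
    return math.sqrt(100 - capital_goods ** 2)

for k in [0, 4, 6, 8, 10]:
    c = max_consumer_goods(k)
    print(f"capital goods = {k:>2}, max consumer goods = {c:.2f}")
# Moving along the frontier (raising k) necessarily lowers c: more capital
# goods can only be produced by giving up some consumer goods. A point such
# as (k=6, c=5) lies inside the frontier (since 5 < 8), so it is not
# efficient in production: more of either good could be produced without
# reducing the output of the other.
```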
The inefficiencies of production and distribution exist in all types of economies. The welfare of the people can be increased if these inefficiencies are ruled out. Some cost must be incurred to remove these inefficiencies. If the cost of removing these inefficiencies of production and distribution is more than the gain, then it is not worthwhile to remove them.
The problem of full employment of resources
In view of the scarce resources, the question of whether all available resources are fully utilized is an important one. A community should achieve maximum satisfaction by using the scarce resources in the best possible manner—not wasting resources or using them inefficiently. There are two types of employment of resources:
Labour-intensive
Capital-intensive
In capitalist economies, however, available resources are not always fully used. In times of depression, many people want to work but cannot find employment. This suggests that scarce resources are not always fully utilized in a capitalist economy.
The problem of economic growth
If productive capacity grows, an economy can produce progressively more goods, which raises the standard of living. The increase in productive capacity of an economy is called economic growth. There are various factors affecting economic growth. The problems of economic growth have been discussed by numerous growth models, including the Harrod–Domar model, the neoclassical growth models of Solow and Swan, and the Cambridge growth models of Kaldor and Joan Robinson. This part of the economic problem is studied in the economics of development.
See also
Full employment
Post-scarcity
Job guarantee
Rivalry (economics)
Overpopulation
Degrowth
Post-growth
Poverty
References
Resource economics
Economic growth
sa:अर्थशास्त्रं | 0.771421 | 0.990221 | 0.763877 |
Open system (systems theory) | An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system.
The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences.
In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter.
The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes.
Social sciences
In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event.
David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has overaccumulated in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the Asian and South-East Asian financial crisis of 1997-8, involving "hedge fund raiding" of national currencies, as examples of this.
Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated system theory to describe society and its components.
The sociology of religion finds both open and closed systems within the field of religion.
Thermodynamics
Systems engineering
See also
Business process
Complex system
Dynamical system
Glossary of systems theory
Ludwig von Bertalanffy
Maximum power principle
Non-equilibrium thermodynamics
Open system (computing)
Open System Environment Reference Model
Openness
Open and Closed Systems in Social Science
Phantom loop
Thermodynamic system
References
Further reading
Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438.
Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronics Engineers.
External links
OPEN SYSTEM, Principia Cybernetica Web, 2007.
Cybernetics
Thermodynamic systems | 0.771656 | 0.989914 | 0.763873 |
International business | International business refers to the trade of goods, services, technology, capital and/or knowledge across national borders and at a global or transnational scale.
It involves cross-border transactions of goods and services between two or more countries. Transactions of economic resources include capital, skills, and people for the purpose of the international production of physical goods and services such as finance, banking, insurance, and construction. International business is also known as globalization.
International business encompasses a myriad of crucial elements vital for global economic integration and growth. At its core, it involves the exchange of goods, services, and capital across national borders. One of its pivotal aspects is globalization, which has significantly altered the landscape of trade by facilitating increased interconnectedness between nations. International business thrives on the principle of comparative advantage, wherein countries specialize in producing goods and services they can produce most efficiently. This specialization fosters efficiency, leading to optimal resource allocation and higher overall productivity. Moreover, international business fosters cultural exchange and understanding by promoting interactions between people of diverse backgrounds. However, it also poses challenges, such as navigating complex regulatory frameworks, cultural differences, and geopolitical tensions. Effective international business strategies require astute market analysis, risk assessment, and adaptation to local customs and preferences. The role of technology cannot be overstated, as advancements in communication and transportation have drastically reduced barriers to entry and expanded market reach. Additionally, international business plays a crucial role in sustainable development, as companies increasingly prioritize ethical practices, environmental responsibility, and social impact. Collaboration between governments, businesses, and international organizations is essential to address issues like climate change, labor rights, and economic inequality. In essence, international business is a dynamic force driving economic growth, fostering global cooperation, and shaping the future of commerce on a worldwide scale.
To conduct business overseas, multinational companies need to bridge separate national markets into one global marketplace. There are two macro-scale factors that underline the trend of greater globalization. The first consists of eliminating barriers to make cross-border trade easier (e.g. free flow of goods and services, and capital, referred to as "free trade"). The second is technological change, particularly developments in communication, information processing, and transportation technologies.
Overview
The discourse surrounding international business has undergone a transition in terminology over the years, reflecting shifts in understanding and the expanding scope of cross-border commerce. Initially, phrases such as "foreign trade" and "foreign exchange" were prevalent, embodying a static view of cross-border interactions. However, the term "foreign" often evoked notions of remoteness or strangeness, failing to capture the dynamic essence of international engagements.
As commerce evolved with the advent of firms engaging in substantial direct investments across borders, newer terms emerged to encapsulate the changing landscape. The mid-19th century marked the rise of companies owning and controlling production facilities in various countries, a departure from the earlier norm where firms held minor or passive ("portfolio") investments abroad. This paradigm shift necessitated a fresh nomenclature, leading to the introduction of the term "multinational enterprise" (MNE), referring to entities with substantial operations in multiple nations.
"International business" is also defined as the study of the internationalization process of multinational enterprises. A multinational enterprise (MNE) is a company that has a worldwide approach to markets, production and/or operations in several countries. Well-known MNEs include fast-food companies such as: McDonald's (MCD), YUM (YUM), Starbucks Coffee Company (SBUX), etc. Other industrial MNEs leaders include vehicle manufacturers such as: Ford Motor Company, and General Motors (GMC). Some consumer electronics producers such as Samsung, LG and Sony, and energy companies such as Exxon Mobil, and British Petroleum (BP) are also multinational enterprises.
Multinational enterprises range from any kind of business activity or market, from consumer goods to machinery manufacture; a company can become an international business. Therefore, to conduct business overseas, companies should be aware of all the factors that might affect any business activities, including, but not limited to: difference in legal systems, political systems, economic policy, language, accounting standards, labor standards, living standards, environmental standards, local cultures, corporate cultures, foreign-exchange markets, tariffs, import and export regulations, trade agreements, climate, and education. Each of these factors may require changes in how companies operate from one country to another. Each factor makes a difference and a connection.
One of the first scholars to engage in developing a theory of multinational companies was Canadian economist Stephen Hymer. Throughout his academic life, he developed theories that sought to explain foreign direct investment (FDI) and why firms become multinational.
There were three phases of internationalization according to Hymer's work. In this thesis, the author departs from neoclassical theory and opens up a new area of international production. At first, Hymer started analyzing neoclassical theory and financial investment, where the main reason for capital movement is the difference in interest rates. After this analysis, Hymer analyzed the characteristics of foreign investment by large companies for production and direct business purposes, calling this Foreign Direct Investment (FDI). By analyzing the two types of investments, Hymer distinguished financial investment from direct investment. The main distinguishing feature was control. Portfolio investment is a more passive approach, and the main purpose is financial gain, whereas in foreign direct investment a firm has control over the operations abroad. So, the traditional theory of investment based on differential interest rates does not explain the motivations for FDI.
According to Hymer, there are two main determinants of FDI, of which an imperfect market structure is the key element. The first is firm-specific advantages, which are developed in the company's home country and profitably used in the foreign country. The second determinant is the removal of control, about which Hymer wrote: "When firms are interconnected, they compete in selling in the same market or one of the firms may sell to the other," and because of this "it may be profitable to substitute centralized decision-making for decentralized decision-making".
Hymer's second phase is his neoclassical article of 1968, which includes a theory of internationalization and explains the direction of growth of the international expansion of firms. In a later stage, Hymer moved to a more Marxist approach, in which he describes MNCs as agents of an international capitalist system that cause conflict and contradictions, including inequality and poverty in the world. Hymer is considered the "father of the theory of MNEs", having explained the motivations for companies doing direct business abroad.
Among modern economic theories of multinationals and foreign direct investment are internalization theory and John Dunning's OLI paradigm (standing for ownership, location and internationalization). Dunning was widely known for his research in economics of international direct investment and the multinational enterprise. His OLI paradigm, in particular, remains as the predominant theoretical contribution to study international business topics. Hymer and Dunning are considered founders of international business as a specialist field of study.
Physical and social factors of competitive business and social environment
The conduct of international operations depends on a company's objectives and the means with which they carry them out. The operations affect and are affected by the physical and societal factors and the competitive environment.
Operations
All firms that want to go international have one goal in common: the desire to increase their respective economic values when engaging in international trade transactions. To accomplish this goal, each firm must develop its individual strategy and approach to maximize value, lower costs, and increase profits. A firm's value creation is the difference between the value of the product being sold and the cost of producing each product sold.
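A back-of-the-envelope illustration of this definition of value creation follows; all market names and figures below are invented for the example.

```python
# Hypothetical figures, purely for illustration.
markets = {
    "home market":    {"value_to_customer": 120.0, "cost_per_unit": 80.0, "units": 10_000},
    "foreign market": {"value_to_customer": 110.0, "cost_per_unit": 65.0, "units": 6_000},
}

for name, m in markets.items():
    per_unit = m["value_to_customer"] - m["cost_per_unit"]   # value created per unit sold
    total = per_unit * m["units"]
    print(f"{name}: value created per unit = {per_unit:.2f}, total = {total:,.0f}")
# In this made-up example the foreign market creates more value per unit because
# lower production costs widen the gap between what customers value and what
# production costs, even though fewer units are sold there.
```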
Value creation can be categorized as: primary activities (research and development, production, marketing and sales, customer service) and support activities (information systems, logistics, human resources). All of these activities must be managed effectively and be consistent with the firm's strategy. However, the success of firms that extend internationally depends on the goods or services sold and on the firm's core competencies (skills within the firm that competitors cannot easily match or imitate). For a firm to be successful, the firm's strategy must be consistent with the environment in which the firm operates. Therefore, the firm needs to change its organizational structure to reflect changes in the setting in which it is operating and the strategy it is pursuing.
Once a firm decides to enter a foreign market, it must decide on a mode of entry. There are six different modes to enter a foreign market, and each mode has pros and cons that are associated with it. The firm must decide which mode is most appropriately aligned with the company's goals and objectives. The six different modes of entry are exporting, turnkey projects, licensing, franchising, establishing joint ventures with a host-country firm, or setting up a new wholly owned subsidiary in the host country.
The first entry mode is exporting. Exporting is the sale of a product in a different national market than a centralized hub of manufacturing. In this way, a firm may realize substantial economies of scale from its global sales volume. As an example, many Japanese automakers made inroads into the U.S. market through exporting. There are two primary advantages to exporting: avoiding the often high costs of establishing manufacturing in a host country, and gaining experience-curve economies. Some possible disadvantages of exporting are high transport costs and high tariff barriers.
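The experience-curve effect mentioned above is commonly modelled as unit cost falling by a fixed fraction each time cumulative output doubles. The sketch below applies that textbook relationship with assumed numbers (a first-unit cost of 100 and an 80% curve, i.e. a 20% cost reduction per doubling); it illustrates the concept only and does not describe any real exporter.

```python
import math

first_unit_cost = 100.0         # assumed cost of the first unit
learning_rate = 0.80            # "80% experience curve": cost * 0.8 per doubling of output
b = math.log(learning_rate, 2)  # exponent, about -0.32

def unit_cost(cumulative_units):
    """Cost of the n-th unit under the classic experience-curve model."""
    return first_unit_cost * cumulative_units ** b

for n in [1, 2, 4, 8, 16, 32]:
    print(f"cumulative units = {n:>2}, unit cost ~ {unit_cost(n):6.2f}")
# Each doubling of cumulative output (1 -> 2 -> 4 ...) cuts unit cost by 20%,
# which is why serving global demand from one plant can lower costs faster
# than serving the home market alone.
```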
The second entry mode is a turnkey project. In a turnkey project, an independent contractor is hired by the company to oversee all of the preparation for entering a foreign market. Once the preparation is complete and the end of the contract is reached, the plant is turned over to the company fully ready for operation.
Licensing and franchising are two additional entry modes that are similar in operation. Licensing allows a licensor to grant the rights to an intangible property to the licensee for a specified period of time for a royalty fee. Franchising, on the other hand, is a specialized form of licensing in which the "franchisor" sells the intangible property to the franchisee, and also requires the franchisee operate as dictated by the franchisor.
Lastly, a joint venture and a wholly owned subsidiary are two more entry modes in international business. A joint venture is a firm that is created and jointly owned by two or more companies (most joint ventures are 50–50 partnerships). This is in contrast with a wholly owned subsidiary, in which a firm owns 100 percent of the stock of a company in a foreign country, either because it has set up a new operation there or because it has acquired an established firm in that country.
Types of operations
Exports and import
Merchandise exports: goods exported, not including services.
Merchandise imports: the physical goods or products that are imported into the respective country. Countries import products or goods that their country lacks. An example of this is that Colombia must import cars, since there is no Colombian car company.
Service exports: the fastest growing export sector. Many companies create a product that requires installation, repairs, and troubleshooting. A service export is simply a resident of one country providing a service in another country, such as a cloud software platform used by people or companies outside the home country.
Other service exports include "tourism and transportation, service performance, asset use".
Exports and Imports of products, goods or services are usually a country's most important international economic transactions.
Top imports and exports in the world
Data is from the CIA World Factbook, compiled in 2017:
Choice of entry mode in international business
Strategic variables affect the choice of entry mode for multinational corporation expansion beyond their domestic markets. These variables are global concentration, global synergies, and global strategic motivations of MNC.
Global concentration: many MNEs share and overlap markets with a limited number of other corporations in the same industry.
Global synergies: the reuse or sharing of resources by a corporation, which may include marketing departments or other inputs that can be used in multiple markets. This includes, among other things, brand name recognition.
Global strategic motivations: other factors beyond entry mode that are the basic reasons for corporate expansion into an additional market. These are strategic reasons that may include establishing a foreign outpost for expansion, developing sourcing sites among other strategic reasons.
Means of businesses
International Business Media
International business media encompasses a diverse range of channels that facilitate the dissemination of information and communication among businesses operating across borders. These channels play a crucial role in keeping stakeholders informed about global market trends, emerging opportunities, and potential risks. Here are some of the key types of international business media:
Industry-Specific Publications: Specialized magazines, journals, and newsletters that focus on particular industries or sectors, providing in-depth analysis, expert commentary, and industry news.
Financial News Outlets: Global media organizations that report on financial markets, economic developments, and business performance, offering insights into investment opportunities and economic trends.
Business Television Networks: Broadcast and online channels dedicated to business news, featuring interviews with CEOs, market analysis, and reports on global business events.
Online Business Resources: Websites, blogs, and social media platforms that provide news, analysis, and commentary on international business, often catering to specific regions or industries.
In addition to traditional media, there are also a number of social media channels that focus on international business. These channels can be a good way to stay up-to-date on the latest news and developments, and they can also be a valuable platform for connecting with other businesses and professionals.
Physical and social factors
Geographical influences: There are many different geographic factors that affect international business. These factors are: the geographical size, the climatic challenges happening throughout the world, the natural resources available on a specific territory, the population distribution in a country, etc.
Social factors: Political policies: political disputes, particularly those that result in military confrontation, can disrupt trade and investment.
Legal policies: domestic and international laws play a big role in determining how a company can operate overseas.
Behavioural factors: in a foreign environment, the related disciplines such as anthropology, psychology, and sociology are helpful for managers to get a better understanding of values, attitudes, and beliefs.
Economic forces: economics explains country differences in costs, currency values, and market size.
Risks
Faulty Planning
To achieve success in penetrating a foreign market and remaining profitable, efforts must be directed towards the planning and execution of Phase I. The use of conventional SWOT analysis, market research, and cultural research will give a firm appropriate tools to reduce the risk of failure abroad. Risks that arise from poor planning include: large expenses in marketing, administration and product development (with no sales); disadvantages derived from local or federal laws of a foreign country; lack of popularity because of a saturated market; and vandalism of physical property due to instability of the country. There are also cultural risks when entering a foreign market. Lack of research and understanding of local customs can lead to alienation of locals and brand dissociation. Strategic risks can be defined as the uncertainties and untapped opportunities embedded in a firm's strategic intent and how well they are executed. As such, they are key matters for the board and impinge on the whole business, rather than just an isolated unit.
Operational risk
A company has to be conscious of its production costs so as not to waste time and money. If expenditures and costs are controlled, production will be efficient and internationalization will be easier. Operational risk is the prospect of loss resulting from inadequate or failed procedures, systems or policies; employee errors, systems failure, fraud or other criminal activity, or any event that disrupts business processes.
Political risk
How a government governs a country (governance) can affect the operations of a firm. The government might be corrupt, hostile, or totalitarian; and may have a negative image around the globe. A firm's reputation can change if it operates in a country controlled by that type of government. Also, an unstable political situation can be a risk for multinational firms. Elections or any unexpected political event can change a country's situation and put a firm in an awkward position. Political risks are the likelihood that political forces will cause drastic changes in a country's business environment that hurt the profit and other goals of a business enterprise. Political risk tends to be greater in countries experiencing social unrest. When political risk is high, there is a high probability that a change will occur in the country's political environment that will endanger foreign firms there. Corrupt foreign governments may also take over the company without warning, as seen in Venezuela.
Technological risk
Technological improvements bring many benefits, but some disadvantages as well. Some of these risks include "lack of security in electronic transactions, the cost of developing new technology ... the fact that this new technology may fail, and, when all of these are coupled with the outdated existing technology, [the fact that] the result may create a dangerous effect in doing business in the international arena."
Environmental risk
Companies that establish a subsidiary or factory abroad need to be conscious of the externalities they will produce, as some may have negative effects such as noise or pollution. This may cause aggravation to the people living there, which in turn can lead to conflict. People want to live in a clean and quiet environment, without pollution or unnecessary noise. If a conflict arises, this may lead to a negative change in customers' perception of the company. Actual or potential threat of adverse effects on living organisms and the environment by effluents, emissions, wastes, resource depletion, etc., arising out of an organization's activities is considered to be environmental risk. As new business leaders come to fruition in their careers, it will be increasingly important to curb business activities and externalities that may harm the environment.
Economic risk
These are the economic risks explained by Professor Okolo: "This comes from the inability of a country to meet its financial obligations. The changing of foreign-investment or/and domestic fiscal or monetary policies. The effect of exchange-rate and interest rate make it difficult to conduct international business." Moreover, it can be a risk for a company to operate in a country that experiences an unexpected economic crisis after the subsidiary is established. Economic risk is the likelihood that economic management will cause drastic changes in a country's business environment that hurt the profit and other goals of a business enterprise. In practice, the biggest problem arising from economic mismanagement has been inflation. Historically, many governments have expanded their domestic money supply in misguided attempts to stimulate economic activity.
Financial risk
According to Professor Okolo: "This area is affected by the currency exchange rate, government flexibility in allowing the firms to repatriate profits or funds outside the country. The devaluation and inflation will also affect the firm's ability to operate at an efficient capacity and still be stable." Furthermore, the taxes that a company has to pay might be advantageous or not. It might be higher or lower in the host countries. Then "the risk that a government will indiscriminately change the laws, regulations, or contracts governing an investment—or will fail to enforce them—in a way that reduces an investor's financial returns is what we call 'policy risk.'" Exchange rates can fluctuate rapidly for a variety of reasons, including economic instability and diplomatic issues.
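A minimal numerical sketch of the exchange-rate exposure described above: a subsidiary earns a profit in local currency and the parent repatriates it at whatever rate then prevails. The currency amounts and rates below are invented for illustration.

```python
# Invented figures: a subsidiary earns 1,000,000 units of a local currency.
local_profit = 1_000_000.0

# Exchange rates expressed as home-currency units per local-currency unit.
rate_at_budgeting = 0.050       # rate assumed when the investment was planned
rate_after_devaluation = 0.035  # rate after the local currency loses value

planned = local_profit * rate_at_budgeting
realized = local_profit * rate_after_devaluation
shortfall = planned - realized

print(f"planned repatriation:  {planned:,.0f}")
print(f"realized repatriation: {realized:,.0f}")
print(f"shortfall from devaluation: {shortfall:,.0f} "
      f"({shortfall / planned:.0%} of the planned amount)")
# The same local-currency profit is worth 30% less to the parent in this
# example, which is one concrete form of the financial risk described above.
```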
Terrorism
Terrorism is a voluntary act of violence towards a group or groups of people. In most cases, acts of terrorism are derived from hatred of religious, political and cultural beliefs. An example was the infamous 9/11 attacks, labeled as terrorism because of the massive damage inflicted on American society and the global economy, stemming from the animosity towards Western culture of some radical Islamic groups. Terrorism not only affects civilians, but it also damages corporations and other businesses. These effects may include: physical vandalism or destruction of property, sales declining due to frightened consumers, and governments issuing public safety restrictions. Firms engaging in international business will find it difficult to operate in a country that has an uncertain assurance of safety from these attacks.
Bribery
Bribery is the act of receiving or soliciting any items or services of value to influence the actions of a party with public or legal obligations. This is considered to be an unethical form of practicing business and can have legal repercussions. Firms that want to operate legally should instruct employees not to involve themselves or the company in such activities. Companies should avoid doing business in countries where unstable forms of government exist, as it could bring unfair advantages against domestic business and/or harm the social fabric of the citizens.
Factors towards globalization
There has been growth in globalization in recent decades due to the following factors.
Technology is expanding, especially in transportation and communications.
Governments are removing international business restrictions.
Institutions provide services to ease the conduct of international business.
Consumers want to know about foreign goods and services.
Competition has become more global.
Political relationships have improved among some major economic powers.
Countries cooperate more on transnational issues.
Cross-national cooperation and agreements have increased.
Importance of international business education
Most companies are either international companies or compete with other international companies.
Modes of operation may differ from those used domestically.
The best way of conducting business may differ by country.
An understanding helps one make better career decisions.
An understanding helps one decide what governmental policies to support.
Managers in international business must understand social science disciplines and how they affect different functional business fields.
To maintain and achieve successful business operations in foreign nations, persons must understand how variations in culture and traditions across nations affect business practices. This idea is known as cultural literacy. Without knowledge of a host country's culture, corporate strategizing is more difficult and error-prone when entering foreign markets compared with the home country's market and culture. This can create a "blind spot" during the decision making process and result in ethnocentrism. Education about international business introduces the student to new concepts that can be applicable in international strategy in topics such as marketing and operations.
Importance of language and cultural studies
A considerable advantage in international business is gained through the knowledge and use of language, thereby mitigating a language barrier. A study by Lohmann (2011) in Economics Letters delved into the impact of language barriers on trade. The findings suggest that fluency in the local language can significantly enhance trade interactions. Advantages of being an international businessperson who is fluent in the local language include the following:
Having the ability to directly communicate with employees and customers
Understanding the manner of speaking within business in the local area to improve overall productivity
Gaining respect of customers and employees from speaking with them in their native tongue
In many cases, cultural knowledge plays a crucial role. It is truly impossible to gain an understanding of a culture's buying habits without first taking the time to understand the culture. Examples of the benefit of understanding local culture include the following:
Being able to provide marketing techniques that are specifically tailored to the local market
Knowing how other businesses operate and what might or might not be social taboos
Understanding the time structure of an area. Some societies are more focused on timeliness ("being on time") while others focus on doing business at "the right time".
Associating with people who do not know several languages.
Language barriers can affect transaction costs. Linguistic distance is defined as the amount of variation one language has from another. For example, French and Spanish are both languages derived from Latin. When evaluating dialogue in these languages, you will discover many similarities. However, languages such as English and Chinese or English and Arabic vary much more and contain far fewer similarities. The writing systems of these languages are also different. The larger the linguistic distance, the wider the language barriers to cross. These differences can reflect on transaction costs and make foreign business operations more expensive.
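Linguistic distance is measured in various ways in the literature; purely as an illustrative proxy, the sketch below compares a few translated words by edit (Levenshtein) distance to show why a Romance-language pair looks "closer" than an English–Mandarin pair. This is a toy demonstration, not the methodology used in trade studies such as Lohmann (2011).

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Translations of "night" (romanized where needed), chosen only for illustration.
pairs = [
    ("French vs Spanish", "nuit", "noche"),
    ("English vs French", "night", "nuit"),
    ("English vs Mandarin (pinyin)", "night", "wanshang"),
]
for label, w1, w2 in pairs:
    print(f"{label}: edit distance {levenshtein(w1, w2)}")
# Closely related languages tend to need fewer edits between translated words,
# a crude stand-in for the "linguistic distance" idea discussed above.
```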
Importance of studying international business
The international business standards focus on the following:
raising awareness of the inter-relatedness of one country's political policies and economic practices on another;
learning to improve international business relations through appropriate communication strategies;
understanding the global business environment—that is, the interconnections of cultural, political, legal, economic, and ethical systems;
exploring basic concepts underlying international finance, management, marketing, and trade relations; and
identifying forms of business ownership and international business opportunities.
By focusing on these, students will gain a better understanding of political economy. These are tools that would help future business people bridge the economic and political gap between countries.
There is an increasing amount of demand for business people with an education in international business. A survey conducted by Thomas Patrick from the University of Notre Dame concluded that bachelor's degree and master's degree holders felt that the training received through education was very practical in the working environment. Increasingly, companies are sourcing their human resource requirements globally. For example, at Sony Corporation, only fifty percent of its employees are Japanese. Business people with an education in international business also had a significantly higher chance of being sent abroad to work under the international operations of a firm.
The following table provides descriptions of higher education in international business and its benefits.
References
Sources
Further reading
Daniels, J., Radebaugh, L., Sullivan, D. (2018). International Business: environment and operations, 16th edition. Prentice Hall.
Daniels, John D., Lee H. Radebaugh, and Daniel P. Sullivan. Globalization and business. Prentice Hall, 2002.
External links
The International Trade Centre ITC is the joint agency of the World Trade Organization and the United Nations
The U.S. Government's export promotion and finance portal A government resource for U.S. exporters
UK Trade & Investment - a government resource for UK exporters | 0.76654 | 0.996518 | 0.763871 |
Hypothetical types of biochemistry | Hypothetical types of biochemistry are forms of biochemistry agreed to be scientifically viable but not proven to exist at this time. The kinds of living organisms currently known on Earth all use carbon compounds for basic structural and metabolic functions, water as a solvent, and DNA or RNA to define and control their form. If life exists on other planets or moons it may be chemically similar, though it is also possible that there are organisms with quite different chemistries, for instance involving other classes of carbon compounds, compounds of another element, or another solvent in place of water.
The possibility of life-forms being based on "alternative" biochemistries is the topic of an ongoing scientific discussion, informed by what is known about extraterrestrial environments and about the chemical behaviour of various elements and compounds. It is of interest in synthetic biology and is also a common subject in science fiction.
The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule, and cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan.
Overview of hypothetical types of biochemistry
Shadow biosphere
A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms.
Alternative-chirality biomolecules
Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the left-handed (L) form and sugars are of the right-handed (D) form. Molecules using D amino acids or L sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life.
It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel.
Non-carbon-based biochemistries
On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using elements other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe. Sagan used the term "carbon chauvinism" for such an assumption. He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos. Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land a probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt.
Silicon biochemistry
The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical similarities to carbon and is in the same group of the periodic table. Like carbon, silicon can create molecules that are sufficiently large to carry biological information.
However, silicon has several drawbacks as a carbon alternative. Carbon is ten times more cosmically abundant than silicon, and its chemistry appears naturally more complex. By 1998, astronomers had identified 84 carbon-containing molecules in the interstellar medium, but only 8 containing silicon, of which half also included carbon. Even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (silicon is roughly 925 times more abundant in Earth's crust than carbon), terrestrial life bases itself on carbon. It may eschew silicon because silicon compounds are less varied, unstable in the presence of water, or block the flow of heat.
Relative to carbon, silicon has a much larger atomic radius, and forms much weaker covalent bonds to atoms — except oxygen and fluorine, with which it forms very strong bonds. Almost no multiple bonds to silicon are stable, although silicon does exhibit varied coordination number. Silanes, silicon analogues to the alkanes, react rapidly with water, and long-chain silanes spontaneously decompose. Consequently, most terrestrial silicon is "locked up" in silica, and not a wide variety of biogenic precursors.
Silicones, which alternate between silicon and oxygen atoms, are much more stable than silanes, and may even be more stable than the equivalent hydrocarbons in sulfuric acid-rich extraterrestrial environments. Alternatively, the weak bonds in silicon compounds may help maintain a rapid pace of life at cryogenic temperatures. Polysilanols, the silicon homologues to sugars, are among the few compounds soluble in liquid nitrogen.
All known silicon macromolecules are artificial polymers, and so "monotonous compared with the combinatorial universe of organic macromolecules". Even so, some Earth life uses biogenic silica: diatoms' silicate skeletons. A. G. Cairns-Smith hypothesized that silicate minerals in water played a crucial role in abiogenesis, in that biogenic carbon compounds formed around their crystal structures. Although not observed in nature, carbon–silicon bonds have been added to biochemistry under directed evolution (artificial selection): a cytochrome c protein from Rhodothermus marinus has been engineered to catalyze new carbon–silicon bonds between hydrosilanes and diazo compounds.
Other exotic element-based biochemistries
Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon.
Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size.
Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.)
Arsenic as an alternative to phosphorus
Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy.
It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function.
The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case".
Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms.
Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate.
Non-water solvents
In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid.
Water as a solvent limits the forms biochemistry can take. For example, Steven Benner proposes the polyelectrolyte theory of the gene, which claims that for a genetic biopolymer such as DNA to function in water, it requires repeated ionic charges. If water is not required for life, these limits on genetic biopolymers are removed.
Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist".
He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water.
Some of the properties of water that are important for life processes include:
A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life; many other solvents have dramatically fewer possible reactions, which severely limits evolution.
Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen.
Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life.
A large temperature range over which it is liquid.
High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life.
A high heat capacity (leading to higher environmental temperature stability).
Water is a room-temperature liquid, leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition-state populations, which are needed for life based on chemical reactions. This leads to chemical reaction rates which may be so slow as to preclude the development of any life based on chemical reactions (see the sketch after this list).
Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life.
A large heat of vaporization leading to stable lakes and oceans.
The ability to dissolve a wide variety of compounds.
The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life.
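To see the scale of the temperature effect referenced in the list above, the following sketch compares the Arrhenius-style Boltzmann factor exp(−Ea/RT) at roughly Earth-surface and Titan-surface temperatures; the 50 kJ/mol activation energy is an assumed, merely representative value, not a measured one.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
Ea = 50_000.0   # assumed activation energy, 50 kJ/mol (a representative value)

def boltzmann_factor(temp_kelvin):
    """Factor exp(-Ea/RT) that scales Arrhenius reaction rates."""
    return math.exp(-Ea / (R * temp_kelvin))

t_earth = 298.0  # K, roughly Earth surface temperature
t_titan = 94.0   # K, roughly Titan surface temperature

ratio = boltzmann_factor(t_earth) / boltzmann_factor(t_titan)
print(f"factor at 298 K: {boltzmann_factor(t_earth):.3e}")
print(f"factor at  94 K: {boltzmann_factor(t_titan):.3e}")
print(f"rates at Earth temperature are roughly {ratio:.1e} times faster")
# For a 50 kJ/mol barrier the rate factor drops by many orders of magnitude
# at cryogenic temperatures, which is the point made in the list above.
```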
Water as a compound is cosmically abundant, although much of it is in the form of vapor or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces.
Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased.
There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with pressure, the question tends to be not whether the prospective solvent remains liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere with the surface pressure of Venus (roughly 90 times that of Earth), it can indeed exist in liquid form over a wide temperature range.
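One rough way to see how pressure widens a solvent's liquid range is the Clausius–Clapeyron relation, which estimates how the boiling point shifts with pressure. The sketch below applies it to hydrogen cyanide using approximate handbook values for its normal boiling point and enthalpy of vaporization, and it treats the enthalpy as constant, so the numbers are only indicative.

```python
import math

R = 8.314            # gas constant, J/(mol*K)
t_boil_1atm = 298.8  # K, approximate normal boiling point of HCN (~25.6 C)
dH_vap = 25_200.0    # J/mol, approximate enthalpy of vaporization of HCN

def boiling_point_at(pressure_atm):
    """Clausius-Clapeyron estimate of the boiling point at a given pressure."""
    inv_t = 1.0 / t_boil_1atm - R * math.log(pressure_atm) / dH_vap
    return 1.0 / inv_t

for p in [1, 10, 92]:  # 92 atm is roughly the surface pressure of Venus
    t = boiling_point_at(p)
    print(f"{p:>3} atm: estimated boiling point ~ {t:6.1f} K ({t - 273.15:6.1f} C)")
# At Venus-like pressure the estimated boiling point rises to several hundred
# kelvin, so HCN would stay liquid over a far wider temperature span than at 1 atm.
```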
Ammonia
The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin.
Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH).
Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as an Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead.
However, ammonia has some problems as a basis for life. The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism.
A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists between the melting point and boiling point of water, at a pressure designated as normal pressure (1 atm), between 0 °C and 100 °C. When also held to normal pressure, ammonia's melting and boiling points are −78 °C and −33 °C respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful.
A set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at −77 °C and boils at about 98 °C.
Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan.
Methane and other hydrocarbons
Methane (CH4) is a simple hydrocarbon: that is, a compound of two of the most common elements in the cosmos: hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft.
There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry.
Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane. Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University: a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward diffusion at a rate of roughly 10^25 molecules per second and disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings should currently be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery.
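As a sketch of the net chemistry McKay described (the balanced equations below are simple stoichiometry; the free-energy estimates from the original papers are not reproduced here), the proposed hydrogenation reactions are:
\mathrm{C_2H_2} + 3\,\mathrm{H_2} \rightarrow 2\,\mathrm{CH_4}
\mathrm{C_2H_6} + \mathrm{H_2} \rightarrow 2\,\mathrm{CH_4}
Acetylene hydrogenation in particular releases substantial energy even at Titan's roughly 95 K surface temperature, which is why it was singled out as a plausible metabolic energy source for hypothetical methanogenic organisms.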
Azotosome
A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions, was computer-modeled in an article published in February 2015. Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter / submillimeter Array (ALMA), completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. Later studies questioned whether acrylonitrile would be able to self-assemble into azotosomes.
Hydrogen fluoride
Hydrogen fluoride (HF), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. At atmospheric pressure, its melting point is −83.6 °C, and its boiling point is 19.5 °C; the difference between the two is a little more than 100 K. HF also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. It has been considered as a possible solvent for life by scientists such as Peter Sneath and Carl Sagan.
HF is dangerous to the systems of molecules that Earth-life is made of, but certain other organic compounds, such as paraffin waxes, are stable with it. Like water and ammonia, liquid hydrogen fluoride supports an acid–base chemistry. Using a solvent system definition of acidity and basicity, nitric acid functions as a base when it is added to liquid HF.
However, hydrogen fluoride is cosmically rare, unlike water, ammonia, and methane.
Hydrogen sulfide
Hydrogen sulfide is the closest chemical analog to water, but is less polar and is a weaker inorganic solvent. Hydrogen sulfide is quite plentiful on Jupiter's moon Io and may be in liquid form a short distance below the surface; astrobiologist Dirk Schulze-Makuch has suggested it as a possible solvent for life there. On a planet with hydrogen sulfide oceans, the source of the hydrogen sulfide could come from volcanoes, in which case it could be mixed in with a bit of hydrogen fluoride, which could help dissolve minerals. Hydrogen sulfide life might use a mixture of carbon monoxide and carbon dioxide as its carbon source, and might produce and live on sulfur monoxide, which is analogous to oxygen (O2). Hydrogen sulfide, like hydrogen cyanide and ammonia, suffers from the small temperature range where it is liquid, though that range, like those of hydrogen cyanide and ammonia, increases with increasing pressure.
Silicon dioxide and silicates
Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range where it is liquid. However, its melting point is about 1,700 °C, so it would be impossible to make organic compounds at that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium.
Other solvents or cosolvents
Other solvents sometimes proposed:
Supercritical fluids: supercritical carbon dioxide and supercritical hydrogen.
Simple hydrogen compounds: hydrogen chloride.
More complex compounds: sulfuric acid, formamide, methanol.
Very-low-temperature fluids: liquid nitrogen and hydrogen.
High-temperature liquids: sodium chloride.
Sulfuric acid in liquid form is strongly polar. It remains liquid at higher temperatures than water, its liquid range being 10 °C to 337 °C at a pressure of 1 atm, although above 300 °C it slowly decomposes. Sulfuric acid is known to be abundant in the clouds of Venus, in the form of aerosol droplets. In a biochemistry that used sulfuric acid as a solvent, the alkene group (C=C), with two carbon atoms joined by a double bond, could function analogously to the carbonyl group (C=O) in water-based biochemistry.
A proposal has been made that life on Mars may exist and be using a mixture of water and hydrogen peroxide as its solvent.
A 61.2% (by mass) mix of water and hydrogen peroxide has a freezing point of −56.5 °C and tends to super-cool rather than crystallize. It is also hygroscopic, an advantage in a water-scarce environment.
Supercritical carbon dioxide has been proposed as a candidate for alternative biochemistry due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes and because "super-Earth"- or "super-Venus"-type planets with dense high-pressure atmospheres may be common.
Other speculations
Non-green photosynthesizers
Physicists have noted that, although photosynthesis on Earth generally involves green plants, a variety of other-colored plants could also support photosynthesis, essential for most life on Earth, and that other colors might be preferred in places that receive a different mix of stellar radiation than Earth.
These studies indicate that blue plants would be unlikely; however, yellow or red plants may be relatively common.
Variable environments
Many Earth plants and animals undergo major biochemical changes during their life cycles as a response to changing environmental conditions, for example, by having a spore or hibernation state that can be sustained for years or even millennia between more active life stages. Thus, it would be biochemically possible to sustain life in environments that are only periodically consistent with life as we know it.
For example, frogs in cold climates can survive for extended periods of time with most of their body water in a frozen state, whereas desert frogs in Australia can become inactive and dehydrate in dry periods, losing up to 75% of their fluids, yet return to life by rapidly rehydrating in wet periods. Either type of frog would appear biochemically inactive (i.e. not living) during dormant periods to anyone lacking a sensitive means of detecting low levels of metabolism.
Alanine world and hypothetical alternatives
The genetic code may have evolved during the transition from the RNA world to a protein world. The Alanine World Hypothesis postulates that the evolution of the genetic code (the so-called GC phase) started with only four basic amino acids: alanine, glycine, proline and ornithine (now arginine). The evolution of the genetic code ended with 20 proteinogenic amino acids. From a chemical point of view, most of them are alanine derivatives, particularly suitable for the construction of α-helices and β-sheets, the basic secondary structural elements of modern proteins. Direct evidence of this is an experimental procedure in molecular biology known as alanine scanning.
A hypothetical "Proline World" would create a possible alternative life with the genetic code based on the proline chemical scaffold as the protein backbone. Similarly, a "Glycine World" and "Ornithine World" are also conceivable, but nature has chosen none of them. Evolution of life with Proline, Glycine, or Ornithine as the basic structure for protein-like polymers (foldamers) would lead to parallel biological worlds. They would have morphologically radically different body plans and genetics from the living organisms of the known biosphere.
Nonplanetary life
Dusty plasma-based
In 2007, Vadim N. Tsytovich and colleagues proposed that lifelike behaviors could be exhibited by dust particles suspended in a plasma, under conditions that might exist in space. Computer models showed that, when the dust became charged, the particles could self-organize into microscopic helical structures, and the authors offer "a rough sketch of a possible model of...helical grain structure reproduction".
Cosmic necklace-based
In 2020, Luis A. Anchordoqui and Eugene M. Chudnovsky of the City University of New York hypothesized that cosmic necklace-based life composed of magnetic monopoles connected by cosmic strings could evolve inside stars. This would be achieved by a stretching of cosmic strings due to the star's intense gravity, allowing the structure to take on more complex forms and potentially form structures similar to the RNA and DNA structures found within carbon-based life. As such, it is theoretically possible that such beings could eventually become intelligent and construct a civilization using the power generated by the star's nuclear fusion. Because such activity would consume part of the star's energy output, the star's luminosity would also fall. For this reason, it is thought that such life might exist inside stars observed to be cooling faster or appearing dimmer than current cosmological models predict.
Life on a neutron star
Frank Drake suggested in 1973 that intelligent life could inhabit neutron stars. Physical models in 1973 implied that Drake's creatures would be microscopic.
Scientists who have published on this topic
Scientists who have considered possible alternatives to carbon-water biochemistry include:
J. B. S. Haldane (1892–1964), a geneticist noted for his work on abiogenesis.
V. Axel Firsoff (1910–1981), British astronomer.
Isaac Asimov (1920–1992), biochemist and science fiction writer.
Fred Hoyle (1915–2001), astronomer and science fiction writer.
Norman Horowitz (1915–2005), Caltech geneticist who devised the first experiments carried out to detect life on Mars.
George C. Pimentel (1922–1989), American chemist, University of California, Berkeley.
Peter Sneath (1923–2011), microbiologist, author of the book Planets and Life.
Gerald Feinberg (1933–1992), physicist and Robert Shapiro (1935–2011), chemist, co-authors of the book Life Beyond Earth.
Carl Sagan (1934–1996), astronomer, science popularizer, and SETI proponent.
Jonathan Lunine (born 1959), American planetary scientist and physicist.
Robert Freitas (born 1952), specialist in nano-technology and nano-medicine.
John Baross (born 1940), oceanographer and astrobiologist, who chaired a committee of scientists under the United States National Research Council that published a report on life's limiting conditions in 2007.
See also
Abiogenesis
Astrobiology
Carbon chauvinism
Carbon-based life
Earliest known life forms
Extraterrestrial life
Hachimoji DNA
Iron–sulfur world hypothesis
Life origination beyond planets
Nexus for Exoplanet System Science
Non-cellular life
Non-proteinogenic amino acids
Nucleic acid analogues
Planetary habitability
Shadow biosphere
References
Further reading
External links
Astronomy FAQ
Ammonia-based life
Silicon-based life
Astrobiology
Science fiction themes
Biological hypotheses
Scientific speculation
Hypergamy
Hypergamy (colloquially referred to as "dating up" or "marrying up") is a term used in social science for the act or practice of a person dating or marrying a spouse of higher social status or sexual capital than themselves, and continually attempting to replace their current partner with someone they deem superior.
The antonym "hypogamy" refers to the inverse: marrying a person of lower social class or status (colloquially "marrying down"). Both terms were invented in the Indian subcontinent in the 19th century while translating classical Hindu law books, which used the Sanskrit terms anuloma and pratiloma, respectively, for the two concepts.
The term hypergyny is used to describe the overall practice of women marrying up, since the men would be marrying down.
Research
A Russian study found that among Deaf-hearing married couples, women are almost three times more likely to be the Deaf individual.
One study found that women are more selective in their choice of marriage partners than are men.
A study done by the University of Minnesota in 2017 found that females generally prefer dominant males as mates. Research conducted throughout the world strongly supports the position that women prefer marriage with partners who are culturally successful or have high potential to become culturally successful. The most extensive of these studies included 10,000 people in 37 cultures across six continents and five islands. Women rated "good financial prospect" higher than men did in all cultures. In 29 samples, the "ambition and industriousness" of a prospective mate were more important for women than for men. Meta-analysis of research published from 1965 to 1986 revealed the same sex difference (Feingold, 1992). Across studies, 3 out of 4 women rated socioeconomic status as more important in a prospective marriage partner than did the average man.
Gilles Saint-Paul (2008) proposes a mathematical model that purports to demonstrate that human female hypergamy occurs because women have greater lost mating opportunity costs from monogamous mating (given their slower reproductive rate and limited window of fertility compared to men), and thus must be compensated for this cost of marriage. By this argument, marriage reduces the overall genetic quality of her offspring by precluding the possibility of impregnation by a genetically higher-quality male, albeit without his parental investment, but this reduction may be compensated by greater levels of parental investment by her genetically lower-quality husband. At the end of his introduction, Saint-Paul states his model is consistent with statistics published by Bertrand et al. (2013), but also notes that in US Bureau of Labor Statistics (BLS) data gathered the same year "aggregate evidence is not so clear-cut."
An empirical study examined the mate preferences of subscribers to a computer dating service in Israel that had a highly skewed sex ratio (646 men for 1,000 women). Despite this skewed sex ratio, they found that "On education and socioeconomic status, women on average express greater hypergamic selectivity; they prefer mates who are superior to them in these traits... while men express a desire for an analogue of hypergamy based on physical attractiveness; they desire a mate who ranks higher on the physical attractiveness scale than they themselves do."
One study did not find a statistical difference in the number of women or men "marrying-up" in a sample of 1,109 first-time married couples in the United States.
Another study found traditional marriage practices in which men "marry down" in education do not persist for long once women have the educational advantage.
Additional studies of mate selection in dozens of countries around the world have found men and women report prioritizing different traits when it comes to choosing a mate, with both groups favoring attractive partners in general, but men tending to prefer women who are young while women tend to prefer men who are rich, well educated, and ambitious. They argue that as societies shift towards becoming more gender-equal, women's mate selection preferences shift as well. Some research supports that theory, including a 2012 analysis of a survey of 8,953 people in 37 countries, which found that the more gender-equal a country, the likelier male and female respondents were to report seeking the same qualities in each other rather than different ones.
In a 2016 paper that explored the income difference between couples in 1980 and 2012, researcher Yue Qian noted that the tendency for women to marry men with higher incomes than themselves still persists in the modern era.
Prevalence
It is becoming less common for women to marry older men, because current socioeconomic dynamics allow women more autonomy. Hypergamy does not necessitate the man being older; rather, it requires him to have higher status. The term 'social equals' typically pertains to shared social circles rather than economic equality.
See also
Dating
Dating preferences
Eligible bachelor
Erotic capital
Evolutionary psychology
Exogamy
Men's rights movement#Female privilege
Gold digging
Mating system
Polygamy
Polygyny threshold model
Resource acquisition ability
Sexual selection
Social psychology
Social status
Socioeconomics
Trophy wife
Utilitarianism
Notes
References
External links
Dating
Evolutionary psychology
Mating systems
Morganatic marriage
Sexual selection
Human sexual activity
Human sexual activity, human sexual practice or human sexual behaviour is the manner in which humans experience and express their sexuality. People engage in a variety of sexual acts, ranging from activities done alone (e.g., masturbation) to acts with another person (e.g., sexual intercourse, non-penetrative sex, oral sex, etc.) in varying patterns of frequency, for a wide variety of reasons. Sexual activity usually results in sexual arousal and physiological changes in the aroused person, some of which are pronounced while others are more subtle. Sexual activity may also include conduct and activities which are intended to arouse the sexual interest of another or enhance the sex life of another, such as strategies to find or attract partners (courtship and display behaviour), or personal interactions between individuals (for instance, foreplay or BDSM). Sexual activity may follow sexual arousal.
Human sexual activity has sociological, cognitive, emotional, behavioural and biological aspects. It involves personal bonding, sharing emotions, the physiology of the reproductive system, sex drive, sexual intercourse, and sexual behaviour in all its forms.
In some cultures, sexual activity is considered acceptable only within marriage, while premarital and extramarital sex are taboo. Some sexual activities are illegal either universally or in some countries or subnational jurisdictions, while some are considered contrary to the norms of certain societies or cultures. Two examples that are criminal offences in most jurisdictions are sexual assault and sexual activity with a person below the local age of consent.
Types
Sexual activity can be classified in a number of ways. The practices may be preceded by or consist solely of foreplay. Acts involving one person (autoeroticism) may include sexual fantasy or masturbation. If two people are involved, they may engage in vaginal sex, anal sex, oral sex or manual sex. Penetrative sex between two people may be described as sexual intercourse, but definitions vary. If there are more than two participants in a sex act, it may be referred to as group sex. Autoerotic sexual activity can involve use of dildos, vibrators, butt plugs, and other sex toys, though these devices can also be used with a partner.
Sexual activity can be classified into the gender and sexual orientation of the participants, as well as by the relationship of the participants. The relationships can be ones of marriage, intimate partners, casual sex partners or anonymous. Sexual activity can be regarded as conventional or as alternative, involving, for example, fetishism or BDSM activities.
Fetishism can take many forms, including the desire for certain body parts (partialism) such as breasts, navels, or feet. The object of desire can be shoes, boots, lingerie, clothing, leather or rubber items. Some non-conventional autoerotic practices can be dangerous. These include autoerotic asphyxiation and self-bondage. The potential for injury or even death that exists while engaging in the partnered versions of these fetishes (choking and bondage, respectively) becomes drastically increased in the autoerotic case due to the isolation and lack of assistance in the event of a problem.
Sexual activity that is consensual is sexual activity in which both or all participants agree to take part and are of the age that they can consent. If sexual activity takes place under force or duress, it is considered rape or another form of sexual assault. In different cultures and countries, various sexual activities may be lawful or illegal in regards to the age, gender, marital status or other factors of the participants, or otherwise contrary to social norms or generally accepted sexual morals.
Mating strategies
In evolutionary psychology and behavioral ecology, human mating strategies are a set of behaviors used by individuals to attract, select, and retain mates. Mating strategies overlap with reproductive strategies, which encompass a broader set of behaviors involving the timing of reproduction and the trade-off between quantity and quality of offspring (see life history theory).
Relative to other animals, human mating strategies are unique in their relationship with cultural variables such as the institution of marriage. Humans may seek out individuals with the intention of forming a long-term intimate relationship, marriage, casual relationship, or friendship. The human desire for companionship is one of the strongest human drives. It is an innate feature of human nature, and may be related to the sex drive. The human mating process encompasses the social and cultural processes whereby one person may meet another to assess suitability, the courtship process and the process of forming an interpersonal relationship. Commonalities, however, can be found between humans and nonhuman animals in mating behavior.
Stages of physiological arousal during sexual stimulation
The physiological responses during sexual stimulation are fairly similar for both men and women and there are four phases.
During the excitement phase, muscle tension and blood flow increase in and around the sexual organs, heart and respiration increase and blood pressure rises. Men and women experience a "sex flush" on the skin of the upper body and face. For women, the vagina becomes lubricated and the clitoris engorges. For men, the penis becomes erect.
During the plateau phase, heart rate and muscle tension increase further. A man's urinary bladder closes to prevent urine from mixing with semen. A woman's clitoris may withdraw slightly and there is more lubrication, outer swelling and muscles tighten and reduction of diameter.
During the orgasm phase, breathing becomes extremely rapid and the pelvic muscles begin a series of rhythmic contractions. Both men and women experience quick cycles of muscle contraction of lower pelvic muscles and women often experience uterine and vaginal contractions; this experience can be described as intensely pleasurable, but roughly 15% of women never experience orgasm, and half report having faked it. A large genetic component is associated with how often women experience orgasm.
During the resolution phase, muscles relax, blood pressure drops, and the body returns to its resting state. Though generally reported that women do not experience a refractory period and thus can experience an additional orgasm, or multiple orgasms soon after the first, some sources state that both men and women experience a refractory period because women may also experience a period after orgasm in which further sexual stimulation does not produce excitement. This period may last from minutes to days and is typically longer for men than women.
Sexual dysfunction is the inability to react emotionally or physically to sexual stimulation in a way expected of the average healthy person; it can affect different stages in the sexual response cycle, which are desire, excitement and orgasm. In the media, sexual dysfunction is often associated with men, but in actuality, it is more commonly observed in females (43 percent) than males (31 percent).
Psychological aspects
Sexual activity can lower blood pressure and overall stress levels. It serves to release tension, elevate mood, and possibly create a profound sense of relaxation, especially in the postcoital period. From a biochemical perspective, sex causes the release of oxytocin and endorphins and boosts the immune system.
Motivations
People engage in sexual activity for any of a multitude of possible reasons. Although the primary evolutionary purpose of sexual activity is reproduction, research on college students suggested that people have sex for four general reasons: physical attraction, as a means to an end, to increase emotional connection, and to alleviate insecurity.
Most people engage in sexual activity because of pleasure they derive from the arousal of their sexuality, especially if they can achieve orgasm. Sexual arousal can also be experienced from foreplay and flirting, and from fetish or BDSM activities, or other erotic activities. Most commonly, people engage in sexual activity because of the sexual desire generated by a person to whom they feel sexual attraction; but they may engage in sexual activity for the physical satisfaction they achieve in the absence of attraction for another, as in the case of casual or social sex. At times, a person may engage in a sexual activity solely for the sexual pleasure of their partner, such as because of an obligation they may have to the partner or because of love, sympathy or pity they may feel for the partner.
A person may engage in sexual activity for purely monetary considerations, or to obtain some advantage from either the partner or the activity. A man and woman may engage in sexual intercourse with the objective of conception. Some people engage in hate sex which occurs between two people who strongly dislike or annoy each other. It is related to the idea that opposition between two people can heighten sexual tension, attraction and interest.
Self-determination theory
Research has found that people also engage in sexual activity for reasons associated with self-determination theory. The self-determination theory can be applied to a sexual relationship when the participants have positive feelings associated with the relationship. These participants do not feel guilty or coerced into the partnership. Researchers have proposed the model of self-determined sexual motivation. The purpose of this model is to connect self-determination and sexual motivation. This model has helped to explain how people are sexually motivated when involved in self-determined dating relationships. This model also links the positive outcomes, (satisfying the need for autonomy, competence, and relatedness) gained from sexual motivations.
According to the completed research associated with this model, it was found that people of both sexes who engaged in sexual activity for self-determined motivation had more positive psychological well-being. While engaging in sexual activity for self-determined reasons, the participants also had a higher need for fulfillment. When this need was satisfied, they felt better about themselves. This was correlated with greater closeness to their partner and higher overall satisfaction in their relationship. Though both sexes engaged in sexual activity for self-determined reasons, there were some differences found between males and females. It was concluded that females had more motivation than males to engage in sexual activity for self-determined reasons. Females also had higher satisfaction and relationship quality than males did from the sexual activity. Overall, research concluded that psychological well-being, sexual motivation, and sexual satisfaction were all positively correlated when dating couples partook in sexual activity for self-determined reasons.
Frequency
The frequency of sexual activity might range from zero to 15 or 20 times a week. Frequency of intercourse tends to decline with age. Some post-menopausal women experience decline in frequency of sexual intercourse, while others do not. According to the Kinsey Institute, the average frequency of sexual intercourse in the US for individuals with partners is 112 times per year (age 18–29), 86 times per year (age 30–39), and 69 times per year (age 40–49). The rate of sexual activity has been declining in the 21st century, a phenomenon that has been described as a sex recession.
Adolescents
The age at which adolescents become sexually active varies considerably between different cultures and times. (See Prevalence of virginity.) The first sexual act of a child or adolescent is sometimes referred to as the sexualization of the child, and may be considered a milestone or a change of status, as the loss of virginity or innocence. Youth are legally free to have intercourse after they reach the age of consent.
A 1999 survey of students indicated that approximately 40% of ninth graders across the United States report having had sexual intercourse. This figure rises with each grade. Males are more sexually active than females at each of the grade levels surveyed. Sexual activity of young adolescents differs in ethnicity as well. A higher percentage of African American and Hispanic adolescents are more sexually active than white adolescents.
Research on sexual frequency has also been conducted solely on female adolescents who engage in sexual activity. Female adolescents tended to engage in more sexual activity due to positive mood. In female teenagers, engaging in sexual activity was directly positively correlated with being older, greater sexual activity in the previous week or prior day, and more positive mood the previous day or the same day as the sexual activity occurred. Decreased sexual activity was associated with prior or same day negative mood or menstruation.
Although opinions differ, researchers suggest that sexual activity is an essential part of humans, and that teenagers need to experience sex. According to a study, sexual experiences help teenagers understand pleasure and satisfaction. In relation to hedonic and eudaimonic well-being, it stated that teenagers can positively benefit from sexual activity. The cross-sectional study was conducted in 2008 and 2009 at a rural upstate New York community. Teenagers who had their first sexual experience at age 16 revealed a higher well-being than those who were sexually inexperienced or who became sexually active at age 17. Furthermore, teenagers who had their first sexual experience at age 15 or younger, or who had many sexual partners were not negatively affected and did not have associated lower well-being.
Health and safety
Sexual activity is an innately physiological function, but like other physical activity, it comes with risks. There are four main types of risks that may arise from sexual activity: unwanted pregnancy, contracting a sexually transmitted infection (STI), physical injury, and psychological injury.
Unwanted pregnancy
Any sexual activity that involves the introduction of semen into a woman's vagina, such as during sexual intercourse, or contact of semen with her vulva, may result in a pregnancy. To reduce the risk of unintended pregnancies, some people who engage in penile-vaginal sex may use contraception, such as birth control pills, a condom, diaphragms, spermicides, hormonal contraception or sterilization. The effectiveness of the various contraceptive methods in avoiding pregnancy varies considerably, and depends both on the method used and on how correctly and consistently it is used.
Sexually transmitted infections
Sexual activity that involves skin-to-skin contact, exposure to an infected person's bodily fluids or mucous membranes carries the risk of contracting a sexually transmitted infection. People may not be able to detect that their sexual partner has one or more STIs, for example if they are asymptomatic (show no symptoms). The risk of STIs can be reduced by safe sex practices, such as using condoms. Both partners may opt to be tested for STIs before engaging in sex. The exchange of body fluids is not necessary to contract an infestation of crab lice. Crab lice typically are found attached to hair in the pubic area but sometimes are found on coarse hair elsewhere on the body (for example, eyebrows, eyelashes, beard, mustache, chest, armpits, etc.). Pubic lice infestations (phthiriasis) are spread through direct contact with someone who is infested with the louse.
Some STIs like HIV/AIDS can also be contracted by using IV drug needles after their use by an infected person, as well as through childbirth or breastfeeding.
Aging
Factors such as biological and psychological factors, diseases, mental conditions, boredom with the relationship, and widowhood have been found to contribute to a decrease in sexual interest and activity in old age, but older age does not eliminate the ability to enjoy sexual activity.
Orientations and society
Heterosexuality
Heterosexuality is the romantic or sexual attraction to the opposite sex. Heterosexual practices are institutionally privileged in most countries. In some countries, mostly those where religion has a strong influence on social policy, marriage laws serve the purpose of encouraging people to have sex only within marriage. Sodomy laws have been used to discourage same-sex sexual practices, but they may also affect opposite-sex sexual practices. Laws also ban adults from committing sexual abuse, committing sexual acts with anyone under an age of consent, performing sexual activities in public, and engaging in sexual activities for money (prostitution). Though these laws cover both same-sex and opposite-sex sexual activities, they may differ in regard to punishment, and may be more frequently (or exclusively) enforced on those who engage in same-sex sexual activities.
Different-sex sexual practices may be monogamous, serially monogamous, or polyamorous, and, depending on the definition of sexual practice, abstinent or autoerotic (including masturbation). Additionally, different religious and political movements have tried to influence or control changes in sexual practices including courting and marriage, though in most countries changes occur at a slow rate.
Homosexuality
Homosexuality is the romantic or sexual attraction to the same sex. People with a homosexual orientation can express their sexuality in a variety of ways, and may or may not express it in their behaviors. Research indicates that many gay men and lesbians want, and succeed in having, committed and durable relationships. For example, survey data indicate that between 40% and 60% of gay men and between 45% and 80% of lesbians are currently involved in a romantic relationship.
It is possible for a person whose sexual identity is mainly heterosexual to engage in sexual acts with people of the same sex. Gay and lesbian people who pretend to be heterosexual are often referred to as being closeted (hiding their sexuality in "the closet"). "Closet case" is a derogatory term used to refer to people who hide their sexuality. Making that orientation public can be called "coming out of the closet" in the case of voluntary disclosure or "outing" in the case of disclosure by others against the subject's wishes (or without their knowledge). Among some communities (called "men on the DL" or "down-low"), same-sex sexual behavior is sometimes viewed as solely for physical pleasure. Men who have sex with men, as well as women who have sex with women, or men on the "down-low" may engage in sex acts with members of the same sex while continuing sexual and romantic relationships with the opposite sex.
People who engage exclusively in same-sex sexual practices may not identify themselves as gay or lesbian. In sex-segregated environments, individuals may seek relationships with others of their own gender (known as situational homosexuality). In other cases, some people may experiment or explore their sexuality with same (or different) sex sexual activity before defining their sexual identity. Despite stereotypes and common misconceptions, there are no forms of sexual acts exclusive to same-sex sexual behavior that cannot also be found in opposite-sex sexual behavior, except those involving the meeting of the genitalia between same-sex partners – tribadism (generally vulva-to-vulva rubbing) and frot (generally penis-to-penis rubbing).
Bisexuality and pansexuality
People who have a romantic or sexual attraction to both sexes are referred to as bisexual. People who have a distinct but not exclusive preference for one sex/gender over the other may also identify themselves as bisexual. Like gay and lesbian individuals, bisexual people who pretend to be heterosexual are often referred to as being closeted.
Pansexuality (also referred to as omnisexuality) may or may not be subsumed under bisexuality, with some sources stating that bisexuality encompasses sexual or romantic attraction to all gender identities. Pansexuality is characterized by the potential for aesthetic attraction, romantic love, or sexual desire towards people without regard for their gender identity or biological sex. Some pansexuals suggest that they are gender-blind; that gender and sex are insignificant or irrelevant in determining whether they will be sexually attracted to others. As defined in the Oxford English Dictionary, pansexuality "encompasses all kinds of sexuality; not limited or inhibited in sexual choice with regards to gender or practice".
Avoidance of inbreeding
Although the main adaptive function of human sexual activity is reproduction, human sexual activity also includes the adaptive constraint of avoiding close inbreeding, since inbreeding can have deleterious effects on progeny. Charles Darwin, who was married to his first cousin Emma Wedgwood, considered that the ill health that plagued his family was a consequence of inbreeding. In general, inbreeding between individuals who are closely genetically related leads to the expression of deleterious recessive mutations. The avoidance of inbreeding as a constraint on human sexual activity is apparent in the near universal cultural inhibitions in human societies of sexual activity between closely related individuals. Human outcrossing sexual activity provides the adaptive benefit of the masking of expression of deleterious recessive mutations.
Other social aspects
General attitudes
Alex Comfort and others propose three potential social aspects of sexual intercourse in humans, which are not mutually exclusive: reproductive, relational, and recreational. The development of the contraceptive pill and other highly effective forms of contraception in the mid- and late 20th century has increased people's ability to segregate these three functions, which still overlap a great deal and in complex patterns. For example: A fertile couple may have intercourse while using contraception to experience sexual pleasure (recreational) and also as a means of emotional intimacy (relational), thus deepening their bonding, making their relationship more stable and more capable of sustaining children in the future (deferred reproductive). This same couple may emphasize different aspects of intercourse on different occasions, being playful during one episode of intercourse (recreational), experiencing deep emotional connection on another occasion (relational), and later, after discontinuing contraception, seeking to achieve pregnancy (reproductive, or more likely reproductive and relational).
Religious and ethical
Human sexual activity is generally influenced by social rules that are culturally specific and vary widely.
Sexual ethics, morals, and norms relate to issues including deception/honesty, legality, fidelity and consent. Some activities, known as sex crimes in some locations, are illegal in some jurisdictions, including those conducted between (or among) consenting and competent adults (examples include sodomy law and adult-adult incest).
Some people who are in a relationship but want to hide polygamous activity (possibly of opposite sexual orientation) from their partner may solicit consensual sexual activity with others through personal contacts, online chat rooms, or advertising in select media.
Swinging involves singles or partners in a committed relationship engaging in sexual activities with others as a recreational or social activity. The increasing popularity of swinging is regarded by some as arising from the upsurge in sexual activity during the sexual revolution of the 1960s.
Some people engage in various sexual activities as a business transaction. When this involves having sex with, or performing certain actual sexual acts for another person in exchange for money or something of value, it is called prostitution. Other aspects of the adult industry include phone sex operators, strip clubs, and pornography.
Gender roles and the expression of sexuality
Social gender roles can influence sexual behavior as well as the reaction of individuals and communities to certain incidents; the World Health Organization states that, "Sexual violence is also more likely to occur where beliefs in male sexual entitlement are strong, where gender roles are more rigid, and in countries experiencing high rates of other types of violence." Some societies, such as those where the concepts of family honor and female chastity are very strong, may practice violent control of female sexuality, through practices such as honor killings and female genital mutilation.
The relation between gender equality and sexual expression is recognized, and promotion of equity between men and women is crucial for attaining sexual and reproductive health, as stated by the UN International Conference on Population and Development Program of Action:
"Human sexuality and gender relations are closely interrelated and together affect the ability of men and women to achieve and maintain sexual health and manage their reproductive lives. Equal relationships between men and women in matters of sexual relations and reproduction, including full respect for the physical integrity of the human body, require mutual respect and willingness to accept responsibility for the consequences of sexual behaviour. Responsible sexual behaviour, sensitivity and equity in gender relations, particularly when instilled during the formative years, enhance and promote respectful and harmonious partnerships between men and women."
BDSM
BDSM is a variety of erotic practices or roleplaying involving bondage, dominance and submission, sadomasochism, and other interpersonal dynamics. Given the wide range of practices, some of which may be engaged in by people who do not consider themselves to be practicing BDSM, inclusion in the BDSM community or subculture usually depends on self-identification and shared experience. BDSM communities generally welcome anyone with a non-normative streak who identifies with the community; this may include cross-dressers, extreme body modification enthusiasts, animal players, latex or rubber aficionados, and others.
B/D (bondage and discipline) is a part of BDSM. Bondage includes the restraint of the body or mind. D/s means "Dominant and submissive". A Dominant is one who takes control of a person who wishes to surrender control and a submissive is one who surrenders control to a person who wishes to take control. S/M (sadism and masochism) is the other part of BDSM. A sadist is an individual who takes pleasure in the pain or humiliation of others and a masochist is an individual who takes pleasure from their own pain or humiliation.
Unlike the usual "power neutral" relationships and play styles commonly followed by couples, activities and relationships within a BDSM context are often characterized by the participants' taking on complementary, but unequal roles; thus, the idea of informed consent of both the partners becomes essential. Participants who exert dominance (sexual or otherwise) over their partners are known as Dominants or Tops, while participants who take the passive, receiving, or obedient role are known as submissives or bottoms.
These terms are sometimes shortened so that a dominant person may be referred to as a "Dom" (a woman may choose to use the feminine "Domme") and a submissive may be referred to as a "sub". Individuals who can change between Top/Dominant and bottom/submissive roles – whether from relationship to relationship or within a given relationship – are known as switches. The precise definition of roles and self-identification is a common subject of debate within the community.
In a 2013 study, researchers stated that BDSM is a form of sexual activity in which participants engage in role play, restraint, power exchange, and suppression, with pain sometimes involved depending on the individuals. The study serves to challenge the widespread notion that BDSM could be in some way linked to psychopathology. According to the findings, one who participates in BDSM may have greater strength socially and mentally as well as greater independence than those who do not practice BDSM. It suggests that people who participate in BDSM play have higher subjective well-being, and that this might be due to the fact that BDSM play requires extensive communication. Before any act occurs, the partners must discuss their agreement of their relationship. They discuss how long the play will last, the intensity, their actions, what each participant needs or desires, and what, if any, sexual activities may be included. All acts must be consensual and pleasurable to both parties.
In a 2015 study, interviewed BDSM participants have mentioned that the activities have helped to create higher levels of connection, intimacy, trust and communication between partners. The study suggests that Dominants and submissives exchange control for each other's pleasure and to satisfy a need. The participants have remarked that they enjoy pleasing their partner in any way they can and many surveyed have felt that this is one of the best things about BDSM. It gives a submissive pleasure to do things in general for their Dominant while a Dominant enjoys making their encounters all about their submissive and enjoy doing things that makes their submissive happy. The findings indicate that the surveyed submissives and Dominants found BDSM makes play more pleasurable and fun. The participants have also mentioned improvements in their personal growth, romantic relationships, sense of community and self, the dominant's confidence, and their coping with everyday things by giving them a psychological release.
Legal issues
There are many laws and social customs which prohibit, or in some way affect sexual activities. These laws and customs vary from country to country, and have varied over time. They cover, for example, a prohibition to non-consensual sex, to sex outside marriage, to sexual activity in public, besides many others. Many of these restrictions are non-controversial, but some have been the subject of public debate.
Most societies consider it a serious crime to force someone to engage in sexual acts or to engage in sexual activity with someone who does not consent. This is called sexual assault, and if sexual penetration occurs it is called rape, the most serious kind of sexual assault. The details of this distinction may vary among different legal jurisdictions. Also, what constitutes effective consent in sexual matters varies from culture to culture and is frequently debated. Laws regulating the minimum age at which a person can consent to have sex (age of consent) are frequently the subject of debate, as is adolescent sexual behavior in general. Some societies have forced marriage, where consent may not be required.
Same-sex laws
Many locales have laws that limit or prohibit same-sex sexual activity.
Sex outside marriage
In the West, sex before marriage is not illegal. There are social taboos and many religions condemn pre-marital sex. In many Muslim countries, such as Saudi Arabia, Pakistan, Afghanistan, Iran, Kuwait, Maldives, Morocco, Oman, Mauritania, United Arab Emirates, Sudan, and Yemen, any form of sexual activity outside marriage is illegal. Those found guilty, especially women, may be forced to wed the sexual partner, may be publicly beaten, or may be stoned to death. In many African and native tribes, sexual activity is not viewed as a privilege or right of a married couple, but rather as the unification of bodies and is thus not frowned upon.
Other studies have analyzed the changing attitudes about sex that American adolescents have outside marriage. Adolescents were asked how they felt about oral and vaginal sex in relation to their health, social, and emotional well-being. Overall, teenagers felt that oral sex was viewed as more socially positive amongst their demographic. Results stated that teenagers believed that oral sex for dating and non-dating adolescents was less threatening to their overall values and beliefs than vaginal sex was. When asked, teenagers who participated in the research viewed oral sex as more acceptable to their peers, and their personal values than vaginal sex.
Minimum age of sexual activity (age of consent)
The laws of each jurisdiction set the minimum age at which a young person is allowed to engage in sexual activity. This age of consent is typically between 14 and 18 years, but laws vary. In many jurisdictions, age of consent is a person's mental or functional age. As a result, those above the set age of consent may still be considered unable to legally consent due to mental immaturity. Many jurisdictions regard any sexual activity by an adult involving a child as child sexual abuse.
Age of consent may vary by the type of sexual act, the sex of the actors, or other restrictions such as abuse of a position of trust. Some jurisdictions also make allowances for young people engaged in sexual acts with each other.
Incestuous relationships
Most jurisdictions prohibit sexual activity between certain close relatives. These laws vary to some extent; such acts are called incestuous.
Incest laws may involve restrictions on marriage rights, which also vary between jurisdictions. When incest involves an adult and a child, it is considered to be a form of child sexual abuse.
Sexual abuse
Non-consensual sexual activity or subjecting an unwilling person to witnessing a sexual activity are forms of sexual abuse, as well as (in many countries) certain non-consensual paraphilias such as frotteurism, telephone scatophilia (indecent phonecalls), and non-consensual exhibitionism and voyeurism (known as "indecent exposure" and "peeping tom" respectively).
Prostitution and survival sex
People sometimes exchange sex for money or access to other resources. Work takes place under many varied circumstances. The person who receives payment for sexual services is known as a prostitute and the person who receives such services is referred to by a multitude of terms, such as being a client. Prostitution is one of the branches of the sex industry. The legal status of prostitution varies from country to country, from being a punishable crime to a regulated profession. Estimates place the annual revenue generated from the global prostitution industry to be over $100 billion. Prostitution is sometimes referred to as "the world's oldest profession". Prostitution may be a voluntary individual activity or facilitated or forced by pimps.
Survival sex is a form of prostitution engaged in by people in need, usually when homeless or otherwise disadvantaged people trade sex for food, a place to sleep, or other basic needs, or for drugs. The term is used by sex trade and poverty researchers and aid workers.
See also
Child sexuality
Erotic plasticity
History of human sexuality
Human female sexuality
Human male sexuality
Mechanics of human sexuality
Orgasm control
Orgastic potency
Sociosexual orientation
Transgender sexuality
References
Further reading
Durex Global Sex Survey 2005 (PDF) at data360.org
Intimate relationships
Evolutionary psychology
Solastalgia
Solastalgia is a neologism formed by combining the Latin words sōlācium (solace or comfort) and sōlus (desolation), with meanings connected to devastation, deprivation of comfort, abandonment and loneliness, and the Greek root -algia (pain, suffering, grief); it describes a form of emotional or existential distress caused by negatively perceived environmental change. A distinction can be made between solastalgia as the lived experience of negatively perceived change in the present and eco-anxiety linked to worry or concern about what may happen in the future (associated with "pre-traumatic stress", in reference to post-traumatic stress).
Origins
The concept of solastalgia was coined by philosopher Glenn Albrecht in 2003 and then published in the 2005 article 'Solastalgia: a new concept in human health and identity'. He describes it as "the homesickness you have when you are still at home" and your home environment is changing in ways you find distressing. In many cases this is in reference to global climate change, but more localized events such as volcanic eruptions, drought or destructive mining techniques can cause solastalgia as well. Differing from nostalgic distress on being absent from home, solastalgia refers to the distress specifically caused by environmental change while still in a home environment.
More recent approaches have connected solastalgia to the experience of historic heritage threatened by the climate crisis, such as the ancient cities of Venice, Amsterdam, and Hoi An.
Effects
A paper published by Albrecht et al. in 2007 focused on two contexts: the experiences of persistent drought in rural New South Wales (NSW) and the impact of large-scale open-cut coal mining on individuals in the Upper Hunter Valley of NSW. In both cases, people exposed to environmental change had negative reactions brought about by a sense of powerlessness over the unfolding environmental changes. A community's loss of certainty in a once-predictable environment is common among groups that express solastalgia.
In 2015, an article in the medical journal The Lancet included solastalgia as a contributing concept to the impact of climate change on human health and well-being. A review of the solastalgia literature surveyed 15 years of scholarly work on how the concept relates to climate change, how it has been measured, and how it affects people's mental health.
A temporal component of solastalgia has also been highlighted: scientists have demonstrated a link between one's experience of unwelcome environmental change and increased anticipation of changes still to come in one's environment, which is in turn associated with greater reported symptoms of anxiety, PTSD, and anger.
Research has indicated that solastalgia can have an adaptive function when it leads people to seek comfort collectively. Like other climate-related emotions, when it is worked through collectively in conversations that allow feelings to be expressed and reflective capacity to grow, it can lead to resilience and growth.
Contexts
Employment
Hedda Haugen Askland outlines how distress arises when a community lacks social and political interaction with wider society, which in turn affects the community's experience of environmental change. Societies whose livelihoods are not closely tied to their environment are less likely to express solastalgia, while societies that are closely tied to their environments are more susceptible. Groups that depend heavily upon agroecosystems are considered particularly vulnerable. There are many examples of this across Africa, where agrarian communities have lost vital resources due to environmental changes. This has resulted in an increase in the number of environmental refugees throughout Africa in recent years.
Wealth
Solastalgia tends to affect wealthier populations less. A study conducted in the western United States showed that higher-income families experienced the effects of solastalgia significantly less than their lower-income neighbors following a destructive wildfire. This is due to the flexibility wealth can provide. In this case, wealthy families were able to move from or rebuild their homes, reducing the uncertainty caused by the wildfire. Other studies have supported the existence of solastalgia in Appalachian communities affected by mountain-top removal coal mining practices. Communities located in close proximity to coal mining sites experienced significantly higher depression rates than those located farther from the sites.
In music
American death metal band Cattle Decapitation released a song titled 'Solastalgia', accompanied by an official music video.
In 2018, Australian pop rock musician Missy Higgins released an album titled Solastalgia.
See also
Ecophobia
Ecopsychology
Environmental psychology
Paradise, California
References
Environment and society
Neologisms
Nostalgia
Environmental impact by effect
Climate change
In common usage, climate change describes global warming—the ongoing increase in global average temperature—and its effects on Earth's climate system. Climate change in a broader sense also includes previous long-term changes to Earth's climate. The current rise in global average temperature is primarily caused by humans burning fossil fuels since the Industrial Revolution. Fossil fuel use, deforestation, and some agricultural and industrial practices add to greenhouse gases. These gases absorb some of the heat that the Earth radiates after it warms from sunlight, warming the lower atmosphere. Carbon dioxide, the primary greenhouse gas driving global warming, has increased in concentration by about 50% and is at levels unseen for millions of years.
Climate change has an increasingly large impact on the environment. Deserts are expanding, while heat waves and wildfires are becoming more common. Amplified warming in the Arctic has contributed to thawing permafrost, retreat of glaciers and sea ice decline. Higher temperatures are also causing more intense storms, droughts, and other weather extremes. Rapid environmental change in mountains, coral reefs, and the Arctic is forcing many species to relocate or become extinct. Even if efforts to minimize future warming are successful, some effects will continue for centuries. These include ocean heating, ocean acidification and sea level rise.
Climate change threatens people with increased flooding, extreme heat, increased food and water scarcity, more disease, and economic loss. Human migration and conflict can also be a result. The World Health Organization calls climate change one of the biggest threats to global health in the 21st century. Societies and ecosystems will experience more severe risks without action to limit warming. Adapting to climate change through efforts like flood control measures or drought-resistant crops partially reduces climate change risks, although some limits to adaptation have already been reached. Poorer communities are responsible for a small share of global emissions, yet have the least ability to adapt and are most vulnerable to climate change.
Many climate change impacts have been felt in recent years, with 2023 the warmest year on record since regular tracking began in 1850. Additional warming will increase these impacts and can trigger tipping points, such as melting all of the Greenland ice sheet. Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well under 2 °C". However, with the pledges made under the Agreement, global warming would still reach about 2.7 °C by the end of the century. Limiting warming to 1.5 °C would require halving emissions by 2030 and achieving net-zero emissions by 2050.
Fossil fuel use can be phased out by conserving energy and switching to energy sources that do not produce significant carbon pollution. These energy sources include wind, solar, hydro, and nuclear power. Cleanly generated electricity can replace fossil fuels for powering transportation, heating buildings, and running industrial processes. Carbon can also be removed from the atmosphere, for instance by increasing forest cover and farming with methods that capture carbon in soil.
Terminology
Before the 1980s it was unclear whether the warming effect of increased greenhouse gases was stronger than the cooling effect of airborne particulates in air pollution. Scientists used the term inadvertent climate modification to refer to human impacts on the climate at this time. In the 1980s, the terms global warming and climate change became more common, often being used interchangeably. Scientifically, global warming refers only to increased surface warming, while climate change describes both global warming and its effects on Earth's climate system, such as precipitation changes.
Climate change can also be used more broadly to include changes to the climate that have happened throughout Earth's history. Global warming—used as early as 1975—became the more popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate. Since the 2000s, the term climate change has seen increasing use. Various scientists, politicians and media outlets may use the terms climate crisis or climate emergency to talk about climate change, and may use the term global heating instead of global warming.
Global temperature rise
Temperature records prior to global warming
Over the last few million years, human beings evolved in a climate that cycled through ice ages, with global average temperature ranging between 1 °C warmer and 5–6 °C colder than current levels. One of the hotter periods was the Last Interglacial between 115,000 and 130,000 years ago, when sea levels were 6 to 9 metres higher than today. The most recent glacial maximum, 20,000 years ago, had sea levels substantially lower than today.
Temperatures stabilized in the current interglacial period beginning 11,700 years ago. Historical patterns of warming and cooling, like the Medieval Warm Period and the Little Ice Age, did not occur at the same time across different regions. Temperatures may have reached as high as those of the late 20th century in a limited set of regions. Climate information for that period comes from climate proxies, such as trees and ice cores.
Warming since the Industrial Revolution
Around 1850 thermometer records began to provide global coverage.
Between the 18th century and 1970 there was little net warming, as the warming impact of greenhouse gas emissions was offset by cooling from sulfur dioxide emissions. Sulfur dioxide causes acid rain, but it also produces sulfate aerosols in the atmosphere, which reflect sunlight and cause so-called global dimming. After 1970, the increasing accumulation of greenhouse gases and controls on sulfur pollution led to a marked increase in temperature.
Ongoing changes in climate have had no precedent for several thousand years. Multiple independent datasets all show worldwide increases in surface temperature, at a rate of around 0.2 °C per decade. The 2013–2022 decade warmed to an average 1.15 °C [1.00–1.25 °C] compared to the pre-industrial baseline (1850–1900). Not every single year was warmer than the last: internal climate variability processes can make any year 0.2 °C warmer or colder than the average. From 1998 to 2013, negative phases of two such processes, Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) caused a so-called "global warming hiatus". After the hiatus, the opposite occurred, with years like 2023 exhibiting temperatures well above even the recent average. This is why the temperature change is defined in terms of a 20-year average, which reduces the noise of hot and cold years and decadal climate patterns, and detects the long-term signal.
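To illustrate why a multi-decade average is used, the short Python sketch below applies a 20-year mean to synthetic annual anomalies (the trend and noise values are assumptions for illustration, not real observations): individual years swing by a few tenths of a degree, while the 20-year averages change slowly and reveal the underlying trend.

import random

random.seed(0)
years = list(range(1990, 2024))
# Assumed warming trend of 0.02 °C per year plus up to ±0.2 °C of year-to-year noise.
anomalies = [0.02 * (y - 1990) + random.uniform(-0.2, 0.2) for y in years]

def rolling_mean(values, window=20):
    # Average each consecutive block of 'window' values.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# The smoothed series varies far less than the raw annual values.
print([round(v, 2) for v in rolling_mean(anomalies)])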
A wide range of other observations reinforce the evidence of warming. The upper atmosphere is cooling, because greenhouse gases are trapping heat near the Earth's surface, and so less heat is radiating into space. Warming reduces average snow cover and forces the retreat of glaciers. At the same time, warming also causes greater evaporation from the oceans, leading to more atmospheric humidity, more and heavier precipitation. Plants are flowering earlier in spring, and thousands of animal species have been permanently moving to cooler areas.
Differences by region
Different regions of the world warm at different rates. The pattern is independent of where greenhouse gases are emitted, because the gases persist long enough to diffuse across the planet. Since the pre-industrial period, the average surface temperature over land regions has increased almost twice as fast as the global average surface temperature. This is because oceans lose more heat by evaporation and oceans can store a lot of heat. The thermal energy in the global climate system has grown with only brief pauses since at least 1970, and over 90% of this extra energy has been stored in the ocean. The rest has heated the atmosphere, melted ice, and warmed the continents.
The Northern Hemisphere and the North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more seasonal snow cover and sea ice. As these surfaces flip from reflecting a lot of light to being dark after the ice has melted, they start absorbing more heat. Local black carbon deposits on snow and ice also contribute to Arctic warming. Arctic surface temperatures are increasing between three and four times faster than in the rest of the world. Melting of ice sheets near the poles weakens both the Atlantic and the Antarctic limb of thermohaline circulation, which further changes the distribution of heat and precipitation around the globe.
Future global temperatures
The World Meteorological Organization estimates a 66% chance of global temperatures exceeding 1.5 °C warming from the preindustrial baseline for at least one year between 2023 and 2027. Because the IPCC uses a 20-year average to define global temperature changes, a single year exceeding 1.5 °C does not break the limit.
The IPCC expects the 20-year average global temperature to exceed +1.5 °C in the early 2030s. The IPCC Sixth Assessment Report (2023) included projections that by 2100 global warming is very likely to reach 1.0–1.8 °C under a scenario with very low emissions of greenhouse gases, 2.1–3.5 °C under an intermediate emissions scenario, or 3.3–5.7 °C under a very high emissions scenario. The warming will continue past 2100 in the intermediate and high emission scenarios, with future projections of global surface temperatures by year 2300 being similar to millions of years ago.
The remaining carbon budget for staying beneath certain temperature increases is determined by modelling the carbon cycle and climate sensitivity to greenhouse gases. According to the IPCC, global warming can be kept below 1.5 °C with a two-thirds chance if emissions after 2018 do not exceed 420 or 570 gigatonnes of CO2. This corresponds to 10 to 13 years of current emissions. There are high uncertainties about the budget. For instance, it may be 100 gigatonnes of CO2 equivalent smaller due to CO2 and methane release from permafrost and wetlands. However, it is clear that fossil fuel resources need to be proactively kept in the ground to prevent substantial warming; otherwise, shortages of those fuels would not occur until emissions have already locked in significant long-term impacts.
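As a rough check of the "10 to 13 years" figure, the sketch below divides the stated budgets by an assumed annual emission rate of about 42 gigatonnes of CO2 per year (an illustrative round number, not a figure from this article):

budgets_gt = [420, 570]      # remaining carbon budgets from the text, in gigatonnes of CO2
annual_emissions_gt = 42     # assumed annual global CO2 emissions, for illustration only

for budget in budgets_gt:
    # 420/42 ≈ 10 years; 570/42 ≈ 13.6 years, consistent with the range given in the text.
    print(budget, "Gt CO2 lasts about", round(budget / annual_emissions_gt, 1), "years at that rate")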
Causes of recent global temperature rise
The climate system experiences various cycles on its own which can last for years, decades or even centuries. For example, El Niño events cause short-term spikes in surface temperature while La Niña events cause short term cooling. Their relative frequency can affect global temperature trends on a decadal timescale. Other changes are caused by an imbalance of energy from external forcings. Examples of these include changes in the concentrations of greenhouse gases, solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.
To determine the human contribution to climate change, unique "fingerprints" for all potential causes are developed and compared with both observed patterns and known internal climate variability. For example, solar forcing—whose fingerprint involves warming the entire atmosphere—is ruled out because only the lower atmosphere has warmed. Atmospheric aerosols produce a smaller, cooling effect. Other drivers, such as changes in albedo, are less impactful.
Greenhouse gases
Greenhouse gases are transparent to sunlight, and thus allow it to pass through the atmosphere to heat the Earth's surface. The Earth radiates it as heat, and greenhouse gases absorb a portion of it. This absorption slows the rate at which heat escapes into space, trapping heat near the Earth's surface and warming it over time.
While water vapour (≈50%) and clouds (≈25%) are the biggest contributors to the greenhouse effect, they primarily change as a function of temperature and are therefore mostly considered to be feedbacks that change climate sensitivity. On the other hand, concentrations of gases such as CO2 (≈20%), tropospheric ozone, CFCs and nitrous oxide are added or removed independently from temperature, and are therefore considered to be external forcings that change global temperatures.
Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be about 33 °C warmer than it would have been in their absence. Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels (coal, oil, and natural gas), has increased the amount of greenhouse gases in the atmosphere, resulting in a radiative imbalance. In 2019, the concentrations of CO2 and methane had increased by about 48% and 160%, respectively, since 1750. These CO2 levels are higher than they have been at any time during the last 2 million years. Concentrations of methane are far higher than they were over the last 800,000 years.
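A commonly used simplified expression approximates the extra radiative forcing from a CO2 change as 5.35 times the natural logarithm of the concentration ratio, in watts per square metre. The coefficient and the resulting figure below are textbook approximations rather than values from this article; they are shown only to give a sense of the scale implied by the roughly 48% rise.

import math

def co2_forcing(concentration_ratio, coefficient=5.35):
    # Simplified logarithmic approximation for CO2 radiative forcing, in W/m^2.
    return coefficient * math.log(concentration_ratio)

# A 48% increase corresponds to a concentration ratio of about 1.48.
print(round(co2_forcing(1.48), 2), "W/m^2")   # roughly 2.1 W/m^2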
Global anthropogenic greenhouse gas emissions in 2019 were equivalent to 59 billion tonnes of CO2. Of these emissions, 75% was CO2, 18% was methane, 4% was nitrous oxide, and 2% was fluorinated gases. CO2 emissions primarily come from burning fossil fuels to provide energy for transport, manufacturing, heating, and electricity. Additional CO2 emissions come from deforestation and industrial processes, which include the CO2 released by the chemical reactions for making cement, steel, aluminum, and fertilizer. Methane emissions come from livestock, manure, rice cultivation, landfills, wastewater, and coal mining, as well as oil and gas extraction. Nitrous oxide emissions largely come from the microbial decomposition of fertilizer.
While methane only lasts in the atmosphere for an average of 12 years, CO2 lasts much longer. The Earth's surface absorbs CO2 as part of the carbon cycle. While plants on land and in the ocean absorb most excess emissions of CO2 every year, that CO2 is returned to the atmosphere when biological matter is digested, burns, or decays. Land-surface carbon sink processes, such as carbon fixation in the soil and photosynthesis, remove about 29% of annual global CO2 emissions. The ocean has absorbed 20 to 30% of emitted CO2 over the last two decades. CO2 is only removed from the atmosphere for the long term when it is stored in the Earth's crust, a process that can take millions of years to complete.
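To make these sink fractions concrete, the sketch below partitions one year of emissions using the shares given above (29% for the land sink and the midpoint of the 20 to 30% ocean range); the annual emission total is an assumed round figure for illustration only.

annual_co2_emissions_gt = 40.0   # assumed round figure, gigatonnes of CO2 per year
land_fraction = 0.29             # land sink share stated in the text
ocean_fraction = 0.25            # midpoint of the 20-30% ocean share stated in the text

land_uptake = annual_co2_emissions_gt * land_fraction
ocean_uptake = annual_co2_emissions_gt * ocean_fraction
airborne = annual_co2_emissions_gt - land_uptake - ocean_uptake

print("Land sink:", land_uptake, "Gt")
print("Ocean sink:", ocean_uptake, "Gt")
print("Remaining in the atmosphere:", airborne, "Gt")   # roughly half of the emitted total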
Land surface changes
According to the Food and Agriculture Organization, around 30% of Earth's land area is largely unusable for humans (glaciers, deserts, etc.), 26% is forests, 10% is shrubland and 34% is agricultural land. Deforestation is the main land use change contributor to global warming, as the destroyed trees release CO2 and are not replaced by new trees, removing that carbon sink. Between 2001 and 2018, 27% of deforestation was from permanent clearing to enable agricultural expansion for crops and livestock. Another 24% has been lost to temporary clearing under shifting cultivation agricultural systems. 26% was due to logging for wood and derived products, and wildfires accounted for the remaining 23%. Some forests have not been fully cleared, but were already degraded by these impacts. Restoring these forests also recovers their potential as a carbon sink.
Local vegetation cover impacts how much of the sunlight gets reflected back into space (albedo), and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also affect temperatures by modifying the release of chemical compounds that influence clouds, and by changing wind patterns. In tropic and temperate areas the net effect is to produce significant warming, and forest restoration can make local temperatures cooler. At latitudes closer to the poles, there is a cooling effect as forest is replaced by snow-covered (and more reflective) plains. Globally, these increases in surface albedo have been the dominant direct influence on temperature from land use change. Thus, land use change to date is estimated to have a slight cooling effect.
Other factors
Aerosols and clouds
Air pollution, in the form of aerosols, affects the climate on a large scale. Aerosols scatter and absorb solar radiation. From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed. This phenomenon is popularly known as global dimming, and is primarily attributed to sulfate aerosols produced by the combustion of fossil fuels with heavy sulfur concentrations like coal and bunker fuel. Smaller contributions come from black carbon, organic carbon from combustion of fossil fuels and biofuels, and from anthropogenic dust. Globally, aerosols have been declining since 1990 due to pollution controls, meaning that they no longer mask greenhouse gas warming as much.
Aerosols also have indirect effects on the Earth's energy budget. Sulfate aerosols act as cloud condensation nuclei and lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets. They also reduce the growth of raindrops, which makes clouds more reflective to incoming sunlight. Indirect effects of aerosols are the largest uncertainty in radiative forcing.
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea-level rise. Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C by 2050. The effect of decreasing sulfur content of fuel oil for ships since 2020 is estimated to cause an additional 0.05 °C increase in global mean temperature by 2050.
Solar and volcanic activity
As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system. Solar irradiance has been measured directly by satellites, and indirect measurements are available from the early 1600s onwards. Since 1880, there has been no upward trend in the amount of the Sun's energy reaching the Earth, in contrast to the warming of the lower atmosphere (the troposphere). The upper atmosphere (the stratosphere) would also be warming if the Sun was sending more energy to Earth, but instead, it has been cooling. This is consistent with greenhouse gases preventing heat from leaving the Earth's atmosphere.
Explosive volcanic eruptions can release gases, dust and ash that partially block sunlight and reduce temperatures, or they can send water vapour into the atmosphere, which adds to greenhouse gases and increases temperatures. These impacts on temperature only last for several years, because both water vapour and volcanic material have low persistence in the atmosphere. Volcanic CO2 emissions are more persistent, but they are equivalent to less than 1% of current human-caused CO2 emissions. Volcanic activity still represents the single largest natural impact (forcing) on temperature in the industrial era. Yet, like the other natural forcings, it has had negligible impacts on global temperature trends since the Industrial Revolution.
Climate change feedbacks
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds. The primary balancing mechanism is radiative cooling, as Earth's surface gives off more heat to space in response to rising temperature. In addition to temperature feedbacks, there are feedbacks in the carbon cycle, such as the fertilizing effect of CO2 on plant growth. Feedbacks are expected to trend in a positive direction as greenhouse gas emissions continue, raising climate sensitivity.
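A common textbook simplification expresses the effect of reinforcing feedbacks as a gain: the total warming equals the warming without feedbacks divided by one minus the combined feedback factor. The numbers below are assumed, illustrative values rather than results from this article; they only show how larger positive feedback factors produce disproportionately larger total warming.

def amplified_warming(base_warming_c, feedback_factor):
    # Textbook gain relation: total warming = base warming / (1 - f), valid for 0 <= f < 1.
    return base_warming_c / (1.0 - feedback_factor)

base = 1.2   # assumed no-feedback warming for a doubling of CO2, in °C (illustrative)
for f in (0.0, 0.3, 0.5):
    print("feedback factor", f, "gives about", round(amplified_warming(base, f), 1), "°C")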
Radiative feedbacks are physical processes that influence the rate of global warming in response to warming. For instance, warmer air can hold more moisture, and water vapour itself is a potent greenhouse gas. Warmer air can also result in clouds becoming higher and thinner, where they act as an insulator and warm the planet. Another major feedback is the reduction of snow cover and sea ice in the Arctic, which reduces the reflectivity of the Earth's surface there and contributes to the amplification of Arctic temperature changes. Arctic amplification is also thawing permafrost, which releases methane and CO2 into the atmosphere.
Around half of human-caused CO2 emissions have been absorbed by land plants and by the oceans. This fraction is not static: if future emissions decrease, the Earth will be able to absorb up to around 70%. If they increase substantially, it will still absorb more carbon than now, but the overall fraction will decrease to below 40%. This is because climate change increases droughts and heat waves that eventually inhibit plant growth on land, and soils will release more carbon from dead plants when they are warmer. The rate at which oceans absorb atmospheric carbon will be lowered as they become more acidic and experience changes in thermohaline circulation and phytoplankton distribution. Uncertainty over feedbacks, particularly cloud cover, is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.
Modelling
A climate model is a representation of the physical, chemical and biological processes that affect the climate system. Models include natural processes like changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing. Models are used to estimate the degree of warming future emissions will cause when accounting for the strength of climate feedbacks. Models also predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.
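As a very rough illustration of the physics such models encode, the sketch below integrates a minimal zero-dimensional energy-balance equation; all parameter values are illustrative assumptions, and real climate models are vastly more detailed.

# Minimal energy-balance sketch: heat_capacity * dT/dt = forcing - feedback_param * T
heat_capacity = 8.0      # effective heat capacity in W·yr/(m^2·K), assumed
feedback_param = 1.3     # climate feedback parameter in W/(m^2·K), assumed
forcing = 2.1            # constant radiative forcing in W/m^2, assumed

dt = 0.1                 # time step in years
temperature = 0.0        # global mean temperature anomaly in °C
for _ in range(int(200 / dt)):
    temperature += dt * (forcing - feedback_param * temperature) / heat_capacity

# The anomaly approaches the equilibrium value forcing / feedback_param, about 1.6 °C here.
print(round(temperature, 2))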
The physical realism of models is tested by examining their ability to simulate current or past climates. Past models have underestimated the rate of Arctic shrinkage and underestimated the rate of precipitation increase. Sea level rise since 1990 was underestimated in older models, but more recent models agree well with observations. The 2017 National Climate Assessment, published by the United States, notes that "climate models may still be underestimating or missing relevant feedback processes". Additionally, climate models may be unable to adequately predict short-term regional climatic shifts.
A subset of climate models add societal factors to a physical climate model. These models simulate how population, economic growth, and energy use affect—and interact with—the physical climate. With this information, these models can produce scenarios of future greenhouse gas emissions. This is then used as input for physical climate models and carbon cycle models to predict how atmospheric concentrations of greenhouse gases might change. Depending on the socioeconomic scenario and the mitigation scenario, models produce atmospheric concentrations that range widely between 380 and 1400 ppm.
Impacts
Environmental effects
The environmental effects of climate change are broad and far-reaching, affecting oceans, ice, and weather. Changes may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, from modelling, and from modern observations. Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency. Extremely wet or dry events within the monsoon period have increased in India and East Asia. Monsoonal precipitation over the Northern Hemisphere has increased since 1980. The rainfall rate and intensity of hurricanes and typhoons are likely increasing, and their geographic range is likely expanding poleward in response to climate warming. The frequency of tropical cyclones has not increased as a result of climate change.
Global sea level is rising as a consequence of thermal expansion and the melting of glaciers and ice sheets. Between 1993 and 2020, the rise increased over time, averaging 3.3 ± 0.3 mm per year. Over the 21st century, the IPCC projects 32–62 cm of sea level rise under a low emission scenario, 44–76 cm under an intermediate one and 65–101 cm under a very high emission scenario. Marine ice sheet instability processes in Antarctica may add substantially to these values, including the possibility of a 2-meter sea level rise by 2100 under high emissions.
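For a sense of scale, extrapolating the recent average rate linearly over an assumed 80-year span gives a figure below even the low-emission projection, because the projections account for the rate continuing to increase; the time span is an assumption for illustration.

rate_mm_per_year = 3.3   # average 1993-2020 rate from the text
years_ahead = 80         # assumed span from roughly 2020 to 2100, for illustration

print(rate_mm_per_year * years_ahead / 10, "cm")   # about 26 cm if the rate stayed constant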
Climate change has led to decades of shrinking and thinning of the Arctic sea ice. While ice-free summers are expected to be rare at 1.5 °C of warming, they are set to occur once every three to ten years at a warming level of 2 °C. Higher atmospheric CO2 concentrations cause more CO2 to dissolve in the oceans, which is making them more acidic. Because oxygen is less soluble in warmer water, its concentrations in the ocean are decreasing, and dead zones are expanding.
Tipping points and long-term impacts
Greater degrees of global warming increase the risk of passing through 'tipping points'—thresholds beyond which certain major impacts can no longer be avoided even if temperatures return to their previous state. For instance, the Greenland ice sheet is already melting, but if global warming reaches levels between 1.7 °C and 2.3 °C, its melting will continue until it fully disappears. If the warming is later reduced to 1.5 °C or less, it will still lose a lot more ice than if the warming was never allowed to reach the threshold in the first place. While the ice sheets would melt over millennia, other tipping points would occur faster and give societies less time to respond. The collapse of major ocean currents like the Atlantic meridional overturning circulation (AMOC), and irreversible damage to key ecosystems like the Amazon rainforest and coral reefs can unfold in a matter of decades.
The long-term effects of climate change on oceans include further ice melt, ocean warming, sea level rise, ocean acidification and ocean deoxygenation. The timescale of long-term impacts is centuries to millennia due to the long atmospheric lifetime of CO2. When net CO2 emissions stabilize, surface air temperatures will also stabilize, but oceans and ice caps will continue to absorb excess heat from the atmosphere, resulting in substantial further sea level rise over the following 2000 years. Oceanic CO2 uptake is slow enough that ocean acidification will also continue for hundreds to thousands of years. The deep ocean is also already committed to losing over 10% of its dissolved oxygen due to the warming which has occurred to date. Further, the West Antarctic ice sheet appears committed to practically irreversible melting, which would raise sea levels by several metres over approximately 2000 years.
Nature and wildlife
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes. For instance, the range of hundreds of North American birds has shifted northward at an average rate of 1.5 km/year over the past 55 years. Higher atmospheric CO2 levels and an extended growing season have resulted in global greening. However, heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear. A related phenomenon driven by climate change is woody plant encroachment, affecting up to 500 million hectares globally. Climate change has contributed to the expansion of drier climate zones, such as the expansion of deserts in the subtropics. The size and speed of global warming is making abrupt changes in ecosystems more likely. Overall, it is expected that climate change will result in the extinction of many species.
The oceans have heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles faster than species on land. Just as on land, heat waves in the ocean occur more frequently due to climate change, harming a wide range of organisms such as corals, kelp, and seabirds. Ocean acidification makes it harder for marine calcifying organisms such as mussels, barnacles and corals to produce shells and skeletons; and heatwaves have bleached coral reefs. Harmful algal blooms enhanced by climate change and eutrophication lower oxygen levels, disrupt food webs and cause great loss of marine life. Coastal ecosystems are under particular stress. Almost half of global wetlands have disappeared due to climate change and other human impacts. Plants have come under increased stress from damage by insects.
Humans
The effects of climate change are impacting humans everywhere in the world. Impacts can be observed on all continents and ocean regions, with low-latitude, less developed areas facing the greatest risk. Continued warming has potentially "severe, pervasive and irreversible impacts" for people and ecosystems. The risks are unevenly distributed, but are generally greater for disadvantaged people in developing and developed countries.
Health and food
The World Health Organization calls climate change one of the biggest threats to global health in the 21st century. Scientists have warned about the irreversible harms it poses. Extreme weather events affect public health, and food and water security. Temperature extremes lead to increased illness and death. Climate change increases the intensity and frequency of extreme weather events. It can affect transmission of infectious diseases, such as dengue fever and malaria. According to the World Economic Forum, 14.5 million more deaths are expected due to climate change by 2050. 30% of the global population currently live in areas where extreme heat and humidity are already associated with excess deaths. By 2100, 50% to 75% of the global population would live in such areas.
While total crop yields have been increasing in the past 50 years due to agricultural improvements, climate change has already decreased the rate of yield growth. Fisheries have been negatively affected in multiple regions. While agricultural productivity has been positively affected in some high-latitude areas, mid- and low-latitude areas have been negatively affected. According to the World Economic Forum, an increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050 and stunting in children. With 2 °C warming, global livestock headcounts could decline by 7–10% by 2050, as less animal feed will be available. If emissions continue to increase for the rest of the century, then over 9 million climate-related deaths would occur annually by 2100.
Livelihoods and inequality
Economic damages due to climate change may be severe and there is a chance of disastrous consequences. Severe impacts are expected in South-East Asia and sub-Saharan Africa, where most of the local inhabitants are dependent upon natural and agricultural resources. Heat stress can prevent outdoor labourers from working. If warming reaches 4 °C then labour capacity in those regions could be reduced by 30 to 50%. The World Bank estimates that between 2016 and 2030, climate change could drive over 120 million people into extreme poverty without adaptation.
Inequalities based on wealth and social status have worsened due to climate change. Major difficulties in mitigating, adapting to, and recovering from climate shocks are faced by marginalized people who have less control over resources. Indigenous people, who depend on their land and ecosystems for subsistence, will face threats to their well-being and ways of life due to climate change. An expert elicitation concluded that the role of climate change in armed conflict has been small compared to factors such as socio-economic inequality and state capabilities.
While women are not inherently more at risk from climate change and shocks, limits on women's resources and discriminatory gender norms constrain their adaptive capacity and resilience. For example, women's work burdens, including hours worked in agriculture, tend to decline less than men's during climate shocks such as heat stress.
Climate migration
Low-lying islands and coastal communities are threatened by sea level rise, which makes urban flooding more common. Sometimes, land is permanently lost to the sea. This could lead to statelessness for people in island nations, such as the Maldives and Tuvalu. In some regions, the rise in temperature and humidity may be too severe for humans to adapt to. With worst-case climate change, models project that almost one-third of humanity might live in Sahara-like uninhabitable and extremely hot climates.
These factors can drive climate or environmental migration, within and between countries. More people are expected to be displaced because of sea level rise, extreme weather and conflict from increased competition over natural resources. Climate change may also increase vulnerability, leading to "trapped populations" who are not able to move due to a lack of resources.
Reducing and recapturing emissions
Climate change can be mitigated by reducing the rate at which greenhouse gases are emitted into the atmosphere, and by increasing the rate at which carbon dioxide is removed from the atmosphere. In order to limit global warming to less than 1.5 °C, global greenhouse gas emissions need to be net-zero by 2050, or by 2070 with a 2 °C target. This requires far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry.
The United Nations Environment Programme estimates that countries need to triple their pledges under the Paris Agreement within the next decade to limit global warming to 2 °C. An even greater level of reduction is required to meet the 1.5 °C goal. With pledges made under the Paris Agreement as of October 2021, global warming would still have a 66% chance of reaching about 2.7 °C (range: 2.2–3.2 °C) by the end of the century. Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs.
Although there is no single pathway to limit global warming to 1.5 or 2 °C, most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions. To reduce pressures on ecosystems and enhance their carbon sequestration capabilities, changes would also be necessary in agriculture and forestry, such as preventing deforestation and restoring natural ecosystems by reforestation.
Other approaches to mitigating climate change have a higher level of risk. Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century. There are concerns, though, about over-reliance on these technologies, and environmental impacts. Solar radiation modification (SRM) is also a possible supplement to deep reductions in emissions. However, SRM raises significant ethical and legal concerns, and the risks are imperfectly understood.
Clean energy
Renewable energy is key to limiting climate change. For decades, fossil fuels have accounted for roughly 80% of the world's energy use. The remaining share has been split between nuclear power and renewables (including hydropower, bioenergy, wind and solar power and geothermal energy). Fossil fuel use is expected to peak in absolute terms prior to 2030 and then to decline, with coal use experiencing the sharpest reductions. Renewables represented 86% of all new electricity generation installed in 2023. Other forms of clean energy, such as nuclear and hydropower, currently have a larger share of the energy supply. However, their future growth forecasts appear limited in comparison.
While solar panels and onshore wind are now among the cheapest forms of adding new power generation capacity in many locations, green energy policies are needed to achieve a rapid transition from fossil fuels to renewables. To achieve carbon neutrality by 2050, renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. Investment in coal would be eliminated and coal use nearly phased out by 2050.
Electricity generated from renewable sources would also need to become the main energy source for heating and transport. Transport can switch away from internal combustion engine vehicles and towards electric vehicles, public transit, and active transport (cycling and walking). For shipping and flying, low-carbon fuels would reduce emissions. Heating could be increasingly decarbonized with technologies like heat pumps.
There are obstacles to the continued rapid growth of clean energy, including renewables. For wind and solar, there are environmental and land use concerns for new projects. Wind and solar also produce energy intermittently and with seasonal variability. Traditionally, hydro dams with reservoirs and conventional power plants have been used when variable energy production is low. Going forward, battery storage can be expanded, energy demand and supply can be matched, and long-distance transmission can smooth variability of renewable outputs. Bioenergy is often not carbon-neutral and may have negative consequences for food security. The growth of nuclear power is constrained by controversy around radioactive waste, nuclear weapon proliferation, and accidents. Hydropower growth is limited by the fact that the best sites have been developed, and new projects are confronting increased social and environmental concerns.
Low-carbon energy improves human health by minimizing climate change as well as reducing air pollution deaths, which were estimated at 7 million annually in 2016. Meeting the Paris Agreement goals that limit warming to a 2 °C increase could save about a million of those lives per year by 2050, whereas limiting global warming to 1.5 °C could save millions and simultaneously increase energy security and reduce poverty. Improving air quality also has economic benefits which may be larger than mitigation costs.
Energy conservation
Reducing energy demand is another major aspect of reducing emissions. If less energy is needed, there is more flexibility for clean energy development. It also makes it easier to manage the electricity grid, and minimizes carbon-intensive infrastructure development. Major increases in energy efficiency investment will be required to achieve climate goals, comparable to the level of investment in renewable energy. Several COVID-19 related changes in energy use patterns, energy efficiency investments, and funding have made forecasts for this decade more difficult and uncertain.
Strategies to reduce energy demand vary by sector. In the transport sector, passengers and freight can switch to more efficient travel modes, such as buses and trains, or use electric vehicles. Industrial strategies to reduce energy demand include improving heating systems and motors, designing less energy-intensive products, and increasing product lifetimes. In the building sector the focus is on better design of new buildings, and higher levels of energy efficiency in retrofitting. The use of technologies like heat pumps can also increase building energy efficiency.
Agriculture and industry
Agriculture and forestry face a triple challenge of limiting greenhouse gas emissions, preventing the further conversion of forests to agricultural land, and meeting increases in world food demand. A set of actions could reduce agriculture and forestry-based emissions by two thirds from 2010 levels. These include reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing greenhouse gas emissions from agricultural production.
On the demand side, a key component of reducing emissions is shifting people towards plant-based diets. Eliminating the production of livestock for meat and dairy would eliminate about three-quarters of all emissions from agriculture and other land use. Livestock also occupy 37% of the ice-free land area on Earth and consume feed from the 12% of land area used for crops, driving deforestation and land degradation.
Steel and cement production are responsible for about 13% of industrial emissions. In these industries, carbon-intensive materials such as coke and lime play an integral role in the production, so that reducing emissions requires research into alternative chemistries.
Carbon sequestration
Natural carbon sinks can be enhanced to sequester significantly larger amounts of CO2 beyond naturally occurring levels. Reforestation and afforestation (planting forests where there were none before) are among the most mature sequestration techniques, although the latter raises food security concerns. Farmers can promote sequestration of carbon in soils through practices such as use of winter cover crops, reducing the intensity and frequency of tillage, and using compost and manure as soil amendments. Forest and landscape restoration yields many benefits for the climate, including greenhouse gas emissions sequestration and reduction. Restoration/recreation of coastal wetlands, prairie plots and seagrass meadows increases the uptake of carbon into organic matter. When carbon is sequestered in soils and in organic matter such as trees, there is a risk of the carbon being re-released into the atmosphere later through changes in land use, fire, or other changes in ecosystems.
Where energy production or CO2-intensive heavy industries continue to produce waste CO2, the gas can be captured and stored instead of released to the atmosphere. Although its current use is limited in scale and expensive, carbon capture and storage (CCS) may be able to play a significant role in limiting emissions by mid-century. This technique, in combination with bioenergy (BECCS), can result in net negative emissions as CO2 is drawn from the atmosphere. It remains highly uncertain whether carbon dioxide removal techniques will be able to play a large role in limiting warming to 1.5 °C. Policy decisions that rely on carbon dioxide removal increase the risk of global warming rising beyond international goals.
Adaptation
Adaptation is "the process of adjustment to current or expected changes in climate and its effects". Without additional mitigation, adaptation cannot avert the risk of "severe, widespread and irreversible" impacts. More severe climate change requires more transformative adaptation, which can be prohibitively expensive. The capacity and potential for humans to adapt is unevenly distributed across different regions and populations, and developing countries generally have less. The first two decades of the 21st century saw an increase in adaptive capacity in most low- and middle-income countries with improved access to basic sanitation and electricity, but progress is slow. Many countries have implemented adaptation policies. However, there is a considerable gap between necessary and available finance.
Adaptation to sea level rise consists of avoiding at-risk areas, learning to live with increased flooding, and building flood controls. If that fails, managed retreat may be needed. There are economic barriers for tackling dangerous heat impact. Avoiding strenuous work or having air conditioning is not possible for everybody. In agriculture, adaptation options include a switch to more sustainable diets, diversification, erosion control, and genetic improvements for increased tolerance to a changing climate. Insurance allows for risk-sharing, but is often difficult to get for people on lower incomes. Education, migration and early warning systems can reduce climate vulnerability. Planting mangroves or encouraging other coastal vegetation can buffer storms.
Ecosystems adapt to climate change, a process that can be supported by human intervention. By increasing connectivity between ecosystems, species can migrate to more favourable climate conditions. Species can also be introduced to areas acquiring a favourable climate. Protection and restoration of natural and semi-natural areas helps build resilience, making it easier for ecosystems to adapt. Many of the actions that promote adaptation in ecosystems, also help humans adapt via ecosystem-based adaptation. For instance, restoration of natural fire regimes makes catastrophic fires less likely, and reduces human exposure. Giving rivers more space allows for more water storage in the natural system, reducing flood risk. Restored forest acts as a carbon sink, but planting trees in unsuitable regions can exacerbate climate impacts.
There are synergies but also trade-offs between adaptation and mitigation. An example for synergy is increased food productivity, which has large benefits for both adaptation and mitigation. An example of a trade-off is that increased use of air conditioning allows people to better cope with heat, but increases energy demand. Another trade-off example is that more compact urban development may reduce emissions from transport and construction, but may also increase the urban heat island effect, exposing people to heat-related health risks.
Policies and politics
Countries that are most vulnerable to climate change have typically been responsible for a small share of global emissions. This raises questions about justice and fairness. Limiting global warming makes it much easier to achieve the UN's Sustainable Development Goals, such as eradicating poverty and reducing inequalities. The connection is recognized in Sustainable Development Goal 13 which is to "take urgent action to combat climate change and its impacts". The goals on food, clean water and ecosystem protection have synergies with climate mitigation.
The geopolitics of climate change is complex. It has often been framed as a free-rider problem, in which all countries benefit from mitigation done by other countries, but individual countries would lose from switching to a low-carbon economy themselves. Sometimes mitigation also has localized benefits though. For instance, the benefits of a coal phase-out to public health and local environments exceed the costs in almost all regions. Furthermore, net importers of fossil fuels win economically from switching to clean energy, causing net exporters to face stranded assets: fossil fuels they cannot sell.
Policy options
A wide range of policies, regulations, and laws are being used to reduce emissions. As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions. Carbon can be priced with carbon taxes and emissions trading systems. Direct global fossil fuel subsidies reached $319 billion in 2017, and $5.2 trillion when indirect costs such as air pollution are priced in. Ending these can cause a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Money saved on fossil subsidies could be used to support the transition to clean energy instead. More direct methods to reduce greenhouse gases include vehicle efficiency standards, renewable fuel standards, and air pollution regulations on heavy industry. Several countries require utilities to increase the share of renewables in power production.
Climate justice
Policy designed through the lens of climate justice tries to address human rights issues and social inequality. According to proponents of climate justice, the costs of climate adaptation should be paid by those most responsible for climate change, while the beneficiaries of payments should be those suffering impacts. One way this can be addressed in practice is to have wealthy nations pay poorer countries to adapt.
Oxfam found that in 2023 the wealthiest 10% of people were responsible for 50% of global emissions, while the bottom 50% were responsible for just 8%. Production of emissions is another way to look at responsibility: under that approach, the top 21 fossil fuel companies would owe cumulative climate reparations of $5.4 trillion over the period 2025–2050. To achieve a just transition, people working in the fossil fuel sector would also need other jobs, and their communities would need investments.
International climate agreements
Nearly all countries in the world are parties to the 1994 United Nations Framework Convention on Climate Change (UNFCCC). The goal of the UNFCCC is to prevent dangerous human interference with the climate system. As stated in the convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained. The UNFCCC does not itself restrict emissions but rather provides a framework for protocols that do. Global emissions have risen since the UNFCCC was signed. Its yearly conferences are the stage of global negotiations.
The 1997 Kyoto Protocol extended the UNFCCC and included legally binding commitments for most developed countries to limit their emissions. During the negotiations, the G77 (representing developing countries) pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions, since developed countries contributed most to the accumulation of greenhouse gases in the atmosphere. Per-capita emissions were also still relatively low in developing countries and developing countries would need to emit more to meet their development needs.
The 2009 Copenhagen Accord has been widely portrayed as disappointing because of its low goals, and was rejected by poorer nations including the G77. Associated parties aimed to limit the global temperature rise to below 2 °C. The Accord set the goal of sending $100 billion per year to developing countries for mitigation and adaptation by 2020, and proposed the founding of the Green Climate Fund. By 2020, only $83.3 billion of that support had been delivered, and the target was expected to be achieved only in 2023.
In 2015 all UN countries negotiated the Paris Agreement, which aims to keep global warming well below 2.0 °C and contains an aspirational goal of keeping warming under 1.5 °C. The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets were set in the Paris Agreement. Instead, a set of procedures was made binding. Countries have to regularly set ever more ambitious goals and reevaluate these goals every five years. The Paris Agreement restated that developing countries must be financially supported. 194 states and the European Union have signed the treaty, and 191 states and the EU have ratified or acceded to the agreement.
The 1987 Montreal Protocol, an international agreement to stop emitting ozone-depleting gases, may have been more effective at curbing greenhouse gas emissions than the Kyoto Protocol specifically designed to do so. The 2016 Kigali Amendment to the Montreal Protocol aims to reduce the emissions of hydrofluorocarbons, a group of powerful greenhouse gases which served as a replacement for banned ozone-depleting gases. This made the Montreal Protocol a stronger agreement against climate change.
National responses
In 2019, the United Kingdom parliament became the first national government to declare a climate emergency. Other countries and jurisdictions followed suit. That same year, the European Parliament declared a "climate and environmental emergency". The European Commission presented its European Green Deal with the goal of making the EU carbon-neutral by 2050. In 2021, the European Commission released its "Fit for 55" legislation package, which contains guidelines for the car industry; all new cars on the European market must be zero-emission vehicles from 2035.
Major countries in Asia have made similar pledges: South Korea and Japan have committed to become carbon-neutral by 2050, and China by 2060. While India has strong incentives for renewables, it also plans a significant expansion of coal in the country. Vietnam is among very few coal-dependent, fast-developing countries that pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter.
As of 2021, based on information from 48 national climate plans, which represent 40% of the parties to the Paris Agreement, estimated total greenhouse gas emissions will be 0.5% lower than 2010 levels, far short of the 45% or 25% reductions needed to limit global warming to 1.5 °C or 2 °C, respectively.
Society
Denial and misinformation
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. Climate change denial has originated from fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists. Like the tobacco industry, the main strategy of these groups has been to manufacture doubt about climate-change related scientific data and results. People who hold unwarranted doubt about climate change are called climate change "skeptics", although "contrarians" or "deniers" are more appropriate terms.
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimize the negative impacts of climate change. Manufacturing uncertainty about the science later developed into a manufactured controversy: creating the belief that there is significant uncertainty about climate change within the scientific community in order to delay policy changes. Strategies to promote these ideas include criticism of scientific institutions, and questioning the motives of individual scientists. An echo chamber of climate-denying blogs and media has further fomented misunderstanding of climate change.
Public awareness and opinion
Climate change came to international public attention in the late 1980s. Due to media coverage in the early 1990s, people often confused climate change with other environmental issues like ozone depletion. In popular culture, the climate fiction movie The Day After Tomorrow (2004) and the Al Gore documentary An Inconvenient Truth (2006) focused on climate change.
Significant regional, gender, age and political differences exist in both public concern for, and understanding of, climate change. More highly educated people, and in some countries, women and younger people, were more likely to see climate change as a serious threat. College biology textbooks from the 2010s featured less content on climate change compared to those from the preceding decade, with decreasing emphasis on solutions. Partisan gaps also exist in many countries, and countries with high CO2 emissions tend to be less concerned. Views on causes of climate change vary widely between countries. Concern has increased over time, to the point where in 2021 a majority of citizens in many countries express a high level of worry about climate change, or view it as a global emergency. Higher levels of worry are associated with stronger public support for policies that address climate change.
Climate movement
Climate protests demand that political leaders take action to prevent climate change. They can take the form of public demonstrations, fossil fuel divestment, lawsuits and other activities. Prominent demonstrations include the School Strike for Climate. In this initiative, young people across the globe have been protesting since 2018 by skipping school on Fridays, inspired by Swedish teenager Greta Thunberg. Groups like Extinction Rebellion have protested through mass civil disobedience, disrupting roads and public transport.
Litigation is increasingly used as a tool to strengthen climate action from public institutions and companies. Activists also initiate lawsuits which target governments and demand that they take ambitious action or enforce existing laws on climate change. Lawsuits against fossil-fuel companies generally seek compensation for loss and damage.
History
Early discoveries
Scientists in the 19th century such as Alexander von Humboldt began to foresee the effects of climate change. In the 1820s, Joseph Fourier proposed the greenhouse effect to explain why Earth's temperature was higher than the Sun's energy alone could explain. Earth's atmosphere is transparent to sunlight, so sunlight reaches the surface where it is converted to heat. However, the atmosphere is not transparent to heat radiating from the surface, and captures some of that heat, which in turn warms the planet.
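A simple energy-balance estimate, a standard textbook idealisation rather than anything from Fourier's own work, makes the puzzle concrete. Balancing absorbed sunlight against blackbody emission gives an effective emission temperature of

T_\mathrm{eff} = \left(\frac{S_0(1-A)}{4\sigma}\right)^{1/4} \approx \left(\frac{1361 \times 0.7}{4 \times 5.67\times 10^{-8}}\right)^{1/4}\,\mathrm{K} \approx 255\,\mathrm{K},

where S_0 is the solar constant, A ≈ 0.3 the planetary albedo and σ the Stefan–Boltzmann constant. This is roughly 33 K colder than the observed global mean surface temperature of about 288 K; the gap is the warming supplied by the natural greenhouse effect.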
In 1856 Eunice Newton Foote demonstrated that the warming effect of the Sun is greater for air with water vapour than for dry air, and that the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..."
Starting in 1859, John Tyndall established that nitrogen and oxygen—together totalling 99% of dry air—are transparent to radiated heat. However, water vapour and gases such as methane and carbon dioxide absorb radiated heat and re-radiate that heat into the atmosphere. Tyndall proposed that changes in the concentrations of these gases may have caused climatic changes in the past, including ice ages.
Svante Arrhenius noted that water vapour in air continuously varied, but the carbon dioxide concentration in air was influenced by long-term geological processes. Warming from increased CO2 levels would increase the amount of water vapour, amplifying warming in a positive feedback loop. In 1896, he published the first climate model of its kind, projecting that halving CO2 levels could have produced a drop in temperature initiating an ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C. Other scientists were initially sceptical and believed that the greenhouse effect was saturated so that adding more CO2 would make no difference, and that the climate would be self-regulating. Beginning in 1938, Guy Stewart Callendar published evidence that climate was warming and CO2 levels were rising, but his calculations met the same objections.
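Arrhenius's "per doubling" framing survives in the modern, approximately logarithmic relationship between CO2 concentration and radiative forcing. A commonly used simplified expression (the coefficient and the sensitivity factor λ are later empirical values, not Arrhenius's own numbers) is

\Delta F \approx 5.35 \ln\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta T \approx \lambda\,\Delta F,

so doubling CO2 (C/C_0 = 2) corresponds to a forcing of about 3.7 W m⁻², and each further doubling adds roughly the same increment, which is why climate sensitivity is conventionally quoted per doubling of CO2 rather than per unit of concentration.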
Development of a scientific consensus
In the 1950s, Gilbert Plass created a detailed computer model that included different atmospheric layers and the infrared spectrum. This model predicted that increasing CO2 levels would cause warming. Around the same time, Hans Suess found evidence that CO2 levels had been rising, and Roger Revelle showed that the oceans would not absorb the increase. The two scientists subsequently helped Charles Keeling to begin a record of the continued increase in atmospheric CO2, which has been termed the "Keeling Curve". Scientists alerted the public, and the dangers were highlighted at James Hansen's 1988 Congressional testimony. The Intergovernmental Panel on Climate Change (IPCC), set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research. As part of the IPCC reports, scientists assess the scientific discussion that takes place in peer-reviewed journal articles.
There is a near-complete scientific consensus that the climate is warming and that this is caused by human activities. As of 2019, agreement in recent literature reached over 99%. No scientific body of national or international standing disagrees with this view. Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change. National science academies have called on world leaders to cut global emissions. The 2021 IPCC Assessment Report stated that it is "unequivocal" that climate change is caused by humans.
See also
Anthropocene – proposed geological time interval in which humans are having significant geological impact
List of climate scientists
References
Sources
IPCC reports
Fourth Assessment Report
Fifth Assessment report
AR5 Climate Change 2013: The Physical Science Basis – IPCC
Chapters 1–20, SPM, and Technical Summary.
Chapters 21–30, Annexes, and Index.
Special Report: Global Warming of 1.5 °C
Global Warming of 1.5 °C
Special Report: Climate change and Land
Special Report: The Ocean and Cryosphere in a Changing Climate
Sixth Assessment Report
Books, reports and legal documents
Dessler, Andrew E. and Edward A. Parson, eds. The science and politics of global climate change: A guide to the debate (Cambridge University Press, 2019).
External links
Intergovernmental Panel on Climate Change: IPCC (IPCC)
UN: Climate Change (UN)
Met Office: Climate Guide (Met Office)
National Oceanic and Atmospheric Administration: Climate (NOAA)
Stock issues
In the formal speech competition genre known as policy debate, a widely accepted doctrine or "debate theory" divides the argument elements of supporting the resolution on the affirmative into five subtopical issues, called the stock issues. Stock issues are sometimes referred to as on-case arguments, or simply on-case or case arguments, as opposed to off-case arguments.
Logicality
Three issues must first be present in the affirmative case; they are the main ideas or values to weigh when deciding to take any action (in policy debate or in everyday life). They ask: What are we doing now (inherency stock issue)? What could we be doing differently (solvency stock issue)? What are the results of what we are doing now versus what we could be doing (significance stock issue)? The last stock issue, topicality, is procedural and unique to debate (one-of-a-kind, intrinsic, or necessary, also described as "warranted as presented"), as it concerns how germane the plan (specifically, the plan as stated) is to the given resolution.
Components
The stock issues are significance, harms, inherency, topicality, and solvency; a sixth, justification, is sometimes added:
Significance: This answers the "why" of debate. All advantages and disadvantages to the status quo (resulting from inherency) and of the plan (resulting from solvency) are evaluated under significance. A common equivocation is to confuse "significance" with the word "significantly" that appears in many resolutions. Significance is derived from judicious weighing between advantages and disadvantages, whereas significant policy changes are judged by how much the policy itself changed independently of how good or bad the effects or Solvency are. Policy debate does not assume determinism, but every effect or consequence has to be argued with evidence that those effects or consequences can or do occur.
Harms: Harms are a way of elucidating the problems or shortcomings of the status quo. Because they supply the reasons to say no to continuing with the status quo, harms are closely related to, but not the same as, Significance.
Inherency: The actual situation and causes of the status quo. A case is "not inherent" when the status quo is already implementing the plan or solving the harms. Clearly, a solution that is new or different from the status quo is not warranted in such a case. Three common types of inherency are:
Structural inherency: Laws or other barriers that prevent the implementation of the plan or that cause the harms
Attitudinal inherency: Beliefs or attitudes which prevent the implementation of the plan or which cause the harms
Existential inherency: The harms exist and, res ipsa loquitur, the status quo must be unable to solve the problem. It just is.
Topicality: The Affirmative case must affirm the resolution, since that is the job of the Affirmative in a debate round. The Affirmative case often is shown to be within the bounds of the resolution as defined by appropriate definitions, or by functional implementation or resolution instrumentality through the Affirmative plan. When the resolution seems vague, the most likely or best intent, and even the deeper beneficial meaning of the resolution, is often considered and upheld. In practice, most debate strategies and regional debate club practices do not consider Topicality to be a "stock issue" per se; instead, it is a high-level debate brought up by the Negative that does not excuse the Affirmative plan or case approach from defects that are not found prima facie in the resolution.
A straightforward Topicality in-round debate is different from a counter-resolution brought up by the Negative, different from a Negative counterplan, and different from the rare Affirmative counterplan. Topicality is an intrinsic, unstated Affirmative burden in the Affirmative's first speech. A Negative counterplan does not have to be topical, or it can be even more topical and more supportive of the resolution than the Affirmative's plan. There are no constraints on Negative counter-resolutions that aim to have better Solvency than the Affirmative if both sides agree on the status quo harms; any constraints would have to be debated.
Solvency: The advantages of the plan itself are presented in Solvency. Who or what does the plan benefit, and why is that good or valuable? Here the harms are often demonstrated to be solved by the plan, or the links to new advantages are shown. Without solvency, a plan is useless. Thus, the Affirmative almost always loses a debate without Solvency, no matter how well the debate speech described problems of the status quo.
Depending on what judges allow as to the cleverness of debate arguments, not all Affirmative strategies need to present a policy plan. The Affirmative case can instead affirm the resolution as policy at the doctrinal, protocol, constitutional, treaty, or similar supporting level and present partial plans, typically parts of the status quo, merely as examples. These types of Affirmative presentations are sometimes referred to as being "d'accord with the resolution" or "in agreement with the resolution" without specifying any particular plan to pursue. An Affirmative case without a plan asks that the Negative plan deter the status quo harms better, or be better than the obvious Significance and Solvency already provided by the resolution. In that way, ratification of the resolution has binding effects which, once affirmed, scope the feasibility of and the judgment on the value of specific plans. For example, if the Federal government is already solving the problem, then plans that want to reach horizontal or reciprocal federalism solvency at the interstate level are considered redundant.
Justification: Do the case and the plan justify the resolution? This issue usually hinges on whether the topic at hand is one that the United States Federal Government should be involved in, or whether the harms would be better addressed by the states (for domestic topics) or the United Nations or some other country or non-governmental organization (for foreign topics). Even though a plan could be straightforwardly topical as to the resolution's policy agency, the action of the policy (the plan) has to justify the resolution by some standard, such as being necessary and sufficient.
While logically these issues are distinguishable, in practice they might not be addressed individually or in any particular order.
Other components
Other components have been advocated by advanced debaters and can be found during some tournament rounds of intercollegiate policy debate. These types of arguments or, sometimes, components of policy debate, can be linked to stock issues by good debaters.
Typicality: Is the Affirmative case or plan good enough for the resolution? If too generic, many other plans that could fall under the resolution could be run by the Negative, making the Affirmative's Significance arguments nonunique or not significant enough. If too specific or complex, the atypicality of the Affirmative side is an extraordinary exception supporting the resolution which, while being straightforward, is difficult to support readily. Typicality is often used as an argument by either side to avoid clash on Topicality.
The debate world's pet term for atypical plans is squirrelly: squirrelly cases, squirrelly arguments, squirrelly variety of policy debate.
Specificity: Are the resolution and the Affirmative case correctly, neatly, or clearly specifying what is to be debated? A vague resolution is difficult for the Affirmative to support and, hence, difficult for the Negative to challenge: the problem of the "moving target" or "patch of fog" resolution or plan. For example, if the Affirmative claims that not going with the resolution will end in evil and the devil will appear, the Affirmative has not yet met the stock issue burden of specifying anything in particular that is unique, significant, inherent, or justifiable about arguing for or against the supposedly anti-devil resolution; that would be a fight to the death rather than a debate.
Another example. The Negative can argue that the wording of the resolution is imprecise and that there is better diction for the meaning as stated. If, say, the resolution is to "significantly enhance the prospects of" some socio-economic class, the unintended consequence of such a resolution is to allow Affirmative plans that include prostitution, anarchy, human trafficking, and similar vices. The Negative has to argue straightforwardly what the better diction is, for example, that the resolution should be to "significantly enhance the economic standard of living of" some socio-economic group of persons.
Grounds: Are the format, stock issue outline, and allowances within the debate round fair to both sides? Both sides often argue Grounds, claiming that certain types of arguments unfairly overscope, overly limit, or overburden one side's pool of arguments in favor of the other. Many frowned-upon experimental arguments lose debate grounds and are not encouraged by debate coaches and judges, because they detract from the educational value of the activity.
Policy debate is organized, attentive, and formalized to a fair degree, with etiquette and usual expectations of good demeanor in speech. Arguments that diminish the value of debating are argued at the Grounds level of debate. For example, because the Affirmative usually runs a case and has to demonstrate stock issue burdens have been cleared, running a values-versus-virtue debate on the Negative to shift the debate's qualitative format and tone to Lincoln-Douglas steals ground from policy debate.
Subversion is a high-level Grounds debate, often brought up by the Affirmative. The Affirmative is granted "good faith" in supporting the resolution at the beginning of the debate round. A Negative position that undermines that good faith without direct argumentation is considered subversive. Some examples: the kritik is a subversion; homophobia and misogyny directed against cited sources are subversion; punditry creep or discursiveness is a subversion; provisional plans and tentative counterplans that, by not assuming fiat, need too many moving parts in place in order to work are also subversive; omniscience and speculative politicking are subversive. Negative subversion is difficult for the Affirmative to counter when the Negative can validly argue that changing the status quo is itself subversive or has dire unknown consequences, a form of Negative Inherency that seeks to preserve the underlying value of the resolution without the stated resolution itself, as in clandestine operations by the C.I.A. For example, inadvertently undoing certain treaties outside of the resolution is not good for the resolution.
Nonpolicy solutions: Are there nonpolicy actions that can be taken within the scope of the resolution? For example, it can be argued that changing out the members of the Joint Chiefs of Staff is a significant workflow solution within the status quo policy that also supports the resolution. The area of leadership studies claims efficient solutions, limited to the few, to institutional problems incurred by the many. These are considered "emergency measures" that have already been planned for or, on the opposite end of the spectrum, are categorized as "ordinary ado". Usually, the sub-stock issue burden granted to these types of argument is "reasonably feasible", where at least the reasonableness of the solution as a duty has already been accounted for by the status quo.
Another example. "Technically", prayer is not a policy solution but a cultural tradition. A policy that allows for or disallows prayer can be debated, but the prayers themselves are not subject to policy inspection nor oversight. Prayer is a valid support of the resolution, such as practiced by some state courts as a "call to action". The nonpolicy call to action is a Model U.N.-style of debate such as "urging", "recommending", "condoning", or some policy position that is important to the policy itself but does not substitute for policy.
Arguments from supersystem or transcendental arguments are above-and-beyond policy, such as arguments for regime change. Such arguments rank regime higher than policy, because regimes follow many policies concurrently. In another example, an exciting debate round narrows the policy under consideration between process legalism and virtue ethics that affects many policies concurrently, capturing, supporting, or eschewing the resolution. In a different example, revolution is a quirky argument that has seen some support in academic policy debate circles, where it is argued that all important policies have broken down and the only realistic solution is revolution, the "moment of change" argument.
Interest arguments clarify interests or values, in order to change policy debate itself, affecting both the resolution and the types of policy plans that can be considered by the Affirmative and Negative sides. For example, an Affirmative running an "environment case" on a "climate change" topic will clash with a Negative case that gives evidence to support the argument that scientists have been the lackeys of politicians and that the statistical evidence for climate change is an effect of policy causation rather than of scientific discovery, such discovery activities being poorly understood by the layman as if they were conducted independently of policies, which they are not.
Idempotency: Is the plan or resolution redundant to the status quo? Idempotency gives a clear case argument against redundancy. If something is done once or is already initiated, there is no countervailing need to redo certain steps in, or repeat certain portions of, the policy plan; this is the argument from incremental idempotency. Segmented idempotency argues that there exist unnecessary steps or components of the policy plan, whether proposed or existing in the status quo.
Affirmative Idempotency grants stock issue burden clearance, or good faith, that the Affirmative is assumed not to be redundant to the resolution itself but to be a specimen of the species, nor redundant to the status quo but a qualified implementation of the non-status quo resolution. On the theory side of in-round debate, Argumentation Idempotency is known as "lump and dump", which is to take many arguments at once and debate their merits in one strong, succinct argument. Negative Idempotency, if argued well, can capture Affirmative Uniqueness with a lower burden of proof but greater stylistic flair for the speaker.
Intrinsic, or Integrity: Argument from intrinsic values is a type of Inherency argument, whereas argument from Integrity is a Justification argument; it has rarely been argued the other way around, that intrinsic values belong to Justification and integrity belongs to Inherency, because that is the presumption of the status quo, and the Negative tends to clash with the Affirmative rather than supporting the status quo. These types of Inherency-versus-Justification debates sometimes clash, that is, give good opposition or direct differentiation between them. Are the values assumed by the resolution intrinsic to the interests of the policy plan? Are the values assumed by the policy plan intrinsic to the goals of the resolution? And vice-versa. Likewise, is the integrity of the resolution-as-policy preserved or enhanced by the plan? Is the plan's integrity necessary to affirm the resolution? Intrinsic-integrity arguments tend to differ from argument-for instrumentality but not much from argument-from instrumentality. Instrumentality is the deciding factor of which policy plan or position, in implementation as an instrument of a value, upholds the better set of values overall: the status quo, the Affirmative supporting the resolution, or the Negative undermining the Affirmative. Instrumentality evaluates feasibility and best-fit at the same time within a values debate judgment about policy interests rather than a straight weighing of advantages and disadvantages of stock issue burdens. It is rare but does occur in debate rounds that the stock issues approach is not the best way to evaluate advantages and disadvantages, because stock issues overly focus on harms and there is a cost or risk burden when participating in certain policies that would be dangerous to the implementing agency or benefits recipient group. The difference is not what one can do as a plan or should do as a resolution but what is best to do, rightly understood, as policy debate. For example, in order to affirm the resolution, the Affirmative can challenge that debate against the resolution must not censure the Press, for national security reasons. On the other hand, with direct clash, the Negative could counter that any topical debate must not avoid censuring the Press sometimes, for protected free speech reasons different from propaganda.
Another example. One could advocate the position that the Pentagon is under threat from prayerful worship. Because the Pentagon are agents of war or representatives of time-consuming war studies maintenance and exercises, passive prayerful worship captures Significance by nullifying disruptions endemic in militaristic policy solutions. The underlying values between the two positions are at odds with one another.
Nullification: Does the plan sustain the resolution? The Nullification argument is also known as "plan eats resolution", in which some part or instance or iteration of the plan nullifies the resolution entirely, having shifted away from the resolution. This type of argument allows for only partial Inherency and partial Topicality, challenging total Significance. Resolution decompletedness is the argument, typically argued by the Negative by inflating Significance. For example, if the resolution desires "significant increase in the use of" some policy element and the plan, under some weird condition, has taken away all need for any use, then the resolution becomes moot because the plan is too successful. Case topics such as nuclear weapons tend to run into this issue, in which "significant use of" nuclear deterrence does not achieve the same solvency as deproliferation but the opposite, heightening threat awareness linked to civil unrest.
References
Bates, Ben. (2002). Inherency, Strategy, and Academic Debate. Rostrum. Retrieved December 30, 2005.
Kerpen, Phil. (1999). Debate Theory Ossification. Rostrum. Retrieved August 4, 2006.
Negative Strategy Lecture from the Dartmouth Debate Workshop
Ethos Debate.
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria.
Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids).
Some of the relationships that physical chemistry strives to understand include the effects of:
Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids).
Reaction kinetics on the rate of a reaction.
The identity of ions and the electrical conductivity of materials.
Surface science and electrochemistry of cell membranes.
Interaction of one body with another in terms of quantities of heat and work, called thermodynamics.
Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, called thermochemistry.
Study of colligative properties, which depend on the number of species present in solution.
The number of phases, the number of components and the degrees of freedom (or variance) can be correlated with one another with the help of the phase rule.
Reactions of electrochemical cells.
Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics.
Calculation of the energy of electron movement in molecules and metal complexes.
Key concepts
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.
One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them.
Disciplines
Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.
Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium.
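The link between the thermal expansion coefficient and the pressure dependence of entropy mentioned above is an example of a Maxwell relation. Starting from the exactness of the Gibbs energy differential dG = -S\,dT + V\,dp (a standard derivation, sketched here only for illustration),

\left(\frac{\partial S}{\partial p}\right)_T = -\left(\frac{\partial V}{\partial T}\right)_p = -V\alpha, \qquad \alpha \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p,

so a purely mechanical measurement of how a sample expands on heating at constant pressure also fixes how its entropy changes on compression at constant temperature, with no calorimetry required.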
Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate.
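The temperature dependence of reaction rates described above is most often summarised by the Arrhenius equation (the numbers below are purely illustrative):

k = A \exp\left(-\frac{E_a}{RT}\right),

where E_a is the activation barrier and A the pre-exponential factor. For a barrier of E_a = 50 kJ mol⁻¹, warming a reaction mixture from 298 K to 308 K multiplies k by \exp\left[\frac{E_a}{R}\left(\frac{1}{298}-\frac{1}{308}\right)\right] \approx 1.9, the familiar rule of thumb that a modest barrier roughly doubles the rate for every 10 K of heating.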
The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10²³) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities.
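The simplest illustration of this reduction is the ideal gas: averaging over the translational motion of a vast number of non-interacting molecules, statistical mechanics recovers the bulk equation of state

pV = N k_\mathrm{B} T = nRT,

in which roughly 10²³ individual positions and velocities have collapsed into three macroscopic variables, pressure, volume and temperature (here k_B is the Boltzmann constant and R = N_A k_B the gas constant).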
History
The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations".
Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule.
The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909.
Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development.
Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry.
See Group contribution method, Lydersen method, Joback method, Benson group increment theory, and quantitative structure–activity relationship.
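As a rough illustration of the group-contribution idea, the sketch below estimates a normal boiling point in the style of the Joback method, T_b = 198.2\,\mathrm{K} + \sum_i n_i \Delta T_{b,i}. The group increments quoted are values commonly tabulated for the Joback method but should be checked against an authoritative table before any real use, and the function and dictionary names are this sketch's own rather than part of any standard library.

# Minimal Joback-style boiling-point estimate (illustrative group values).
JOBACK_TB_INCREMENTS = {      # contributions to Tb, in kelvin
    "-CH3": 23.58,
    ">CH2": 22.88,
    "-OH (alcohol)": 92.88,
}

def estimate_boiling_point(groups):
    """Estimated normal boiling point (K) from a {group: count} breakdown."""
    return 198.2 + sum(JOBACK_TB_INCREMENTS[g] * n for g, n in groups.items())

# Ethanol, CH3-CH2-OH, contains one of each group above.
ethanol = {"-CH3": 1, ">CH2": 1, "-OH (alcohol)": 1}
print(round(estimate_boiling_point(ethanol), 1))  # about 337.5 K; the experimental value is near 351 K

The same additive pattern, a base value plus tabulated increments per functional group, underlies most of the "additive physicochemical property" estimates mentioned above.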
Journals
Some journals that deal with physical chemistry include
Zeitschrift für Physikalische Chemie (1887)
Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997)
Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905)
Macromolecular Chemistry and Physics (1947)
Annual Review of Physical Chemistry (1950)
Molecular Physics (1957)
Journal of Physical Organic Chemistry (1988)
Journal of Physical Chemistry B (1997)
ChemPhysChem (2000)
Journal of Physical Chemistry C (2007)
Journal of Physical Chemistry Letters (from 2010, combined letters previously published in the separate journals)
Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914).
Branches and related topics
Chemical thermodynamics
Chemical kinetics
Statistical mechanics
Quantum chemistry
Electrochemistry
Photochemistry
Surface chemistry
Solid-state chemistry
Spectroscopy
Biophysical chemistry
Materials science
Physical organic chemistry
Micromeritics
See also
List of important publications in chemistry#Physical chemistry
List of unsolved problems in chemistry#Physical chemistry problems
Physical biochemistry
References
External links
The World of Physical Chemistry (Keith J. Laidler, 1993)
Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996)
Physical Chemistry: neither Fish nor Fowl? (Joachim Schummer, The Autonomy of Chemistry, Würzburg, Königshausen & Neumann, 1998, pp. 135–148)
The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
Yogyakarta Principles
The Yogyakarta Principles is a document about human rights in the areas of sexual orientation and gender identity that was published as the outcome of an international meeting of human rights groups in Yogyakarta, Indonesia, in November 2006. The principles were supplemented and expanded in 2017 to include new grounds of gender expression and sex characteristics and a number of new principles. However, the Principles have never been accepted by the United Nations (UN) and the attempt to make gender identity and sexual orientation new categories of non-discrimination has been repeatedly rejected by the General Assembly, the UN Human Rights Council and other UN bodies.
The principles and the supplement contain a set of precepts intended to apply the standards of international human rights law to address the abuse of human rights of lesbian, gay, bisexual, transgender, and intersex (LGBTI) people.
Versions
Original 2006 Principles
The Principles themselves are a lengthy document addressing legal matters. A website that was established to hold the principles and to make them accessible has an overview of the principles, reproduced here in full:
Preamble: The Preamble acknowledges human rights violations based on sexual orientation and gender identity, which undermine the integrity and dignity, establishes the relevant legal framework, and provides definitions of key terms.
Rights to Universal Enjoyment of Human Rights, Non-Discrimination and Recognition before the Law: Principles 1 to 3 set out the principles of the universality of human rights and their application to all persons without discrimination, as well as the right of all people to recognition as a person before the law.
Example:
Laws criminalising homosexuality violate the international right to non-discrimination (decision of the UN Human Rights Committee).
Rights to Human and Personal Security: Principles 4 to 11 address fundamental rights to life, freedom from violence and torture, privacy, access to justice and freedom from arbitrary detention, and human trafficking.
Examples:
Some nations still have laws imposing the death penalty for homosexual sex between consenting adults, despite UN resolutions specifically opposing such laws.
Eleven men were arrested in a gay bar and held in custody for over a year. The UN Working Group on Arbitrary Detention concluded that the men were detained in violation of international law, noting with concern that "one of the prisoners died as a result of his arbitrary detention".
Economic, Social and Cultural Rights: Principles 12 to 18 set out the importance of non-discrimination in the enjoyment of economic, social and cultural rights, including employment, accommodation, social security, education, and sexual and reproductive health, including the right to informed consent and sex reassignment therapy.
Examples:
Lesbian and transgender women are at increased risk of discrimination, homelessness and violence (report of United Nations Special Rapporteur on adequate housing).
Girls who display same-sex affection face discrimination and expulsion from educational institutions (report of UN Special Rapporteur on the right to education).
The United Nations High Commissioner for Human Rights has expressed concern about laws which "prohibit gender reassignment surgery for transsexuals or require intersex persons to undergo such surgery against their will".
Rights to Expression, Opinion and Association: Principles 19 to 21 emphasise the importance of the freedom to express oneself, one's identity and one's sexuality, without State interference based on sexual orientation or gender identity, including the rights to participate peaceably in public assemblies and events and otherwise associate in community with others.
Example:
A peaceful gathering to promote equality on the grounds of sexual orientation and gender identity was banned by authorities, and participants were harassed and intimidated by police and extremist nationalists shouting slogans such as "Let's get the fags" and "We'll do to you what Hitler did with Jews" (report of the UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia & related intolerance).
Freedom of Movement and Asylum: Principles 22 and 23 highlight the rights of persons to seek asylum from persecution based on sexual orientation or gender identity.
Example:
Refugee protection should be accorded to persons facing a well-founded fear of persecution based on sexual orientation (Guidelines of the United Nations High Commissioner for Refugees).
Rights of Participation in Cultural and Family Life: Principles 24 to 26 address the rights of persons to participate in family life, public affairs and the cultural life of their community, without discrimination based on sexual orientation or gender identity.
Example:
States have an obligation not to discriminate between different-sex and same-sex relationships in allocating partnership benefits such as survivors' pensions (decision of the UN Human Rights Committee).
Rights of Human Rights Defenders: Principle 27 recognises the right to defend and promote human rights without discrimination based on sexual orientation and gender identity, and the obligation of States to ensure the protection of human rights defenders working in these areas.
Examples:
Human rights defenders working on sexual orientation and gender identity issues in countries and regions around the world "have been threatened, had their houses and offices raided, they have been attacked, tortured, sexually abused, tormented by regular death threats and even killed. A major concern in this regard is an almost complete lack of seriousness with which such cases are treated by the concerned authorities." (report of the Special Representative of the UN Secretary-General on Human Rights Defenders).
Rights of Redress and Accountability: Principles 28 and 29 affirm the importance of holding rights violators accountable, and ensuring appropriate redress for those who face rights violations.
Example:
The UN High Commissioner for Human Rights has expressed concern about "impunity for crimes of violence against LGBT persons" and "the responsibility of the State to extend effective protection". The High Commissioner notes that "excluding LGBT individuals from these protections clearly violates international human rights law as well as the common standards of humanity that define us all."
Additional Recommendations: The Principles set out 16 additional recommendations to national human rights institutions, professional bodies, funders, NGOs, the High Commissioner for Human Rights, UN agencies, treaty bodies, Special Procedures, and others.
Example:
The Principles conclude by recognising the responsibility of a range of actors to promote and protect human rights and to integrate these standards into their work. A joint statement delivered at the United Nations Human Rights Council by 54 States from four of the five UN regions on 1 December 2006, for example, urges the Human Rights Council to "pay due attention to human rights violations based on sexual orientation and gender identity" and commends the work of civil society in this area, and calls upon "all Special Procedures and treaty bodies to continue to integrate consideration of human rights violations based on sexual orientation and gender identity within their relevant mandates." As this statement recognises, and the Yogyakarta Principles affirm, effective human rights protection truly is the responsibility of all.
2017 Yogyakarta Principles plus 10
Preamble: The Preamble recalls developments in international human rights law, and an intention to regularly update the Principles. It defines gender expression and sex characteristics, applies these grounds to the original Principles, recognizes the intersectionality of the grounds adopted in the Principles, and their intersectionality with other grounds.
The Rights to State Protection: Principle 30 recognises the right to State protection from violence, discrimination and harm, including the exercise of due diligence in prevention, investigation, prosecution and remedies.
The Right to Legal Recognition: Principle 31 calls for a right to legal recognition without reference to sex, gender, sexual orientation, gender identity, gender expression or sex characteristics, ending the superfluous inclusion of such information in identification documents.
The Right to Bodily and Mental Integrity: Principle 32 recognizes a right to bodily and mental integrity, autonomy and self-determination, including a freedom from torture and ill-treatment. It calls for no-one to be subjected to invasive or irreversible medical procedures to modify sex characteristics without their consent unless necessary to prevent urgent and serious harm.
The Right to Freedom from Criminalization and Sanction: Principle 33 recognizes a right to freedom from indirect or direct criminalization or sanction, including in customary, religious, public decency, vagrancy, sodomy and propaganda laws.
The Right to Protection from Poverty: Principle 34 calls for the right to protection from poverty and social exclusion.
The Right to Sanitation: Principle 35 calls on a right to safe and equitable access to sanitation and hygiene facilities.
The Right to the Enjoyment of Human Rights in Relation to Information and Communication Technologies: Principle 36 calls for the same protection of rights online as offline.
The Right to Truth: Principle 37 calls for the right to know the truth about human rights violations, including investigation and reparation unlimited by statutes of limitations, and including access to medical records.
The Right to Practise, Protect, Preserve and Revive Cultural Diversity: Principle 38 calls on the right to practise and manifest cultural diversity.
Additional State Obligations: the YP Plus 10 set out a range of additional obligations for States, including in relation to HIV status, access to sport, combating discrimination in prenatal selection and genetic modification technologies, detention and asylum, education, the right to health, and freedom of peaceful assembly and association.
Additional Recommendations: the Principles also set out recommendations for national human rights institutions and sporting organizations.
History
The website promoting the Principles notes that concerns have been voiced about a trend of people's human rights being violated because of their sexual orientation or gender identity. While the United Nations human rights instruments detail obligations to ensure that people are protected from discrimination and stereotypes, which includes people's expression of sexual orientation or gender identity, implementation of these rights has been fragmented and inconsistent internationally. The Principles aim to provide a consistent understanding about application of international human rights law in relation to sexual orientation and gender identity.
The Yogyakarta Principles were developed at a meeting of the International Commission of Jurists, the International Service for Human Rights and human rights experts from around the world at Gadjah Mada University on Java from 6 to 9 November 2006. The seminar clarified the nature, scope and implementation of states' human rights obligations under existing human rights treaties and law, in relation to sexual orientation and gender identity. The principles that developed out of this meeting were adopted by human rights experts from around the world, and included judges, academics, a former UN High Commissioner for Human Rights, NGOs and others. The Irish human rights expert Michael O'Flaherty was rapporteur responsible for drafting and development of the Yogyakarta Principles adopted at the meeting. Vitit Muntarbhorn and Sonia Onufer Corrêa were the co-chairpersons.
The concluding document "contains 29 principles adopted unanimously by the experts, along with recommendations to governments, regional intergovernmental institutions, civil society, and the UN itself". The principles are named after Yogyakarta, the city where the conference was held. These principles have not been adopted by States in a treaty, and are thus not by themselves a legally binding part of international human rights law. However the Principles are intended to serve as an interpretive aid to the human rights treaties.
Among the 29 signatories of the principles were Mary Robinson, Manfred Nowak, Martin Scheinin, Mauro Cabral, Sonia Corrêa, Elizabeth Evatt, Philip Alston, Edwin Cameron, Asma Jahangir, Paul Hunt, Sanji Mmasenono Monageng, Sunil Babu Pant, Stephen Whittle and Wan Yanhai. The signatories intended that the Yogyakarta Principles should be adopted as a universal standard, affirming a binding international legal standard with which all States must comply, but some states have expressed reservations.
In alignment with the movement towards establishing basic human rights for all people, the Yogyakarta Principles specifically address sexual orientation and gender identity. The Principles were developed in response to patterns of abuse reported from around the world. These included examples of sexual assault and rape, torture and ill-treatment, extrajudicial executions, honour killing, invasion of privacy, arbitrary arrest and imprisonment, medical abuse, denial of free speech and assembly and discrimination, prejudice and stigmatization in work, health, education, housing, family law, access to justice and immigration. These are estimated to affect millions of people who are, or have been, targeted on the basis of perceived or actual sexual orientation or gender identity.
Launch
The finalised Yogyakarta Principles were launched as a global charter on 26 March 2007 at a public event in Geneva, timed to coincide with the main session of the United Nations Human Rights Council. Michael O'Flaherty spoke at the International Lesbian and Gay Association (ILGA) Conference in Lithuania on 27 October 2007; he explained that "all human rights belong to all of us. We have human rights because we exist – not because we are gay or straight and irrespective of our gender identities", but that in many situations these human rights are not respected or realised, and that "the Yogyakarta Principles is to redress that situation".
The Yogyakarta Principles were presented at a United Nations event in New York City on 7 November 2007, co-sponsored by Argentina, Brazil and Uruguay. Human Rights Watch explain that the first step towards this would be the de-criminalisation of homosexuality in 77 countries that still carry legal penalties for people in same-sex relationships, and repeal of the death penalty in the seven countries that still have the death penalty for such sexual practice.
Yogyakarta Principles plus 10
On 10 November 2017, the "Yogyakarta Principles plus 10" (YP+10), formally the "Additional Principles and State Obligations on the Application of International Human Rights Law in Relation to Sexual Orientation, Gender Expression and Sex Characteristics to Complement the Yogyakarta Principles", was adopted to supplement the Principles. It emerged from the intersection of developments in international human rights law with the emerging understanding of violations suffered by persons on grounds of sexual orientation and gender identity, and from the recognition of the distinct and intersectional grounds of gender expression and sex characteristics.
The update was drafted by a committee of Mauro Cabral Grinspan, Morgan Carpenter, Julia Ehrt, Sheherezade Kara, Arvind Narrain, Pooja Patel, Chris Sidoti and Monica Tabengwa. Signatories additionally include Philip Alston, Edwin Cameron, Kamala Chandrakirana, Sonia Onufer Corrêa, David Kaye, Maina Kiai, Victor Madrigal-Borloz, Sanji Mmasenono Monageng, Vitit Muntarbhorn, Sunil Pant, Dainius Puras, Ajit Prakash Shah, Sylvia Tamale, Frans Viljoen, and Kimberly Zieselman.
Reasoning
The compilers explain that the Principles detail how international human rights law can be applied to sexual orientation and gender identity issues, in a way that affirms international law and to which all states can be bound. They maintain that wherever people are recognised as being born free and equal in dignity and rights, this should include LGBT people. They argue that human rights standards can be interpreted in terms of sexual orientation and gender identity when they touch on issues of torture and violence, extrajudicial execution, access to justice, privacy, freedom from discrimination, freedom of expression and assembly, access to employment, health-care, education, and immigration and refugee issues. The Principles aim to explain that States are obliged to ensure equal access to human rights, and each principle recommends how to achieve this, highlighting international agencies' responsibilities to promote and maintain human rights.
The Principles are based on the recognition of the right to non-discrimination. The Committee on Economic, Social and Cultural Rights (CESCR) has dealt with these matters in its General Comments, the interpretative texts it issues to explicate the full meaning of the provisions of the International Covenant on Economic, Social and Cultural Rights. In General Comments Nos. 18 of 2005 (on the right to work), 15 of 2002 (on the right to water) and 14 of 2000 (on the right to the highest attainable standard of health), it indicated that the Covenant proscribes any discrimination on the basis of, inter alia, sex and sexual orientation "that has the intention or effect of nullifying or impairing the equal enjoyment or exercise of [the right at issue]".
The Committee on the Elimination of Discrimination against Women (CEDAW), notwithstanding that it has not addressed the matter in a General Comment or otherwise specified the applicable provisions of the Convention on the Elimination of All Forms of Discrimination Against Women, on a number of occasions has criticised states for discrimination on the basis of sexual orientation. For example, it addressed the situation of sexual minority women in Kyrgyzstan and recommended that, 'lesbianism be reconceptualised as a sexual orientation and that penalties for its practice be abolished'.
Reception
United Nations
The Principles have never been accepted by the United Nations and the attempt to make gender identity and sexual orientation new categories of non-discrimination has been repeatedly rejected by the General Assembly, the UN Human Rights Council and other UN bodies. In July 2010, Vernor Muñoz, United Nations Special Rapporteur on the Right to Education, presented to the United Nations General Assembly an interim report on the human right to comprehensive sexual education, in which he cited the Yogyakarta Principles as a Human Rights standard. In the ensuing discussion, the majority of General Assembly Third Committee members recommended against adopting the principles. The Representative of Malawi, speaking on behalf of all African States argued that the report:
Reflected an attempt to introduce controversial notions and a disregard to the Code of Conduct for Special Procedures Mandate-holders as outlined in Human Rights Council resolution 8/4. She expressed alarm at the reinterpretation of existing human rights instruments, principles and concepts. The report also selectively quoted general comments and country-specific recommendations made by treaty bodies and propagated controversial and unrecognized principles, including the so-called Yogyakarta Principles, to justify his personal opinion.
Trinidad and Tobago, on behalf of the Caribbean States members of CARICOM, argued that the special rapporteur "had chosen to ignore his mandate, as laid down in Human Rights Council resolution 8/4, and to focus instead on the so-called 'human right to comprehensive education.' Such a right did not exist under any internationally agreed human rights instrument or law and his attempts to create one far exceeded his mandate and that of the Human Rights Council." The representative of Mauritania, speaking on behalf of the Arab League, said that the Arab States were "dismayed" and accused the rapporteur of attempting to promote "controversial doctrines that did not enjoy universal recognition" and to "redefine established concepts of sexual and reproductive health education, or of human rights more broadly". The Russian Federation expressed "its disappointment and fundamental disagreement with the report," writing of the rapporteur:
As justification for his conclusions, he cited numerous documents which had not been agreed to at the intergovernmental level, and which therefore could not be considered as authoritative expressions of the opinion of the international community. In particular, he referred to the Yogyakarta Principles and also to the International Technical Guidance on Sexuality Education. Implementation of various provisions and recommendations of the latter document would result in criminal prosecution for such criminal offences as corrupting youth.
Regional institutions
The Council of Europe states in "Human Rights and Gender Identity" that Principle 3 of the Yogyakarta Principles is "of particular relevance". They recommend that member states "abolish sterilisation and other compulsory medical treatment as a necessary legal requirement to recognise a person's gender identity in laws regulating the process for name and sex change," (V.4) as well as to "make gender reassignment procedures, such as hormone treatment, surgery and psychological support, accessible for transgender persons, and ensure that they are reimbursed by public health insurance schemes." (V.5) Similarly, the Parliamentary Assembly of the Council of Europe adopted a document titled "Discrimination on the basis of sexual orientation and gender identity" on 23 March 2010, describing the prejudice that "homosexuality is immoral" as a "subjective view usually based on religious dogma that, in a democratic society, cannot be a basis for limiting the rights of others." The document argued that the belief that "homosexuality is worsening the demographic crisis and threatening the future of the nation" is "illogical," and that "granting legal recognition to same-sex couples has no influence on whether heterosexuals marry or have children."
National institutions
However, the Principles have been cited by numerous national governments and court judgments. The principles influenced the proposed UN declaration on sexual orientation and gender identity in 2008.
Human rights and LGBT-rights groups took up the principles, and discussion has featured in the gay press, as well as academic papers and text books (see bibliography).
Brazil
In a unanimous decision on May 5, 2011, the Brazilian Supreme Federal Court became the first supreme court in the world to recognize same-sex civil unions as a family entity equal in rights to a heterosexual one, as certified by UNESCO, expressly citing the Yogyakarta Principles as a significant legal guideline.
India
The Supreme Court of India relied on the Yogyakarta Principles (2007), when ruling in the case of NLSA v. Union of India (2014), which recognised the right to self-identify gender and recognized non-binary gender as "Third Gender." The court held that Yogyakarta Principles must be recognised and followed as long as they are consistent with the fundamental rights enshrined in the Constitution of India.
The Constitutional Bench of the Supreme Court of India held that the Yogyakarta Principles (2007) conform to the constitutional view of fundamental rights when decriminalizing homosexuality in the case of Navtej Singh Johar v. Union of India (2018), a position endorsed in Justice R.F. Nariman's concurring opinion.
Essentially, the Supreme Court read the Yogyakarta Principles (2007) into the Fundamental Rights of the Indian Constitution.
Intersex people
The Yogyakarta Principles mention intersex people only briefly. In a manual on Promoting and Protecting Human Rights in relation to Sexual Orientation, Gender Identity and Sex Characteristics the Asia Pacific Forum of National Human Rights Institutions (APF) states, "The Principles do not deal appropriately or adequately with the application of international human rights law in relation to intersex people. They do not specifically distinguish sex characteristics."
Those issues were addressed in the Yogyakarta Principles plus 10 update. Boris Dittrich of Human Rights Watch comments that the new update "protects intersex children from involuntary modification of their sex characteristics".
See also
Brazilian Resolution
Compulsory sterilization
Declaration of Montreal
Gender role
International human rights law
Intersex human rights
LGBT history
LGBT people in prison – Prison rape
LGBT rights at the United Nations
LGBT rights by country or territory
LGBT stereotypes
LGBT topics in medicine
Minority rights
Reproductive rights
Right to sexuality
Social exclusion – social vulnerability
Violence against LGBT people
World Association for Sexual Health
References
Bibliography
The Yogyakarta Principles
Yogyakarta Principles plus 10
The Yogyakarta Principles (Official site of UNHCR)
Yogyakarta Principles in Action
Dittrich, Boris, Yogyakarta Principles: applying existing human rights norms to sexual orientation and gender identity, HIV AIDS Policy Law Rev. 2008 Dec;13(2–3):92-3.
S. Farrior, Human Rights Advocacy on Gender Issues: Challenges and Opportunities, J Human Rights Practice, March 1, 2009; 1(1): 83–100.
Michael O'Flaherty and John Fisher, Sexual Orientation, Gender Identity and International Human Rights Law: Contextualising the Yogyakarta Principles, Human Rights Law Review 2008 8(2):207–248;
External links
The Yogyakarta Principles
Anti-discrimination law
History of human rights
LGBTQ rights
Intersex rights
Human rights instruments
Transgender law
2006 in international relations
2006 in Indonesia
2006 in LGBTQ history | 0.775661 | 0.984725 | 0.763812 |
Land ethic | A land ethic is a philosophy or theoretical framework about how, ethically, humans should regard the land. The term was coined by Aldo Leopold (1887–1948) in his A Sand County Almanac (1949), a classic text of the environmental movement. There he argues that there is a critical need for a "new ethic", an "ethic dealing with human's relation to land and to the animals and plants which grow upon it".
Leopold offers an ecologically based land ethic that rejects strictly human-centered views of the environment and focuses on the preservation of healthy, self-renewing ecosystems. A Sand County Almanac was the first systematic presentation of a holistic or ecocentric approach to the environment. Although Leopold is credited with coining the term "land ethic", there are many philosophical theories that speak to how humans should treat the land. Some of the most prominent land ethics include those rooted in economics, utilitarianism, libertarianism, egalitarianism, and ecology.
Economics-based land ethic
This is a land ethic based wholly upon economic self-interest. Leopold sees two flaws in this type of ethic. First, he argues that most members of an ecosystem have no economic worth. For this reason, such an ethic can ignore or even eliminate these members when they are actually necessary for the health of the biotic community of the land. And second, it tends to relegate conservation necessary for healthy ecosystems to the government and these tasks are too large and dispersed to be adequately addressed by such an institution. This ties directly into the context within which Leopold wrote A Sand County Almanac.
For example, when the US Forest Service was founded by Gifford Pinchot, the prevailing ethos was economic and utilitarian. Leopold argued for an ecological approach, becoming one of the first to popularize this term coined by Henry Chandler Cowles of the University of Chicago during his early 1900s research at the Indiana Dunes. Conservation became the preferred term for the more anthropocentric model of resource management, while the writing of Leopold and his inspiration, John Muir, led to the development of environmentalism.
Utilitarian-based land ethic
Utilitarianism was most prominently defended by British philosophers Jeremy Bentham and John Stuart Mill. Though there are many varieties of utilitarianism, generally it is the view that a morally right action is an action that produces the maximum good for people. Utilitarianism has often been used when deciding how to use land and it is closely connected with an economic-based ethic. For example, it forms the foundation for industrial farming; an increase in yield, which would increase the number of people able to receive goods from farmed land, is judged from this view to be a good action or approach. In fact, a common argument in favor of industrial agriculture is that it is a good practice because it increases the benefits for humans; benefits such as food abundance and a drop in food prices. However, a utilitarian-based land ethic is different from a purely economic one as it could be used to justify the limiting of a person's rights to make a profit. For example, in the case of the farmer planting crops on a slope, if the runoff of soil into the community creek led to the damage of several neighbor's properties, then the good of the individual farmer would be overridden by the damage caused to his neighbors. Thus, while a utilitarian-based land ethic can be used to support economic activity, it can also be used to challenge this activity.
Libertarian-based land ethic
Another philosophical approach often used to guide actions when making (or not making) changes to the land is libertarianism. Roughly, libertarianism is the ethical view that agents own themselves and have particular moral rights, including the right to acquire property. In a looser sense, libertarianism is commonly identified with the belief that each individual person has a right to a maximum amount of freedom or liberty when this freedom does not interfere with other people's freedom. A well-known libertarian theorist is John Hospers. For right-libertarians, property rights are natural rights. Thus, it would be acceptable for the above farmer to plant on a slope as long as this action does not limit the freedom of his or her neighbors.
This view is closely connected to utilitarianism. Libertarians often use utilitarian arguments to support their own arguments. For example, in 1968, Garrett Hardin applied this philosophy to land issues when he argued that the only solution to the "Tragedy of the Commons" was to place soil and water resources into the hands of private citizens. Hardin supplied utilitarian justifications to support his argument. However, it can be argued that this leaves libertarian-based land ethics open to the above critique lodged against economic-based approaches. Even setting this aside, the libertarian view has been challenged by the critique that numerous people making self-interested decisions often cause large ecological disasters, such as the Dust Bowl disaster. Even so, libertarianism is a philosophical view commonly held within the United States and, especially, held by U.S. ranchers and farmers.
Egalitarian-based land ethic
Egalitarian-based land ethics are often developed as a response to libertarianism. This is because, while libertarianism ensures the maximum amount of human liberty, it does not require that people help others. It also leads to the uneven distribution of wealth. A well-known egalitarian philosopher is John Rawls. When focusing on land use, egalitarianism evaluates its uneven distribution and the uneven distribution of the fruits of that land. While both a utilitarian- and libertarian-based land ethic could conceivably rationalize this mal-distribution, an egalitarian approach typically favors equality, whether that be an equal entitlement to land or access to food. However, there is also the question of negative rights when holding to an egalitarian-based ethic. In other words, if it is recognized that a person has a right to something, then someone has the responsibility to supply this opportunity or item; whether that be an individual person or the government. Thus, an egalitarian-based land ethic could provide a strong argument for the preservation of soil fertility and water because it links land and water with the right to food, the growth of human populations, and the decline of soil and water resources.
Ecologically based land ethic
Land ethics may also be based upon the principle that the land (and the organisms that live off the land) has intrinsic value. These ethics are, roughly, based on an ecological or systems view. This position was first put forth by Ayers Brinser in Our Use of the Land, published in 1939. Brinser argued that white settlers brought with them "the seeds of a civilization which has grown by consuming the land, that is, a civilization which has used up the land in much the same way that a furnace burns coal." Later, Aldo Leopold's posthumously published A Sand County Almanac (1949) popularized this idea.
Another example is the deep ecology view, which argues that human communities are built upon a foundation of the surrounding ecosystems or the biotic communities and that all life is of inherent worth. Similar to egalitarian-based land ethics, the above land ethics were also developed as alternatives to utilitarian and libertarian-based approaches. Leopold's ethic is one of the most popular ecological approaches in the early 21st century. Other writers and theorists who hold this view include Wendell Berry (b. 1934), N. Scott Momaday, J. Baird Callicott, Paul B. Thompson, and Barbara Kingsolver.
Aldo Leopold's land ethic
In his classic essay, "The Land Ethic," published posthumously in A Sand County Almanac (1949), Leopold proposes that the next step in the evolution of ethics is the expansion of ethics to include nonhuman members of the biotic community, collectively referred to as "the land." Leopold states the basic principle of his land ethic as: "A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise."
He also describes it in this way: "The land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land . . . [A] land ethic changes the role of Homo sapiens from conqueror of the land community to plain member and citizen of it. It implies respect for his fellow-members, and also respect for the community as such."
Leopold was a naturalist, not a philosopher. There is much scholarly debate about what exactly Leopold's land ethic asserts and how he argues for it. At its core, the land ethic claims (1) that humans should view themselves as plain members and citizens of biotic communities, not as "conquerors" of the land; (2) that we should extend ethical consideration to ecological wholes ("soils, waters, plants, and animals"); (3) that our primary ethical concern should not be with individual plants or animals, but with the healthy functioning of whole biotic communities; and (4) that the "summary moral maxim" of ecological ethics is that we should seek to preserve the integrity, stability, and beauty of the biotic community. Beyond this, scholars disagree about the extent to which Leopold rejected traditional human-centered approaches to the environment and how literally he intended his basic moral maxim to be applied. They also debate whether Leopold based his land ethic primarily on human-centered interests, as many passages in A Sand County Almanac suggest, or whether he placed significant weight on the intrinsic value of nature. One prominent student of Leopold, J. Baird Callicott, has suggested that Leopold grounded his land ethic on various scientific claims, including a Darwinian view of ethics as rooted in special affections for kith and kin, a Copernican view of humans as plain members of nature and the cosmos, and the finding of modern ecology that ecosystems are complex, interrelated wholes. However, this interpretation has recently been challenged by Roberta Millstein, who has offered evidence that Darwin's influence on Leopold was not related to Darwin's views about moral sentiments, but rather to Darwin's views about interdependence in the struggle for existence.
Attractions of Leopold's land ethic
Leopold's ecocentric land ethic is popular today with mainstream environmentalists for a number of reasons. Unlike more radical environmental approaches, such as deep ecology or biocentrism, it does not require huge sacrifices of human interests. Leopold does not, for example, believe that humans should stop eating or hunting, or experimenting on animals. Nor does he call for a massive reduction in the human population, or for permitting humans to interfere with nature only to satisfy vital human needs (regardless of economic or other human costs). As an environmental ethic, Leopold's land ethic is a comparatively moderate view that seeks to strike a balance between human interests and a healthy and biotically diverse natural environment. Many of the things mainstream environmentalists favor—preference for native plants and animals over invasive species, hunting or selective culling to control overpopulated species that are damaging to the environment, and a focus on preserving healthy, self-regenerating natural ecosystems both for human benefit and for their own intrinsic value—jibe with Leopold's ecocentric land ethic.
A related understanding has been framed as global land as a commons. In this view, biodiversity and terrestrial carbon storage, an element of climate change mitigation, are global public goods. Hence, land should be governed on a global scale as a commons, requiring increased international cooperation on nature preservation.
Criticism
Some critics fault Leopold for lack of clarity in spelling out exactly what the land ethic is and its specific implications for how humans should think about the environment. It is clear that Leopold did not intend his basic normative principle ("A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community") to be regarded as an ethical absolute. Thus construed, it would prohibit clearing land to build homes, schools, or farms, and generally require a "hands-off" approach to nature that Leopold plainly did not favor. Presumably, therefore, his maxim should be seen as a general guideline for valuing natural ecosystems and striving to achieve what he terms a sustainable state of "harmony between men and land." But this is vague and, according to some critics, not terribly helpful.
A second common criticism of Leopold is that he fails to state clearly why we should adopt the land ethic. He often cites examples of environmental damage (e.g., soil erosion, pollution, and deforestation) that result from traditional human-centered, "conqueror" attitudes towards nature. But it is unclear why such examples support the land ethic specifically, as opposed to biocentrism or some other nature-friendly environmental ethic. Leopold also frequently appeals to modern ecology, evolutionary theory, and other scientific discoveries to support his land ethic. Some critics have suggested that such appeals may involve an illicit move from facts to values. At a minimum, such critics claim, more should be said about the normative basis of Leopold's land ethic.
Other critics object to Leopold's ecological holism. According to animal rights advocate Tom Regan, Leopold's land ethic condones sacrificing the good of individual animals to the good of the whole, and is thus a form of "environmental fascism." According to these critics, we rightly reject such holistic approaches in human affairs. Why, they ask, should we adopt them in our treatment of non-human animals?
Finally, some critics have questioned whether Leopold's land ethic might require unacceptable interferences with nature in order to protect current, but transient, ecological balances. If the fundamental environmental imperative is to preserve the integrity and stability of natural ecosystems, wouldn't this require frequent and costly human interventions to prevent naturally occurring changes to natural environments? In nature, the "stability and integrity" of ecosystems are disrupted or destroyed all the time by drought, fire, storms, pests, newly invasive predators, etc. Must humans act to prevent such ecological changes, and if so, at what cost? Why should we place such high value on current ecological balances? Why think it is our role to be nature's steward or policeman? According to these critics, Leopold's stress on preserving existing ecological balances is overly human-centered and fails to treat nature with the respect it deserves.
See also
Agrarianism
Biomimicry
Conservation biology
Conservation ethic
Conservation movement
Deep Ecology
Ecofeminism
Ecology
Ecology movement
Environmentalism
Environmental protection
Environmental stewardship
Glenn Albrecht
Habitat conservation
Land stewardship
Natural environment
Natural capital
Natural resource
Renewable resource
Solastalgia
Southern Agrarians
Sustainability
Water conservation
References
External links
Land Ethic Toolbox
The Land Ethic—abridged html version, with commentary on its critique of Biblical traditions, full pdf version—neohasid.org
The Aldo Leopold Foundation
Environmental ethics
Land use
Landscape
1949 introductions | 0.780131 | 0.979075 | 0.763807 |
Efficiency | Efficiency is the often measurable ability to avoid making mistakes or wasting materials, energy, efforts, money, and time while performing a task. In a more general sense, it is the ability to do things well, successfully, and without waste.
In more mathematical or scientific terms, it signifies the level of performance that uses the least amount of inputs to achieve the highest amount of output. It often specifically comprises the capability of a specific application of effort to produce a specific outcome with a minimum amount or quantity of waste, expense, or unnecessary effort. Efficiency refers to very different inputs and outputs in different fields and industries. In 2019, the European Commission said: "Resource efficiency means using the Earth's limited resources in a sustainable manner while minimising impacts on the environment. It allows us to create more with less and to deliver greater value with less input."
Writer Deborah Stone notes that efficiency is "not a goal in itself. It is not something we want for its own sake, but rather because it helps us attain more of the things we value."
Efficiency and effectiveness
Efficiency is very often confused with effectiveness. In general, efficiency is a measurable concept, quantitatively determined by the ratio of useful output to total useful input. Effectiveness is the simpler concept of being able to achieve a desired result, which can be expressed quantitatively but does not usually require more complicated mathematics than addition. Efficiency can often be expressed as a percentage of the result that could ideally be expected, for example if no energy were lost due to friction or other causes, in which case 100% of fuel or other input would be used to produce the desired result. In some cases efficiency can be indirectly quantified with a non-percentage value, e.g. specific impulse.
A common but confusing way of distinguishing between efficiency and effectiveness is the saying "Efficiency is doing things right, while effectiveness is doing the right things". This saying indirectly emphasizes that the selection of objectives of a production process is just as important as the quality of that process. This saying, popular in business, nevertheless obscures the more common sense of "effectiveness", which would/should produce the following mnemonic: "Efficiency is doing things right; effectiveness is getting things done". This makes it clear that effectiveness, for example large production numbers, can also be achieved through inefficient processes if, for example, workers are willing or used to working longer hours or with greater physical effort than in other companies or countries or if they can be forced to do so. Similarly, a company can achieve effectiveness, for example large production numbers, through inefficient processes if it can afford to use more energy per product, for example if energy prices or labor costs or both are lower than for its competitors.
Inefficiency
Inefficiency is the absence of efficiency. Kinds of inefficiency include:
Allocative inefficiency refers to a situation in which the distribution of resources between alternatives does not fit with consumer taste (perceptions of costs and benefits). For example, a company may have the lowest costs in "productive" terms, but the result may be inefficient in allocative terms because the "true" or social cost exceeds the price that consumers are willing to pay for an extra unit of the product. This is true, for example, if the firm produces pollution (see also external cost). Consumers would prefer that the firm and its competitors produce less of the product and charge a higher price, to internalize the external cost.
Distributive inefficiency refers to the inefficient distribution of income and wealth within a society. Decreasing marginal utilities of wealth, in theory, suggests that more egalitarian distributions of wealth are more efficient than inegalitarian distributions. Distributive inefficiency is often associated with economic inequality.
Economic inefficiency refers to a situation where "we could be doing a better job," i.e., attaining our goals at lower cost. It is the opposite of economic efficiency. In the latter case, there is no way to do a better job, given the available resources and technology. Sometimes, this type of economic efficiency is referred to as the Koopmans efficiency.
Keynesian inefficiency might be defined as incomplete use of resources (labor, capital goods, natural resources, etc.) because of inadequate aggregate demand. We are not attaining potential output, while suffering from cyclical unemployment. We could do a better job if we applied deficit spending or expansionary monetary policy.
Pareto inefficiency is a situation in which at least one person could be made better off without making anyone else worse off. In practice, this criterion is difficult to apply in a constantly changing world, so many emphasize Kaldor-Hicks efficiency and inefficiency: a situation is inefficient if someone can be made better off even after compensating those made worse off, regardless of whether the compensation actually occurs.
Productive inefficiency says that we could produce the given output at a lower cost—or could produce more output for a given cost. For example, a company that is inefficient will have higher operating costs and will be at a competitive disadvantage (or have lower profits than other firms in the market). See Sickles and Zelenyuk (2019, Chapter 3) for more extensive discussions.
Resource-market inefficiency refers to barriers that prevent full adjustment of resource markets, so that resources are either unused or misused. For example, structural unemployment results from barriers of mobility in labor markets which prevent workers from moving to places and occupations where there are job vacancies. Thus, unemployed workers can co-exist with unfilled job vacancies.
X-inefficiency refers to inefficiency in the "black box" of production, connecting inputs to outputs. This type of inefficiency says that we could be organizing people or production processes more effectively. Often problems of "morale" or "bureaucratic inertia" cause X-inefficiency.
Productive inefficiency, resource-market inefficiency, and X-inefficiency might be analyzed using data envelopment analysis and similar methods.
Mathematical expression
Efficiency is often measured as the ratio of useful output to total input, which can be expressed with the mathematical formula r=P/C, where P is the amount of useful output ("product") produced per the amount C ("cost") of resources consumed. This may correspond to a percentage if products and consumables are quantified in compatible units, and if consumables are transformed into products via a conservative process. For example, in the analysis of the energy conversion efficiency of heat engines in thermodynamics, the product P may be the amount of useful work output, while the consumable C is the amount of high-temperature heat input. Due to the conservation of energy, P can never be greater than C, and so the efficiency r is never greater than 100% (and in fact must be even less at finite temperatures).
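To make the ratio concrete, here is a minimal Python sketch (not part of the original article; all numeric values are hypothetical) that computes r = P/C for a heat engine and compares it with the Carnot limit 1 − T_cold/T_hot, the second-law bound alluded to by the finite-temperature remark above.

```python
# Illustrative sketch: efficiency as the ratio of useful output to total input.
# All numbers are hypothetical example values, not measurements.

def efficiency(useful_output: float, total_input: float) -> float:
    """Return r = P / C, the ratio of useful output P to consumed input C."""
    if total_input <= 0:
        raise ValueError("total input must be positive")
    return useful_output / total_input

def carnot_limit(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Upper bound on heat-engine efficiency set by the second law."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Hypothetical heat engine: 120 MJ of useful work from 400 MJ of heat input.
r = efficiency(useful_output=120e6, total_input=400e6)         # 0.30, i.e. 30%
limit = carnot_limit(t_hot_kelvin=800.0, t_cold_kelvin=300.0)   # 0.625, i.e. 62.5%

print(f"measured efficiency: {r:.1%}")
print(f"Carnot limit:        {limit:.1%}")
```

As the comments note, the efficiency of the hypothetical engine stays below its Carnot limit, consistent with the statement that r can never reach 100% at finite temperatures.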
In science and technology
In physics
Useful work per quantity of energy, mechanical advantage over ideal mechanical advantage, often denoted by the Greek lowercase letter η (Eta):
Electrical efficiency
Energy conversion efficiency
Mechanical efficiency
Thermal efficiency, ratio of work done to thermal energy consumed
Efficient energy use, the objective of maximising efficiency
In thermodynamics:
Energy conversion efficiency, measure of second law thermodynamic loss
Radiation efficiency, ratio of radiated power to power absorbed at the terminals of an antenna
Volumetric efficiency, in internal combustion engine design
Lift-to-drag ratio
Faraday efficiency, electrolysis
Quantum efficiency, a measure of sensitivity of a photosensitive device
Grating efficiency, a generalization of the reflectance of a mirror, extended to a diffraction grating
In economics
Productivity improving technologies
Economic efficiency, the extent to which waste or other undesirable features are avoided
Market efficiency, the extent to which a given market resembles the ideal of an efficient market
Pareto efficiency, a state of its being impossible to make one individual better off, without making any other individual worse off
Kaldor-Hicks efficiency, a less stringent version of Pareto efficiency
Allocative efficiency, the optimal distribution of goods
Efficiency wages, paying workers more than the market rate for increased productivity
Business efficiency, revenues relative to expenses, etc.
Efficiency Movement, of the Progressive Era (1890–1932), advocated efficiency in the economy, society and government
In other sciences
In computing:
Algorithmic efficiency, optimizing the speed and memory requirements of a computer program.
A non-functional requirement (criterion for quality) in systems design and systems architecture which says something about the resource consumption for a given load
Efficiency factor, in data communications
Storage efficiency, effectiveness of computer data storage
Efficiency (statistics), a measure of desirability of an estimator
Material efficiency, compares material requirements between construction projects or physical processes
Administrative efficiency, measuring transparency within public authorities and simplicity of rules and procedures for citizens and businesses
In biology:
Photosynthetic efficiency
Ecological efficiency
See also
Jevons paradox
References
Economic efficiency
Heat transfer
Engineering concepts
Waste management
Waste of resources | 0.768237 | 0.994221 | 0.763797 |
Pre-industrial society | Pre-industrial society refers to social attributes and forms of political and cultural organization that were prevalent before the advent of the Industrial Revolution, which occurred from 1750 to 1850. Pre-industrial refers to a time before there were machines and tools to help perform tasks en masse. Pre-industrial civilization dates back to centuries ago, but the main era known as the pre-industrial society occurred right before the industrial society. Pre-Industrial societies vary from region to region depending on the culture of a given area or history of social and political life. Europe was known for its feudal system and the Italian Renaissance.
The term "pre-industrial" is also used as a benchmark for environmental conditions before the development of industrial society: for example, the
Paris Agreement, adopted in Paris on 12 December, 2015 and in force from 4 November, 2016, "aims to limit global warming to well below 2, preferably to 1.5 degrees celsius, compared to pre-industrial levels." The date for the end of the "pre-industrial era" is not defined.
Common attributes
Limited production
Extreme agricultural economy
Limited division of labor. In pre-industrial societies, production was relatively simple and the number of specialized crafts was limited.
Limited variation of social classes
Parochialism—Communications were limited between communities in pre-industrial societies. Few had the opportunity to see or hear beyond their own village. Industrial societies grew with the help of faster means of communication, having more information at hand about the world, allowing knowledge transfer and cultural diffusion between them.
Populations grew at substantial rates
Social classes: peasants and lords
Subsistence level of living
Population dependent on peasants for food
People were located in villages rather than in cities
Economic systems
Hunter-gatherer society
Commodity market
Mercantilism
Subsistence agriculture
Subsistence
Labor conditions
Harsh working conditions were prevalent long before the Industrial Revolution took place. Pre-industrial society was very static, and child labour, dirty living conditions, and long working hours were just as prevalent before the Industrial Revolution.
See also
Agrarian society
Industrialisation
Modernization theory
Traditional society
Dependency Theory
Imperialism
Hunter gatherers
Low technology
Transhumance
Nomads
Pastoral nomads
Nomadic
Post-industrial society
Proto-industrialization
References
Sociological terminology
Industrial Revolution | 0.768853 | 0.99342 | 0.763794 |
Environmental conflict | Environmental conflicts, socio-environmental conflict or ecological distribution conflicts (EDCs) are social conflicts caused by environmental degradation or by unequal distribution of environmental resources. The Environmental Justice Atlas documented 3,100 environmental conflicts worldwide as of April 2020 and emphasised that many more conflicts remained undocumented.
Parties involved in these conflicts include locally affected communities, states, companies and investors, and social or environmental movements; typically environmental defenders are protecting their homelands from resource extraction or hazardous waste disposal. Resource extraction and hazardous waste activities often create resource scarcities (such as by overfishing or deforestation), pollute the environment, and degrade the living space for humans and nature, resulting in conflict. A particular case of environmental conflicts are forestry conflicts, or forest conflicts which "are broadly viewed as struggles of varying intensity between interest groups, over values and issues related to forest policy and the use of forest resources". In the last decades, a growing number of these have been identified globally.
Frequently environmental conflicts focus on environmental justice issues, the rights of indigenous people, the rights of peasants, or threats to communities whose livelihoods are dependent on the ocean. Outcomes of local conflicts are increasingly influenced by trans-national environmental justice networks that comprise the global environmental justice movement.
Environmental conflict can complicate response to natural disaster or exacerbate existing conflicts, especially in the context of geopolitical disputes or where communities have been displaced to create environmental migrants. The study of these conflicts is related to the fields of ecological economics, political ecology, and environmental justice.
Causes
The origin of environmental conflicts can be directly linked to the industrial economy. As less than 10% of materials and energy are recycled, the industrial economy is constantly expanding energy and material extraction at commodity frontiers through two main processes:
Appropriating new natural resources through territorial claims and land grabs.
Making exploitation of existing sites more efficient through investments or social and technical innovation.
EDCs are caused by the unfair distribution of environmental costs and benefits. These conflicts arise from social inequality, contested claims over territory, the proliferation of extractive industries, and the impacts of the economic industrialization over the past centuries. Oil, mining, and agriculture industries are focal points of environmental conflicts.
Types of conflicts
A 2020 paper mapped the arguments and concerns of environmental defenders in over 2743 conflicts found in the Environmental Justice Atlas (EJAtlas). The analysis found that the industrial sectors most frequently challenged by environmental conflicts were mining (21%), fossil energy (17%), biomass and land uses (15%), and water management (14%). Killings of environmental defenders happened in 13% of the reported cases.
There was also a distinct difference in the types of conflict found in high and low income countries. There were more conflicts around conservation, water management, and biomass and land use in low income countries; while in high income countries almost half of conflicts focused on waste management, tourism, nuclear power, industrial zones, and other infrastructure projects. The study also found that most conflicts start with self-organized local groups defending against infringement, with a focus on non-violent tactics.
Water protectors and land defenders who defend indigenous rights are criminalized at a much higher rate than in other conflicts.
Environmental conflicts can be classified based on the different stages of the commodity chain: during the extraction of energy sources or materials, in the transportation and production of goods, or at the final disposal of waste.
EJAtlas Categories
The EJAtlas was founded and is co-directed by Leah Temper and Joan Martinez-Alier, and it is coordinated by Daniela Del Bene. Its aim is “to document, understand and analyse the political outcomes that emerge or that may emerge” from ecological distribution conflicts. It is housed at the ICTA of the Universitat Autonoma de Barcelona. Since 2012, academics and activists have collaborated to write the entries, reaching 3,500 by July 2021.
The EJ Atlas identifies ten categories of ecological distribution conflicts:
Biodiversity conservation conflicts
Biomass and land conflicts (Forests, Agriculture, Fisheries and Livestock Management)
Fossil Fuels and Climate Justice/Energy
Industrial and Utilities Conflicts
Infrastructure and Built Environment
Mineral Ores and Building Materials Extraction
Nuclear
Tourism Recreation
Waste Management
Water Management
Ecological distribution conflicts
Ecological Distribution Conflicts (EDCs) were introduced as a concept in 1995 by Joan Martínez-Alier and Martin O'Connor to facilitate more systematic documentation and analysis of environmental conflicts and to produce a more coherent body of academic, activist, and legal work around them. EDCs arise from the unfair access to natural resources, unequally distributed burdens of environmental pollution, and relate to the exercise of power by different social actors when they enter into disputes over access to or impacts on natural resources. For example, a factory may pollute a river thus affecting the community whose livelihood depends on the water of the river. The same can apply to the climate crisis, which may cause sea level rise on some Pacific islands. This type of damage is often not valued by the market, preventing those affected from being compensated.
Ecological conflicts occur at both global and local scales. Often conflicts take place between the global South and the global North, e.g. a Finnish forest company operating in Indonesia, or in economic peripheries, although there is a growing emergence of conflicts in Europe, including violent ones. There are also local conflicts that occur within a short commodity chain (e.g. local extraction of sand and gravel for a nearby cement factory).
Intellectual history
Since its conception, the term Ecological Distribution Conflict has been linked to research from the fields of political ecology, ecological economics, and ecofeminism. It has also been adopted into a non-academic setting through the environmental justice movement, where it branches academia and activism to assist social movements in legal struggles.
In his 1847 lectures ‘Wage Labour and Capital’, Karl Marx introduced the idea that economic relations under capitalism are inherently exploitative, meaning economic inequality is an inevitability of the system. He theorised that this is because capitalism expands through capital accumulation, an ever-increasing process which requires the economic subjugation of parts of the population in order to function.
Building on this theory, academics in the field of political economy created the term ‘economic distribution conflicts’ to describe the conflicts that occur from this inherent economic inequality. This type of conflict typically occurs between parties with an economic relationship but unequal power dynamic, such as buyers and sellers, or debtors and creditors.
However, Joan Martínez-Alier and Martin O'Connor noticed that this term focuses solely on the economy, omitting the conflicts that do not occur from economic inequality but from the unequal distribution of environmental resources. In response, in 1995, they coined the term ‘ecological distribution conflict’. This type of conflict occurs at commodity frontiers, which are constantly being moved and reframed due to society's unsustainable social metabolism. These conflicts might occur between extractive industries and Indigenous populations, or between polluting actors and those living on marginalised land. Its roots can still be seen in Marxian theory, as it is based on the idea that capitalism's need for expansion drives inequality and conflict.
Unfair ecological distribution can be attributed to capitalism as a system of cost-shifting. Neoclassical economics usually consider these impacts as “market failures” or “externalities” that can be valued in monetary terms and internalized into the price system. Ecological economics and political ecology scholars oppose the idea of economic commensuration that could form the basis of eco-compensation mechanisms for impacted communities. Instead, they advocate for different valuation languages such as sacredness, livelihood, rights of nature, Indigenous territorial rights, archaeological values, and ecological or aesthetic value.
Social movements
Ecological distribution conflicts have given rise to many environmental justice movements around the globe. Environmental justice scholars conclude that these conflicts are a force for sustainability. These scholars study the dynamics that drive these conflicts towards an environmental justice success or a failure.
Globally, around 17% of all environmental conflicts registered in the EJAtlas report environmental justice 'successes', such as stopping an unsustainable project or redistributing resources in a more egalitarian way.
Movements usually shape their repertoires of contention as protest forms and direct actions, which are influenced by national and local backgrounds. In environmental justice struggles, the biophysical characteristics of the conflict can further shape the forms of mobilization and direct action. Resistance strategies can take advantage of ‘biophysical opportunity structures’, where they attempt to identify, change or disrupt the damaging ecological processes they are confronting.
Finally, the ‘collective action frames’ of movements emerging in response to environmental conflicts becomes very powerful when they challenge the mainstream relationship of human societies with the environment. These frames are often expressed through pithy protest slogans, that scholars refer to as the ‘vocabulary of environmental justice’ and which includes concepts and phrases such as ‘environmental racism’, ‘tree plantations are not forests’, ‘keep the oil in the soil’, ‘keep the coal in the hole’ and the like, resonating and empathizing with those communities affected by EDC.
Environmentalism of the poor
Some scholars make a distinction between environmentalist conflicts that have an objective of sustainability or resource conservation and environmental conflicts more broadly (which are any conflict over a natural resource). The former type of conflict gives rise to environmentalism of the poor, in which environmental defenders protect their land from degradation by industrial economic forces. Environmentalist conflicts tend to be intermodal conflicts in which peasant or agricultural land uses are in conflict with industrial uses (such as mining). Intramodal conflicts, in which peasants dispute amongst themselves about land use may not be environmentalist.
In this division, movements such as La Via Campesina (LVC) or the International Planning Committee for Food Sovereignty (IPC) can be considered halfway between these two approaches. In their defense of peasant agriculture and against large-scale capitalist industrial agriculture, both LVC and the IPC have fundamentally contributed to promoting agroecology as a sustainable agriculture model across the globe, adopting an intermodal approach against industrial agriculture and providing new sources of education to poor communities that could encourage informed participation in the redistribution of resources. A similar attitude has shaped the action of the Brazilian Landless Farmworkers movement (MST) in the way it has struggled against the idea of productivity and the use of chemical products by agribusiness operations that destroy resources rich in fertility and biodiversity.
Such movements often question the dominant form of valuation of resource uses (i.e. monetary values and cost-benefit analyses) and renegotiate the values deemed relevant for sustainability. Sometimes, particularly when the resistance weakens, demands for monetary compensation are made (in a framework of ‘weak sustainability’). The same groups, at other times or when feeling stronger, might argue in terms of values which are not commensurate with money, such as indigenous territorial rights, irreversible ecological values, the human right to health, or the sacredness of Mother Earth, thereby redefining the very economic, ecological and social principles behind particular uses of the land and implicitly defending a conception of ‘strong sustainability’. Such intermodal conflicts are those that most clearly push towards broader sustainability transitions.
Conflict resolution
A distinct field of conflict resolution, called Environmental Conflict Resolution, focuses on developing collaborative methods for de-escalating and resolving environmental conflicts. As a field of practice, people working on conflict resolution focus on collaboration and consensus building among stakeholders. An analysis of such resolution processes found that the best predictor of successful resolution was sufficient consultation with all parties involved.
A newer tool with potential in this regard is the development of video games that present players with distinct options for handling conflicts over environmental resources, for instance in the fishery sector.
Critique
Some scholars critique the focus on natural resources used in descriptions of environmental conflict. These approaches often frame the natural environment in commercial terms and fail to acknowledge the underlying value of a healthy environment.
See also
US Institute for Environmental Conflict Resolution
Inventory of Conflict and Environment
References
Environmental controversies
Social conflict
Environmental justice | 0.784113 | 0.974065 | 0.763777 |
Glasser's choice theory | The term "choice theory" is the work of William Glasser, MD, author of the book so named, and is the culmination of some 50 years of theory and practice in psychology and counselling.
Characteristics
Choice theory posits that the behaviors we choose are central to our existence. Our behavior (choices) is motivated by five genetically driven needs in hierarchical order: survival, love, power, freedom, and fun.
The most basic human needs are survival (physical component) and love (mental component). Without physical (nurturing) and emotional (love), an infant will not survive to attain power, freedom, and fun.
“No matter how well-nourished and intellectually stimulated a child is, going without human touch can stunt his mental, emotional, and even physical growth”, a claim supported by Livestrong's reporting on the influence of physical touch on a child's development and by “Touching Empathy: Lack of Physical Affection Can Actually Kill Babies”, Psychology Today, October 1, 2010.
Survival needs include:
Food
Clothing
Shelter
Breathing
Personal safety
Security
Sex and reproduction (having children)
And four fundamental psychological needs:
Belonging/connecting/love
Power/significance/competence
Freedom/autonomy
Fun/learning
Choice theory suggests the existence of a "quality world." The idea of a "quality world" in choice theory has been compared to Jungian archetypes, but Glasser's acknowledgement of this connection is unclear. Some argue that Glasser's "quality world" and what Jung would call healthy archetypes share similarities.
Our "quality world" images are our role models of an individual's "perfect" world of parents, relations, possessions, beliefs, etc. How each person's "quality world" is somewhat unusual, even in the same family of origin, is taken for granted.
Starting from birth and continuing throughout our lives, each person places significant role models, significant possessions, and significant systems of belief (religion, cultural values, icons, etc.) into a mostly unconscious framework Glasser called our "quality world". The issue of negative role models and stereotypes is not extensively discussed in choice theory.
Glasser also posits a "comparing place," where we compare and contrast our perceptions of people, places, and things immediately in front of us against our ideal images (archetypes) of these in our Quality World framework. Our subconscious pushes us towards calibrating—as best we can—our real-world experience with our quality world (archetypes).
Behavior ("total behavior" in Glasser's terms) is made up of these four components: acting, thinking, feeling, and physiology. Glasser suggests we have considerable control or choice over the first two of these, yet little ability to directly choose the latter two as they are more deeply sub- and unconscious. These four components remain closely intertwined, and the choices we make in our thinking and acting will greatly affect our feelings and physiology.
Glasser frequently emphasizes that failed or strained relationships with significant individuals (spouses, parents, children, friends, and colleagues) can contribute to personal unhappiness.
The symptoms of unhappiness are widely variable and are often seen as mental illnesses. Glasser believed that "pleasure" and "happiness" are related but far from synonymous. Sex, for example, is a "pleasure" but may well be divorced from a "satisfactory relationship," which is a precondition for lasting "happiness" in life. Hence the intense focus on the improvement of relationships in counseling with choice theory—the "new reality therapy". Individuals who are familiar with both reality therapy and choice theory may have a preference for the latter, which is considered a more modern approach.
According to choice theory, mental illness can be linked to personal unhappiness. Glasser champions how we are able to learn and choose alternate behaviors that result in greater personal satisfaction. Reality therapy is a choice theory-based counseling process focused on helping clients learn to make those self-optimizing choices.
The Ten Axioms of Choice
The only person whose behavior we can control is ourselves.
All we can give another person is information.
All long-lasting psychological problems are relationship problems.
The problem relationship is always part of our present life.
What happened in the past has everything to do with who we are today, but we can only satisfy our basic needs right now and plan to continue satisfying them in the future.
We can only satisfy our needs by satisfying the pictures in our quality world.
All we do is behave.
All behavior is total behavior and is made up of four components: acting, thinking, feeling, and physiology.
All of our total behavior is chosen, but we only have direct control over the acting and thinking components. We can only control our feelings and physiology indirectly through how we choose to act and think.
All total behavior is designated by verbs and named by the part that is the most recognizable.
In Classroom Management
William Glasser's choice theory begins with the premise that behavior is not separate from choice; we all choose how to behave at any time. Second, we cannot control anyone's behavior but our own. Glasser emphasized the importance of classroom meetings as a means to improve communication and solve classroom problems. Glasser suggested that teachers should assist students in envisioning a fulfilling school experience and planning the choices that would enable them to achieve it.
For example, Johnny Waits is an 18-year-old high school senior and plans on attending college to become a computer programmer. Glasser suggests that Johnny could be learning as much as he can about computers instead of reading Plato. Glasser proposed a curriculum approach that emphasizes practical, real-world topics chosen by students based on their interests and inclinations. This approach is referred to as the quality curriculum. The quality curriculum places particular emphasis on topics that have practical career applications. According to Glasser's approach, teachers facilitate discussions with students to identify topics they are interested in exploring further when introducing new material. In line with Glasser's approach, students are expected to articulate the practical value of the material they choose to explore.
Education
Glasser did not endorse Summerhill, and the quality schools he oversaw typically had conventional curriculum topics. The main innovation of these schools was a deeper, more humanistic approach to the group process between teachers, students, and learning.
Critiques
In a book review, Christopher White writes that Glasser believes everything in the DSM-IV-TR is a result of an individual's brain creatively expressing its unhappiness. White also notes that Glasser criticizes the psychiatric profession and questions the effectiveness of medications in treating mental illness. White points out that the book does not provide a set of randomized clinical trials demonstrating the success of Glasser's teachings.
See also
Cognitive psychology
Introspection illusion
Léopold Szondi
References
External links
The William Glasser Institute official website
The Sudbury Valley School official website
Cognitive science | 0.774601 | 0.986021 | 0.763773 |
Astrobotany | Astrobotany is an applied sub-discipline of botany that is the study of plants in space environments. It is a branch of astrobiology and botany.
Astrobotany concerns both the study of extraterrestrial vegetation discovery, as well as research into the growth of terrestrial vegetation in outer space by humans.
The growth of plants in outer space, typically in a weightless but pressurized controlled environment in dedicated space gardens, has been a subject of study. In the context of human spaceflight, they can be consumed as food and/or provide a refreshing atmosphere. Plants can metabolize carbon dioxide in the air to produce valuable oxygen, and can help control cabin humidity. Growing plants in space may provide a psychological benefit to human spaceflight crews.
The first challenge in growing plants in space is how to get plants to grow without gravity. This runs into difficulties regarding the effects of gravity on root development, providing appropriate types of lighting, and other challenges. In particular, the nutrient supply to roots as well as the nutrient biogeochemical cycles, and the microbiological interactions in soil-based substrates are particularly complex, but have been shown to make space farming possible in hypo- and microgravity.
NASA plans to grow plants in space to help feed astronauts, and to provide psychological benefits for long-term space flight.
Extraterrestrial vegetation
Vegetation red edge
The vegetation red edge (VRE) is a biosignature of near-infrared wavelengths that is observable through telescopic observation of Earth, and has increased in strength as evolution has made vegetative life more complex. On Earth, this phenomenon has been detected through analysis of planetshine on the Moon, which can show a reflection spectrum that spikes at 700 nm. In an article published in Nature in 1993, Sagan et al. described Galileo's detection of infrared light radiating from Earth as evidence of "widespread biological activity" on Earth, with evidence of photosynthesis a particularly strong factor.
The increase in strength of Earth's VRE biosignature has been assessed through modelling of early-Earth radiation. Mosses and ferns, which were dominant on Earth in the Ordovician and Carboniferous periods, produce weaker detectable infrared radiation spikes at 700 nm than modern Earth vegetation. Astrobotanists focused on extraterrestrial vegetation have thus theorized that, by using these same models, it could be possible to measure whether exoplanets in their respective Goldilocks zones currently hold vegetation and, by comparing VRE biosignatures to modelled historic Earth radiation, estimate the complexity of this vegetation.
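The basic spectral contrast that such red-edge searches rely on can be illustrated with a short calculation. The sketch below is a minimal illustration rather than an actual astrobotanical detection pipeline: the band limits on either side of the ~700 nm edge, the sample reflectance values, and the 0.4 threshold are all assumptions chosen only to show the idea.

```python
# Minimal sketch: quantify a vegetation-red-edge-like jump in a reflectance
# spectrum by contrasting bands just below and just above ~700 nm.
# Band limits, sample spectrum, and threshold are illustrative assumptions.

def red_edge_index(wavelengths_nm, reflectance):
    """Return a normalized contrast between near-IR (740-780 nm) and
    red (650-690 nm) reflectance; higher values suggest a stronger edge."""
    def band_mean(lo, hi):
        vals = [r for w, r in zip(wavelengths_nm, reflectance) if lo <= w <= hi]
        return sum(vals) / len(vals)

    red = band_mean(650, 690)
    nir = band_mean(740, 780)
    return (nir - red) / (nir + red)

# Hypothetical disk-averaged spectrum sampled every 20 nm from 650 to 770 nm.
wl = [650, 670, 690, 710, 730, 750, 770]
refl = [0.05, 0.05, 0.06, 0.15, 0.28, 0.33, 0.34]  # jump near 700 nm

idx = red_edge_index(wl, refl)
print(f"red-edge index = {idx:.2f}")  # ~0.7 for this vegetated-like spectrum
if idx > 0.4:  # threshold is purely illustrative
    print("spectrum shows a strong red-edge-like feature")
```

Real analyses work with noisy, disk-averaged spectra and must also rule out mineral and cloud effects, as discussed in the list of obstacles below.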
There are a number of obstacles to the detection of exoplanetary VREs:
Galileo's detection of Earth's VRE was facilitated by the spacecraft's physical proximity to Earth; up until the launch of the James Webb Space Telescope in December 2021, telescopic technology was not advanced enough to detect the telltale infrared radiation spikes of VRE in distant exoplanet systems.
Heavy cloud cover has been observed to be detrimental to the detection of VRE, as more cloud cover increases overall albedo, which makes it more difficult to detect variety in radiation wavelengths. In addition, clouds are detrimental to surface observation, leading to an estimate that at least 20% of the surface must be both vegetated and cloud-free for exoplanetary VRE to be detectable from telescopes on Earth.
Certain minerals have been shown to demonstrate sharp-edged reflectance spectra similar to those of light-harvesting photosynthetic pigments. This means that mineral origins for VRE-like effects must first be ruled out before a biological explanation can be confirmed. This may be difficult to achieve from Earth, as minerals in finer regolith particle form demonstrate different reflective characteristics than the large crystal forms found on Earth. One suggestion made by Sara Seager et al. is to use atmospheric measurements to determine the level of atmospheric oxygen, which, if high, would rule out a surface abundance of non-oxidised minerals.
Vegetation searches
Dubbed ‘the creator of astrobotany’, Gavriil Adrianovich Tikhov coined the term in 1945 to describe the emerging field surrounding the search for extraterrestrial vegetation. Owing to storms on Mars that cause surface darkening visible from Earth, Tikhov's contemporaries often believed in the existence of Martian vegetation with colour changes comparable to Earth's seasonal vegetation. Building on conclusions reached by examining earthshine on the Moon in 1914, Tikhov used telescopic colour filters in 1918 and 1921 to establish that chlorophylls were undetectable on the Martian surface, leading him to hypothesize that Martian vegetation was likely blue-hued and composed mostly of mosses and lichens. Tikhov's research into astrobotany would later develop into research on growing plants in space and on demonstrating the possibility of plants growing in extraterrestrial conditions (especially by comparing the climates of Mars and Siberia), but he was the first known astronomer to use colour to attempt to measure the level of vegetation on an extraterrestrial body.
After Galileo's 1990 fly-by demonstrating the VRE effect on Earth, astrobotanical interest in extraterrestrial vegetation has mainly focused on examining the feasibility of VRE detection, and a number of projects have been proposed:
Both the European Space Agency Darwin project and NASA Terrestrial Planet Finder were cited as projects that could have analyzed exoplanetary VRE biosignatures before being cancelled in 2007 and 2011, respectively.
The ESO Extremely Large Telescope, expected to begin operations in 2028, has also been cited as another telescope that will be able to detect exoplanetary VRE biosignatures.
Future NASA space telescopes, such as the Habitable Exoplanet Imaging Mission, have been planned with the capacity to examine for VRE biosignatures.
Since its launch in late 2021, the James Webb Space Telescope has been used to search the TRAPPIST-1 exoplanet system for signs of extraterrestrial vegetation by capturing atmospheric data, including a possible VRE biosignature, made visible when TRAPPIST-1's exoplanets pass across the face of the star. NASA has judged three of TRAPPIST-1's rocky exoplanets (1e, 1f, and 1g) to be within the habitable zone for liquid water (and other biological matter, such as vegetation).
Character of extraterrestrial vegetation
Accurate description of extraterrestrial vegetation character is highly speculative, but follows "solid physics and atmospheric chemistry" principles, according to Professor John Albert Raven from the University of Dundee.
One factor determining the character of extraterrestrial vegetation is the star at the centre of the system. The Sun is a G-type main-sequence star, which provides the conditions for chlorophyll photosynthesis and radiation levels that govern atmospheric conditions such as wind, affecting evolutionary development. TRAPPIST-1 is an ultra-cool red dwarf star, providing almost half the energy of the Sun, leading to astrobotanical speculation that vegetation in the TRAPPIST-1 exoplanet system could be much darker, even black to human eyes.
F-type main-sequence stars, on the other hand, such as sigma Boötis, have been speculated to encourage the growth of either yellow-tinted, or blue-tinted extraterrestrial vegetation within its exoplanet system, in order to reflect back the high levels of blue photons emitted by stars of its type.
Growing plants in space
The study of plant response in space environments is another subject of astrobotany research. In space, plants encounter unique environmental stressors not found on Earth including microgravity, ionizing radiation, and oxidative stress. Experiments have shown that these stressors cause genetic alterations in plant metabolism pathways. Changes in genetic expression have shown that plants respond on a molecular level to a space environment. Astrobotanical research has been applied to the challenges of creating life support systems both in space and on other planets, primarily Mars.
History
Russian scientist Konstantin Tsiolkovsky was one of the first people to discuss using photosynthetic life as a resource in space agricultural systems. Speculation about plant cultivation in space has existed since the early 20th century. The term astrobotany was first used in 1945 by Soviet astronomer and astrobiology pioneer Gavriil Adrianovich Tikhov, who is considered to be the father of astrobotany. Research in the field has been conducted both by growing Earth plants in space environments and by searching for botanical life on other planets.
Seeds
The first organisms in space were "specially developed strains of seeds" launched into space on 9 July 1946 on a U.S.-launched V-2 rocket. These samples were not recovered. The first seeds launched into space and successfully recovered were maize seeds launched on 30 July 1946, which were soon followed by rye and cotton. These early suborbital biological experiments were handled by Harvard University and the Naval Research Laboratory and were concerned with radiation exposure of living tissue. In 1971, 500 tree seeds (loblolly pine, sycamore, sweetgum, redwood, and Douglas fir) were flown around the Moon on Apollo 14. These Moon trees were planted and grown with controls back on Earth, where no changes were detected.
Plants
In 1982, the crew of the Soviet Salyut 7 space station conducted an experiment, prepared by Lithuanian scientists (Alfonsas Merkys and others), and grew some Arabidopsis using Fiton-3 experimental micro-greenhouse apparatus, thus becoming the first plants to flower and produce seeds in space. A Skylab experiment studied the effects of gravity and light on rice plants. The SVET-2 Space Greenhouse successfully achieved seed to seed plant growth in 1997 aboard space station Mir. Bion 5 carried Daucus carota and Bion 7 carried maize (aka corn).
Plant research continued on the International Space Station. The Biomass Production System was used during ISS Expedition 4. The Vegetable Production System (Veggie) was later used aboard the ISS. Plants tested in Veggie before going into space included lettuce, Swiss chard, radishes, Chinese cabbage and peas. Red romaine lettuce was grown in space on Expedition 40 and was harvested when mature, frozen and tested back on Earth. Expedition 44 members became the first American astronauts to eat plants grown in space on 10 August 2015, when their crop of red romaine was harvested. Since 2003 Russian cosmonauts have been eating half of their crop while the other half goes towards further research. In 2012, a sunflower bloomed aboard the ISS under the care of NASA astronaut Donald Pettit. In January 2016, US astronauts announced that a zinnia had blossomed aboard the ISS.
In 2018 the Veggie-3 experiment was tested with plant pillows and root mats. One of the goals is to grow food for crew consumption. Crops tested at this time include cabbage, lettuce, and mizuna.
Known terrestrial plants grown in space
Plants that have been grown in space include:
Arabidopsis (Thale cress)
Bok choy (Tokyo Bekana) (Chinese cabbage)
Tulips
Kalanchoe
Flax
Onions, peas, radishes, lettuce, wheat, garlic, cucumbers, parsley, potato, and dill
Cinnamon basil
Cabbage
Zinnia hybrida ('Profusion' var.)
Red romaine lettuce ('Outredgeous' var.)
Sunflower
Ceratopteris richardii
Brachypodium distachyon
Some plants, like tobacco and morning glory, have not been directly grown in space but have been subjected to space environments and then germinated and grown on Earth.
Plants for life support in space
Algae was the first candidate for human-plant life support systems. Initial research in the 1950s and 1960s used Chlorella, Anacystis, Synechocystis, Scenedesmus, Synechococcus, and Spirulina species to study how photosynthetic organisms could be used for oxygen and carbon dioxide cycling in closed systems. Later research through Russia's BIOS program and the US's CELSS program investigated the use of higher plants to fulfill the roles of atmospheric regulators, waste recyclers, and food for sustained missions. The crops most commonly studied include starch crops such as wheat, potato, and rice; protein-rich crops such as soy, peanut, and common bean; and a host of other nutrition-enhancing crops like lettuce, strawberry, and kale. Tests for optimal growth conditions in closed systems have required research both into the environmental parameters necessary for particular crops (such as differing light periods for short-day versus long-day crops) and into cultivars that are a best fit for life support system growth.
Tests of human-plant life support systems in space are relatively few compared to similar testing performed on Earth and to micro-gravity testing of plant growth in space. The first life support systems testing performed in space included gas exchange experiments with wheat, potato, and giant duckweed (Spirodela polyrhiza). Smaller scale projects, sometimes referred to as "salad machines", have been used to provide fresh produce to astronauts as a dietary supplement. Future studies have been planned to investigate the effects of keeping plants on the mental well-being of humans in confined environments.
More recent research has been focused on extrapolating these life support systems to other planets, primarily Martian bases. Interlocking closed systems called "modular biospheres" have been prototyped to support four- to five-person crews on the Martian surface. These encampments are designed as inflatable greenhouses and bases. They are anticipated to use Martian soils as a growth substrate and for wastewater treatment, along with crop cultivars developed specifically for extraplanetary life. There has also been discussion of using the Martian moon Phobos as a resource base, potentially mining frozen water and carbon dioxide from the surface and eventually using hollowed craters for autonomous growth chambers that can be harvested during mining missions.
Plant research
Astrobotanical plant research has yielded information useful to other areas of botany and horticulture. Extensive research into hydroponic systems was fielded successfully by NASA in both the CELSS and ALS programs, as was research into the effects of increased photoperiod and light intensity on various crop species. Research also led to optimization of yields beyond what had previously been achieved by indoor cropping systems. Intensive study of gas exchange and plant volatile concentrations in closed systems led to increased understanding of plant responses to extreme levels of gases such as carbon dioxide and ethylene. The use of LEDs in closed life support systems research also prompted the increased use of LEDs in indoor growing operations.
Experiments
Some experiments to do with plants include:
Bion satellites
Biomass Production System, aboard ISS
Vegetable Production System (Veggie), aboard ISS.
SVET
SVET-2, aboard Mir.
ADVASC
TAGES, aboard ISS.
Plant Growth/Plant Phototropism, aboard Skylab
Oasis plant growth unit
Plant Signaling (STS-135)
Plant growth experiment (STS-95)
NASA Clean Air Study
ECOSTRESS, 2018
Results of experiments
Several experiments have focused on how plant growth and distribution in micro-gravity space conditions compare with growth under Earth conditions. This enables scientists to explore whether certain plant growth patterns are innate or environmentally driven. For instance, Allan H. Brown tested seedling movements aboard the Space Shuttle Columbia in 1983. Sunflower seedling movements were recorded while in orbit, and the seedlings still exhibited rotational growth and circumnutation despite the lack of gravity, showing that these behaviours are built-in.
Other experiments have found that plants have the ability to exhibit gravitropism even in low-gravity conditions. For instance, the ESA's European Modular Cultivation System enables experimentation with plant growth; acting as a miniature greenhouse, it allows scientists aboard the International Space Station to investigate how plants react in variable-gravity conditions. The Gravi-1 experiment (2008) utilized the EMCS to study lentil seedling growth and amyloplast movement along calcium-dependent pathways. The results of this experiment found that the plants were able to sense the direction of gravity even at very low levels. A later experiment with the EMCS placed 768 lentil seedlings in a centrifuge to simulate various gravitational changes; this experiment, Gravi-2 (2014), showed that plants change calcium signalling related to root growth when grown at several gravity levels.
Many experiments take a more generalized approach, observing overall plant growth patterns rather than one specific growth behaviour. One such experiment from the Canadian Space Agency, for example, found that white spruce seedlings grew differently in the microgravity space environment compared with Earth-bound seedlings; the space seedlings exhibited enhanced growth from the shoots and needles, and also had randomized amyloplast distribution compared with the Earth-bound control group.
In popular culture
Astrobotany has had several acknowledgements in science fiction literature and film.
In the 1972 film Silent Running it is implied that, in the future, all plant life on Earth has become extinct. As many specimens as possible have been preserved in a series of enormous, greenhouse-like geodesic domes attached to a large spaceship named Valley Forge, part of a fleet of American Airlines space freighters positioned just outside the orbit of Saturn.
Charles Sheffield's 1989 novel Proteus Unbound mentions the use of algae suspended in a giant hollow "planet" as a biofuel, creating a closed energy system.
The 2009 film Avatar features an exobiologist, Dr. Grace Augustine, who wrote the first astrobotanical text on the flora of Pandora.
The 2011 book The Martian by Andy Weir, and its 2015 film adaptation, highlight the heroic survival of botanist Mark Watney, who uses his horticultural background to grow potatoes for food while stranded on Mars.
See also
Space farming
References
Branches of botany
Astrobiology
Behavioural genetics
Behavioural genetics, also referred to as behaviour genetics, is a field of scientific research that uses genetic methods to investigate the nature and origins of individual differences in behaviour. While the name "behavioural genetics" connotes a focus on genetic influences, the field broadly investigates the extent to which genetic and environmental factors influence individual differences, and the development of research designs that can remove the confounding of genes and environment. Behavioural genetics was founded as a scientific discipline by Francis Galton in the late 19th century, only to be discredited through association with eugenics movements before and during World War II. In the latter half of the 20th century, the field saw renewed prominence with research on inheritance of behaviour and mental illness in humans (typically using twin and family studies), as well as research on genetically informative model organisms through selective breeding and crosses. In the late 20th and early 21st centuries, technological advances in molecular genetics made it possible to measure and modify the genome directly. This led to major advances in model organism research (e.g., knockout mice) and in human studies (e.g., genome-wide association studies), leading to new scientific discoveries.
Findings from behavioural genetic research have broadly impacted modern understanding of the role of genetic and environmental influences on behaviour. These include evidence that nearly all researched behaviours are under a significant degree of genetic influence, and that influence tends to increase as individuals develop into adulthood. Further, most researched human behaviours are influenced by a very large number of genes and the individual effects of these genes are very small. Environmental influences also play a strong role, but they tend to make family members more different from one another, not more similar.
History
Selective breeding and the domestication of animals is perhaps the earliest evidence that humans considered the idea that individual differences in behaviour could be due to natural causes. Plato and Aristotle each speculated on the basis and mechanisms of inheritance of behavioural characteristics. Plato, for example, argued in The Republic that selective breeding among the citizenry to encourage the development of some traits and discourage others, what today might be called eugenics, was to be encouraged in the pursuit of an ideal society. Behavioural genetic concepts also existed during the English Renaissance, where William Shakespeare perhaps first coined the phrase "nature versus nurture" in The Tempest, where he wrote in Act IV, Scene I, that Caliban was "A devil, a born devil, on whose nature Nurture can never stick".
Modern-day behavioural genetics began with Sir Francis Galton, a nineteenth-century intellectual and cousin of Charles Darwin. Galton was a polymath who studied many subjects, including the heritability of human abilities and mental characteristics. One of Galton's investigations involved a large pedigree study of social and intellectual achievement in the English upper class. In 1869, 10 years after Darwin's On the Origin of Species, Galton published his results in Hereditary Genius. In this work, Galton found that the rate of "eminence" was highest among close relatives of eminent individuals, and decreased as the degree of relationship to eminent individuals decreased. While Galton could not rule out the role of environmental influences on eminence, a fact which he acknowledged, the study served to initiate an important debate about the relative roles of genes and environment on behavioural characteristics. Through his work, Galton also "introduced multivariate analysis and paved the way towards modern Bayesian statistics" that are used throughout the sciences—launching what has been dubbed the "Statistical Enlightenment".
The field of behavioural genetics, as founded by Galton, was ultimately undermined by another of Galton's intellectual contributions, the founding of the eugenics movement in 20th century society. The primary idea behind eugenics was to use selective breeding combined with knowledge about the inheritance of behaviour to improve the human species. The eugenics movement was subsequently discredited by scientific corruption and genocidal actions in Nazi Germany. Behavioural genetics was thereby discredited through its association with eugenics. The field once again gained status as a distinct scientific discipline through the publication of early texts on behavioural genetics, such as Calvin S. Hall's 1951 book chapter on behavioural genetics, in which he introduced the term "psychogenetics", which enjoyed some limited popularity in the 1960s and 1970s. However, it eventually disappeared from usage in favour of "behaviour genetics".
The start of behaviour genetics as a well-identified field was marked by the publication in 1960 of the book Behavior Genetics by John L. Fuller and William Robert (Bob) Thompson. It is widely accepted now that many if not most behaviours in animals and humans are under significant genetic influence, although the extent of genetic influence for any particular trait can differ widely. A decade later, in February 1970, the first issue of the journal Behavior Genetics was published and in 1972 the Behavior Genetics Association was formed with Theodosius Dobzhansky elected as the association's first president. The field has since grown and diversified, touching many scientific disciplines.
Methods
The primary goal of behavioural genetics is to investigate the nature and origins of individual differences in behaviour. A wide variety of different methodological approaches are used in behavioural genetic research, only a few of which are outlined below.
Animal studies
Investigators in animal behaviour genetics can carefully control for environmental factors and can experimentally manipulate genetic variants, allowing for a degree of causal inference that is not available in studies of human behavioural genetics. In animal research, selection experiments have often been employed. For example, laboratory house mice have been bred for open-field behaviour, thermoregulatory nesting, and voluntary wheel-running behaviour, and a range of methods has been developed for such designs.
Behavioural geneticists using model organisms employ a range of molecular techniques to alter, insert, or delete genes. These techniques include knockouts, floxing, gene knockdown, or genome editing using methods like CRISPR-Cas9. These techniques allow behavioural geneticists different levels of control in the model organism's genome, to evaluate the molecular, physiological, or behavioural outcome of genetic changes. Animals commonly used as model organisms in behavioural genetics include mice, zebra fish, and the nematode species C. elegans.
Developments in machine learning and AI are allowing researchers to manage the complexity and large data sets generated by such work, making increasingly sophisticated behavioural experiments possible.
Human studies
Some research designs used in behavioural genetic research are variations on family designs (also known as pedigree designs), including twin studies and adoption studies. Quantitative genetic modelling of individuals with known genetic relationships (e.g., parent-child, sibling, dizygotic and monozygotic twins) allows one to estimate to what extent genes and environment contribute to phenotypic differences among individuals.
Twin and family studies
The basic intuition of the twin study is that monozygotic twins share 100% of their genome and dizygotic twins share, on average, 50% of their segregating genome. Thus, differences between the two members of a monozygotic twin pair can only be due to differences in their environment, whereas dizygotic twins will differ from one another due to genes in addition to the environment. Under this simplistic model, if dizygotic twins differ more than monozygotic twins it can only be attributable to genetic influences. An important assumption of the twin model is the equal environment assumption that monozygotic twins have the same shared environmental experiences as dizygotic twins. If, for example, monozygotic twins tend to have more similar experiences than dizygotic twins—and these experiences themselves are not genetically mediated through gene-environment correlation mechanisms—then monozygotic twins will tend to be more similar to one another than dizygotic twins for reasons that have nothing to do with genes. While this assumption should be kept in mind when interpreting the results of twin studies, research tends to support the equal environment assumption.
Twin studies of monozygotic and dizygotic twins use a biometrical formulation to describe the influences on twin similarity and to infer heritability.
The formulation rests on the basic observation that the variance in a phenotype is due to two sources, genes and environment. More formally, P = G + E + G×E, where P is the phenotype, G is the effect of genes, E is the effect of the environment, and G×E is a gene by environment interaction. The G term can be expanded to include additive (A), dominance (D), and epistatic (I) genetic effects. Similarly, the environmental term can be expanded to include shared environment (C) and non-shared environment (E), which includes any measurement error. Dropping the gene by environment interaction for simplicity (typical in twin studies) and fully decomposing the G and E terms, we now have P = A + D + I + C + E. Twin research then models the similarity in monozygotic twins and dizygotic twins using simplified forms of this decomposition; under the simplified (ACE) model the expected twin correlations are rMZ = a² + c² for monozygotic pairs and rDZ = ½a² + c² for dizygotic pairs, where a², c², and e² denote the proportions of phenotypic variance due to additive genetic, shared environmental, and non-shared environmental influences, respectively.
The simplified Falconer formulation can then be used to derive estimates of a², c², and e². Rearranging and substituting the rMZ and rDZ equations, one can obtain an estimate of the additive genetic variance, or heritability, a² = 2(rMZ - rDZ), the non-shared environmental effect e² = 1 - rMZ and, finally, the shared environmental effect c² = rMZ - a² (equivalently, c² = 2rDZ - rMZ). The Falconer formulation is presented here to illustrate how the twin model works. Modern approaches use maximum likelihood to estimate the genetic and environmental variance components.
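As a concrete illustration of the Falconer approach described above, the short sketch below computes a², c², and e² from a pair of twin correlations. It is a minimal sketch only: the correlations are made-up example values rather than results from any particular study, and real analyses would use maximum likelihood modelling instead.

```python
# Illustrative Falconer decomposition from twin correlations.
# a2 = 2*(rMZ - rDZ), c2 = rMZ - a2 (= 2*rDZ - rMZ), e2 = 1 - rMZ.
# Input correlations are hypothetical example values.

def falconer(r_mz, r_dz):
    a2 = 2.0 * (r_mz - r_dz)          # additive genetic variance (heritability)
    c2 = r_mz - a2                    # shared environment
    e2 = 1.0 - r_mz                   # non-shared environment + measurement error
    # Crude estimates can fall outside [0, 1]; clamp for readability.
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(a2), clamp(c2), clamp(e2)

a2, c2, e2 = falconer(r_mz=0.74, r_dz=0.36)
print(f"a2={a2:.2f}, c2={c2:.2f}, e2={e2:.2f}")  # a2=0.76, c2=0.00, e2=0.26
```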
Measured genetic variants
The Human Genome Project has allowed scientists to directly genotype the sequence of human DNA nucleotides. Once genotyped, genetic variants can be tested for association with a behavioural phenotype, such as mental disorder, cognitive ability, personality, and so on.
Candidate Genes. One popular approach has been to test candidate genes for association with behavioural phenotypes, where the candidate gene is selected based on some a priori theory about biological mechanisms involved in the manifestation of a behavioural trait or phenotype. In general, such studies have proven difficult to replicate broadly, and concern has been raised that the false positive rate in this type of research is high.
Genome-wide association studies. In genome-wide association studies, researchers test the relationship of millions of genetic polymorphisms with behavioural phenotypes across the genome. This approach to genetic association studies is largely atheoretical, and typically not guided by a particular biological hypothesis regarding the phenotype. Genetic association findings for behavioural traits and psychiatric disorders have been found to be highly polygenic (involving many small genetic effects). Genetic variants identified to be associated with some trait or disease through GWAS may be used to improve disease risk predictions. However, the genetic variants identified through GWAS of common genetic variants are most likely to have a modest effect on disease risk or development of a given trait. This is different from the strong genetic contribution seen in Mendelian conditions or for some rare variants that may have a larger effect on disease.
SNP heritability and co-heritability. Recently, researchers have begun to use similarity between classically unrelated people at their measured single nucleotide polymorphisms (SNPs) to estimate genetic variation or covariation that is tagged by SNPs, using mixed effects models implemented in software such as genome-wide complex trait analysis (GCTA). To do this, researchers find the average genetic relatedness over all SNPs between all individuals in a (typically large) sample, and use Haseman–Elston regression or restricted maximum likelihood to estimate the genetic variation that is "tagged" by, or predicted by, the SNPs. The proportion of phenotypic variation that is accounted for by the genetic relatedness has been called "SNP heritability". Intuitively, SNP heritability increases to the degree that phenotypic similarity is predicted by genetic similarity at measured SNPs, and is expected to be lower than the true narrow-sense heritability to the degree that measured SNPs fail to tag (typically rare) causal variants. The value of this method is that it is an independent way to estimate heritability that does not require the same assumptions as those in twin and family studies, and that it gives insight into the allelic frequency spectrum of the causal variants underlying trait variation.
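To make the SNP-based logic above concrete, the following is a toy, Haseman–Elston-style sketch: products of standardized phenotypes for pairs of individuals are regressed on their entries in a genomic relatedness matrix, and the slope approximates SNP heritability. Everything in it is an assumption for illustration (simulated genotypes, independent common SNPs, purely additive effects, arbitrary sample sizes); it is not GCTA and is far simpler than a real analysis.

```python
# Minimal Haseman-Elston-style sketch of "SNP heritability":
# simulate genotypes and an additive phenotype, build a genomic relatedness
# matrix (GRM), then regress pairwise phenotype products on pairwise relatedness.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, h2_true = 2000, 500, 0.5          # individuals, SNPs, simulated SNP heritability

p = rng.uniform(0.1, 0.9, m)            # allele frequencies
geno = rng.binomial(2, p, size=(n, m)).astype(float)
z = (geno - 2 * p) / np.sqrt(2 * p * (1 - p))   # standardized genotypes

beta = rng.normal(0, np.sqrt(h2_true / m), m)   # small per-SNP effects
g = z @ beta
y = g + rng.normal(0, np.sqrt(1 - h2_true), n)  # phenotype with residual noise
y = (y - y.mean()) / y.std()

grm = z @ z.T / m                        # genomic relatedness matrix
iu = np.triu_indices(n, k=1)             # off-diagonal pairs only
rel = grm[iu]
prod = np.outer(y, y)[iu]

# Regression slope of phenotype products on relatedness approximates h2_SNP.
h2_est = np.cov(rel, prod)[0, 1] / np.var(rel)
print(f"estimated SNP heritability ~ {h2_est:.2f} (simulated value {h2_true})")
```

In practice, tools such as GCTA typically use restricted maximum likelihood on the full relatedness matrix, with Haseman–Elston regression often reserved for very large samples because it is computationally cheaper.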
Quasi-experimental designs
Some behavioural genetic designs are useful not to understand genetic influences on behaviour, but to control for genetic influences to test environmentally-mediated influences on behaviour. Such behavioural genetic designs may be considered a subset of natural experiments, quasi-experiments that attempt to take advantage of naturally occurring situations that mimic true experiments by providing some control over an independent variable. Natural experiments can be particularly useful when experiments are infeasible, due to practical or ethical limitations.
A general limitation of observational studies is that the relative influences of genes and environment are confounded. A simple demonstration of this fact is that measures of 'environmental' influence are heritable. Thus, observing a correlation between an environmental risk factor and a health outcome is not necessarily evidence for environmental influence on the health outcome. Similarly, in observational studies of parent-child behavioural transmission, for example, it is impossible to know if the transmission is due to genetic or environmental influences, due to the problem of passive gene–environment correlation. The simple observation that the children of parents who use drugs are more likely to use drugs as adults does not indicate why the children are more likely to use drugs when they grow up. It could be because the children are modelling their parents' behaviour. Equally plausible, it could be that the children inherited drug-use-predisposing genes from their parent, which put them at increased risk for drug use as adults regardless of their parents' behaviour. Adoption studies, which parse the relative effects of rearing environment and genetic inheritance, find a small to negligible effect of rearing environment on smoking, alcohol, and marijuana use in adopted children, but a larger effect of rearing environment on harder drug use.
Other behavioural genetic designs include discordant twin studies, children of twins designs, and Mendelian randomization.
General findings
There are many broad conclusions to be drawn from behavioural genetic research about the nature and origins of behaviour. Three major conclusions include:
all behavioural traits and disorders are influenced by genes
environmental influences tend to make members of the same family more different, rather than more similar
the influence of genes tends to increase in relative importance as individuals age.
Genetic influences on behaviour are pervasive
It is clear from multiple lines of evidence that all researched behavioural traits and disorders are influenced by genes; that is, they are heritable. The single largest source of evidence comes from twin studies, where it is routinely observed that monozygotic (identical) twins are more similar to one another than are same-sex dizygotic (fraternal) twins.
The conclusion that genetic influences are pervasive has also been observed in research designs that do not depend on the assumptions of the twin method. Adoption studies show that adoptees are routinely more similar to their biological relatives than their adoptive relatives for a wide variety of traits and disorders. In the Minnesota Study of Twins Reared Apart, monozygotic twins separated shortly after birth were reunited in adulthood. These adopted, reared-apart twins were as similar to one another as were twins reared together on a wide range of measures including general cognitive ability, personality, religious attitudes, and vocational interests, among others. Approaches using genome-wide genotyping have allowed researchers to measure genetic relatedness between individuals and estimate heritability based on millions of genetic variants. Methods exist to test whether the extent of genetic similarity (aka, relatedness) between nominally unrelated individuals (individuals who are not close or even distant relatives) is associated with phenotypic similarity. Such methods do not rely on the same assumptions as twin or adoption studies, and routinely find evidence for heritability of behavioural traits and disorders.
Nature of environmental influence
Just as all researched human behavioural phenotypes are influenced by genes (i.e., are heritable), all such phenotypes are also influenced by the environment. The basic fact that monozygotic twins are genetically identical but are never perfectly concordant for psychiatric disorder or perfectly correlated for behavioural traits, indicates that the environment shapes human behaviour.
The nature of this environmental influence, however, is such that it tends to make individuals in the same family more different from one another, not more similar to one another. That is, estimates of shared environmental effects in human studies are small, negligible, or zero for the vast majority of behavioural traits and psychiatric disorders, whereas estimates of non-shared environmental effects are moderate to large. From twin studies, c² is typically estimated at 0 because the correlation between monozygotic twins is at least twice the correlation for dizygotic twins. When using the Falconer variance decomposition, this difference between monozygotic and dizygotic twin similarity results in an estimated c² of zero (or below). The Falconer decomposition is simplistic. It removes the possible influence of dominance and epistatic effects which, if present, will tend to make monozygotic twins more similar than dizygotic twins and mask the influence of shared environmental effects. This is a limitation of the twin design for estimating c². However, the general conclusion that shared environmental effects are negligible does not rest on twin studies alone. Adoption research also fails to find large c² components; that is, adoptive parents and their adopted children tend to show much less resemblance to one another than the adopted child and his or her non-rearing biological parent. In studies of adoptive families with at least one biological child and one adopted child, the sibling resemblance also tends to be nearly zero for most traits that have been studied.
Personality research provides an example: twin and adoption studies converge on the conclusion of zero to small influences of shared environment on broad personality traits measured by the Multidimensional Personality Questionnaire, including positive emotionality, negative emotionality, and constraint.
Given the conclusion that all researched behavioural traits and psychiatric disorders are heritable, biological siblings will always tend to be more similar to one another than will adopted siblings. However, for some traits, especially when measured during adolescence, adopted siblings do show some significant similarity (e.g., correlations of .20) to one another. Traits that have been demonstrated to have significant shared environmental influences include internalizing and externalizing psychopathology, substance use and dependence, and intelligence.
Nature of genetic influence
Genetic effects on human behavioural outcomes can be described in multiple ways. One way to describe the effect is in terms of how much variance in the behaviour can be accounted for by alleles in the genetic variant, otherwise known as the coefficient of determination or R². An intuitive way to think about R² is that it describes the extent to which the genetic variant makes individuals, who harbour different alleles, different from one another on the behavioural outcome. A complementary way to describe effects of individual genetic variants is in how much change one expects on the behavioural outcome given a change in the number of risk alleles an individual harbours, often denoted by the Greek letter β (denoting the slope in a regression equation), or, in the case of binary disease outcomes, by the odds ratio of disease given allele status. Note the difference: R² describes the population-level effect of alleles within a genetic variant; β or the odds ratio describes the effect of having a risk allele on the individual who harbours it, relative to an individual who does not harbour a risk allele.
When described on the R² metric, the effects of individual genetic variants on complex human behavioural traits and disorders are vanishingly small, with each variant accounting for only a tiny fraction of variation in the phenotype. This fact has been discovered primarily through genome-wide association studies of complex behavioural phenotypes, including results on substance use, personality, fertility, schizophrenia, depression, and endophenotypes including brain structure and function. There are a small handful of replicated and robustly studied exceptions to this rule, including the effect of APOE on Alzheimer's disease, CHRNA5 on smoking behaviour, and ALDH2 (in individuals of East Asian ancestry) on alcohol use.
On the other hand, when assessing effects according to the β metric, there are a large number of genetic variants that have very large effects on complex behavioural phenotypes. The risk alleles within such variants are exceedingly rare, such that their large behavioural effects impact only a small number of individuals. Thus, when assessed at a population level using the R² metric, they account for only a small amount of the differences in risk between individuals in the population. Examples include variants within APP that result in familial forms of severe early onset Alzheimer's disease but affect only relatively few individuals. Compare this to risk alleles within APOE, which pose much smaller risk compared to APP, but are far more common and therefore affect a much greater proportion of the population.
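The contrast between per-allele effect size and population variance explained can be made concrete with a small calculation: under a simple additive model, a biallelic variant with allele frequency p and per-allele effect β on a trait with variance Var(Y) explains roughly 2p(1-p)β²/Var(Y) of that variance. The numbers in the sketch below are hypothetical and chosen only to show how a rare allele with a large β can still have a tiny population-level R².

```python
# Illustrative contrast between per-allele effect size (beta) and the share of
# trait variance explained (R^2) under a simple additive model.
# Allele frequencies and effect sizes are hypothetical.

def variance_explained(p, beta, var_y=1.0):
    """R^2 for a biallelic variant: 2p(1-p) * beta^2 / Var(Y)."""
    return 2 * p * (1 - p) * beta**2 / var_y

common_small = variance_explained(p=0.40, beta=0.05)   # common allele, small effect
rare_large   = variance_explained(p=0.001, beta=1.00)  # rare allele, large effect

print(f"common variant, small beta : R^2 = {common_small:.4f}")  # ~0.0012
print(f"rare variant, large beta   : R^2 = {rare_large:.4f}")    # ~0.0020
```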
Finally, there are classical behavioural disorders that are genetically simple in their etiology, such as Huntington's disease. Huntington's is caused by a single autosomal dominant variant in the HTT gene, which is the only variant that accounts for any differences among individuals in their risk for developing the disease, assuming they live long enough. In the case of genetically simple and rare diseases such as Huntington's, the variant's β and its R² are simultaneously large.
Additional general findings
In response to general concerns about the replicability of psychological research, behavioural geneticists Robert Plomin, John C. DeFries, Valerie Knopik, and Jenae Neiderhiser published a review of the ten most well-replicated findings from behavioural genetics research. The ten findings were:
"All psychological traits show significant and substantial genetic influence."
"No behavioural traits are 100% heritable."
"Heritability is caused by many genes of small effect."
"Phenotypic correlations between psychological traits show significant and substantial genetic mediation."
"The heritability of intelligence increases throughout development."
"Age-to-age stability is mainly due to genetics."
"Most measures of the 'environment' show significant genetic influence."
"Most associations between environmental measures and psychological traits are significantly mediated genetically."
"Most environmental effects are not shared by children growing up in the same family."
"Abnormal is normal."
Criticisms and controversies
Behavioural genetic research and findings have at times been controversial. Some of this controversy has arisen because behavioural genetic findings can challenge societal beliefs about the nature of human behaviour and abilities. Major areas of controversy have included genetic research on topics such as racial differences, intelligence, violence, and human sexuality. Other controversies have arisen due to misunderstandings of behavioural genetic research, whether by the lay public or the researchers themselves. For example, the notion of heritability is easily misunderstood to imply causality, or that some behaviour or condition is determined by one's genetic endowment. When behavioural genetics researchers say that a behaviour is X% heritable, that does not mean that genetics causes, determines, or fixes up to X% of the behaviour. Instead, heritability is a statement about genetic differences correlated with trait differences on the population level.
Historically, perhaps the most controversial subject has been on race and genetics. Race is not a scientifically exact term, and its interpretation can depend on one's culture and country of origin. Instead, geneticists use concepts such as ancestry, which is more rigorously defined. For example, a so-called "Black" race may include all individuals of relatively recent African descent ("recent" because all humans are descended from African ancestors). However, there is more genetic diversity in Africa than the rest of the world combined, so speaking of a "Black" race is without a precise genetic meaning.
Qualitative research has fostered arguments that behavioural genetics is an ungovernable field without scientific norms or consensus, which generates controversy. The argument continues that this state of affairs has led to controversies including race, intelligence, instances where variation within a single gene was found to very strongly influence a controversial phenotype (e.g., the "gay gene" controversy), and others. This argument further states that because of the persistence of controversy in behaviour genetics and the failure of disputes to be resolved, behaviour genetics does not conform to the standards of good science.
The scientific assumptions on which parts of behavioural genetic research are based have also been criticized as flawed. Genome-wide association studies are often implemented with simplifying statistical assumptions, such as additivity, which may be statistically robust but unrealistic for some behaviours. Critics further contend that, in humans, behaviour genetics represents a misguided form of genetic reductionism based on inaccurate interpretations of statistical analyses. Studies comparing monozygotic (MZ) and dizygotic (DZ) twins assume that environmental influences will be the same in both types of twins, but this assumption may also be unrealistic. MZ twins may be treated more alike than DZ twins, which itself may be an example of evocative gene–environment correlation, suggesting that one's genes influence their treatment by others. It is also not possible in twin studies to eliminate effects of the shared womb environment, although studies comparing twins who experience monochorionic and dichorionic environments in utero do exist and indicate limited impact. Studies of twins separated in early life include children who were separated not at birth but part way through childhood. The effect of the early rearing environment can therefore be evaluated to some extent in such a study, by comparing twin similarity for twins separated early with that for twins separated later.
See also
Behavior Genetics
Behavior Genetics Association
Behavioural neurogenetics
Biocultural evolution
Evolutionary psychology
Genes, Brain and Behavior
Genome-wide association study
International Behavioural and Neural Genetics Society
International Society of Psychiatric Genetics
Journal of Neurogenetics
Nature versus nurture
Personality psychology
Psychiatric genetics
Psychiatric Genetics
Quantitative genetics
References
Further reading
External links | 0.771017 | 0.990566 | 0.763744 |
South African environmental law
South African environmental law describes the legal rules in South Africa relating to the social, economic, philosophical and jurisprudential issues raised by attempts to protect and conserve the environment in South Africa. South African environmental law encompasses natural resource conservation and utilization, as well as land-use planning and development. Issues of enforcement are also considered, together with the international dimension, which has shaped much of the direction of environmental law in South Africa. The role of the country's Constitution, crucial to any understanding of the application of environmental law, also is examined. The National Environmental Management Act (NEMA) provides the underlying framework for environmental law.
The concept of the "environment"
The National Environmental Management Act (NEMA) defines "environment" as the surroundings within which humans exist. These are made up of:
the land, the water and the atmosphere of the earth;
micro-organisms, plant and animal life;
any part or combination of the first two items on this list, and the interrelationships among and between them; and
the physical, chemical, aesthetic and cultural properties and conditions of the foregoing that influence human health and well-being.
In addition, the Environment Conservation Act defines the environment as "the aggregate of surrounding objects, conditions and influences that influence the life and habits of man or any other organism or collection of organisms."
Scope of environmental law
Prof Jan Glazewski of the University of Cape Town takes the view that environmental law encompasses the following three "distinct but interrelated areas of general concern." They are:
land-use planning and development;
resource conservation and utilisation; and
waste management and pollution control.
Legal norms and standards
"Not every legal norm relating to the environment," observes Rabie, "is regarded as constituting environmental law. Environmental law presupposes that the norm in question is aimed at or is used for environmental conservation."
"Environmental conservation" describes the conservation of natural resources and control of environmental pollution. This is done through a process known as "environmental management." Environmental-law norms relate to the management of the environment.
Emerging international norms and concepts
A few of the emerging international norms and concepts in environmental law are noted below, together in some cases with a discussion of their application in South Africa.
Sustainable development
Sustainable development seeks to counter the idea that, in moving away from traditional sources of energy, civilisation would be forced to sacrifice growth, innovation, and progress. The 1983 World Commission on Environment and Development, convened by the UN General Assembly, provided the most-cited definition of the concept: "development that meets the needs of the present without compromising the ability of future generations to meet their own needs." This aspiration contains within it two key concepts:
"the concept of needs, in particular the essential needs of the world's poor, to which overriding priority must be given;" and
"the idea of limitations imposed by the state of technology and social organisation on the environment's ability to meet present and future needs."
The concept encompasses more than merely the environment, so for present purposes the focus should be on environmental sustainability: the goal of utilising the environment in a way which both meets human needs and ensures the environment's indefinite preservation.
NEMA defines "sustainable development" as "the integration of social, economic and environmental factors into planning, implementation and decision-making so as to ensure that development serves present and future generations." NEMA provides further that "sustainable development requires the consideration of all relevant factors including:
"that the disturbance of ecosystems and loss of biological diversity are avoided, or, where they cannot be altogether avoided, are minimised and remedied;
"that pollution and degradation of the environment are avoided, or, where they cannot be altogether avoided, are minimised and remedied;
"that the disturbance of landscapes and sites that constitute the nation's cultural heritage is avoided, or where it cannot be altogether avoided, is minimised and remedied;
"that waste is avoided, or where it cannot be altogether avoided, minimised and re-used or recycled where possible and otherwise disposed of in a responsible manner;
"that the use and exploitation of non-renewable natural resources is responsible and equitable, and takes into account the consequences of the depletion of the resource;
"that the development, use and exploitation of renewable resources and the ecosystems of which they are part do not exceed the level beyond which their integrity is jeopardised;
"that a risk-averse and cautious approach is applied, which takes into account the limits of current knowledge about the consequences of decisions and actions; and
"that negative impacts on the environment and on people's environmental rights be anticipated and prevented, and where they cannot be altogether prevented, are minimised and remedied."
Intergenerational equity
Intergenerational equity is, as the name implies, the concept of equality between the generations—children, youth, adults and seniors. In discussions of climate change especially, people are often exhorted to think of the legacy they are leaving their children and grandchildren.
Environmental justice
NEMA provides that "environmental justice must be pursued so that adverse environmental impacts shall not be distributed in such a manner as to unfairly discriminate against any person, particularly vulnerable and disadvantaged persons."
Environmental rights
This term does not imply that the "environment" has rights in South African law, but rather refers to the right of people to an environment that is safeguarded, in fulfilment of the government's public trust duties, for current and future generations.
Section 24 of the South African Constitution states that "everyone has the right:
"to an environment that is not harmful to their health or well-being; and
"to have the environment protected, for the benefit of present and future generations, through reasonable legislative and other measures that
"prevent pollution and ecological degradation;
"promote conservation; and
"secure ecologically sustainable development and use of natural resources while promoting justifiable economic and social development."
Public trust doctrine
"The environment," according to NEMA, "is held in public trust for the people. The beneficial use of environmental resources must serve the public interest and the environment must be protected as the people's common heritage."
Precautionary principle
Principle 15 of the Rio Declaration provides as follows:
Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
As noted above, NEMA requires "that a risk-averse and cautious approach [be] applied, which takes into account the limits of current knowledge about the consequences of decisions and actions."
Preventive principle
Underlying this principle is the idea that merely reacting to crises when they happen is far more expensive (and not only in the pecuniary sense) than forestalling or preventing them before they happen. This is the fundamental notion behind laws regulating the generation, transportation, treatment, storage and disposal of hazardous waste, and laws regulating the use of pesticides. It is also the foundation of the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal (1989), which sought to minimise the production of hazardous waste and to combat illegal dumping. The preventive principle was an important element, too, of the European Community's Third Environmental Action Programme, adopted in 1983.
In South Africa, NEMA requires "that pollution and degradation of the environment are avoided, or, where they cannot be altogether avoided, are minimised and remedied."
Polluter-pays principle
This principle, widely understood to be commonsensical and intuitively fair, is analogous to the slogan "you break, you pay." It makes the party responsible for producing the pollution responsible for paying for the damage done to the natural environment. It has attained the status of a regional custom, because of the strong support it has received in most OECD and EC countries. In terms of Principle 16 of the Rio Declaration,
National authorities should endeavour to promote the internalisation of environmental costs and the use of economic instruments, taking into account the approach that the polluter should, in principle, bear the costs of pollution, with due regard to the public interests, and without distorting international trade and investment.
NEMA echoes this:
The costs of remedying pollution, environmental degradation and consequent adverse health effects and of preventing, controlling or minimising further pollution, environmental damage or adverse health effects must be paid for by those responsible for harming the environment.
Local-level governance
Questions about how environmental decisions are made, and about who makes them, are questions of environmental governance. This lies at the heart of environmental law and policy. Of especial relevance in the context of South African law are Schedules 4 and 5 of the Constitution.
Common but differentiated responsibility
The principle of common but differentiated responsibility (CBDR) is an important principle of international environmental law, explicitly formulated in the Principle 7 of the Rio Declaration:
In view of the different contributions to global environmental degradation, States have common but differentiated responsibilities. The developed countries acknowledge the responsibility that they bear in the international pursuit of sustainable development in view of the pressures their societies place on the global environment and of the technologies and financial resources they command.
Legislation regulating environmental management
Legislation, from an environmental point of view, may be divided into six categories:
legislation aimed exclusively at environmental management, like the National Parks Act and the Atmospheric Pollution Prevention Act;
legislation calculated to promote an environmental object, like the Mountain Catchment Areas Act;
legislation not specifically directed at environmental management, but including individual provisions aimed at environmental management, like the Nuclear Energy Act, the Sea-Shore Act and the National Roads Act;
legislation not aimed at environmental management, but including provisions that are directly or potentially of environmental significance, like land-use planning legislation and the Customs and Excise Act;
legislation not aimed at environmental management, but rather at environmental exploitation (like the old mining legislation and legislation promoting afforestation and fishing, and the development of townships); and, finally,
legislation with no environmental relevance.
Sources
There are a number of diverse sources of South African environmental law:
International law – Both international customary law and international conventions function as sources of South African environmental law.
Common law – A variety of common-law rules, derived from neighbour law, for example, and the law of nuisance, are of significance as sources of environmental law. The dictum sic utere tuo ut alienum non laedas ("use your own so as to cause no harm") furnishes one instance.
Constitution of South Africa – The Constitution now informs and underlies the entire legal system in South Africa. Of prime importance is the Bill of Rights, with its explicit provision for environmental rights. The Constitution provides a framework for the administration of environmental laws.
Statute law – Environmental law is also derived, fairly obviously, from national and provincial legislation, and from local by-laws.
Customary law – Custom functions to some degree as a source of environmental law.
Jurisprudential grounding
There are, broadly speaking, two bases in jurisprudence for protection of the environment:
the biocentric (or life-centred) approach; and
the anthropocentric (or human-centred) approach.
The anthropocentric approach finds some support in the common law of South Africa. There is, again, the Roman-law maxim sic utere tuo ut alienum non laedas ("you may use your property only in such a way as will not harm another"), for example. The Constitution, insofar as it deals with the environment, also embraces an anthropocentric philosophy, providing for basic environmental rights.
NEMA additionally provides that "environmental management must place people and their needs at the forefront of its concern, and serve their physical, psychological, developmental, cultural and social interests equitably."
History
Pre-1994
First three centuries
For the first three centuries of South African law, the most prominent environmental issues were
the control of drinking water;
pollution; and
the conservation of wild animals. This last became increasingly important in the late nineteenth and early twentieth centuries, when the first conservation areas were established.
1940–1969
In the three decades from 1940 to 1969, environmental concern intensified. Several important pieces of legislation were passed, including the Water Act and the Atmospheric Pollution Prevention Act.
The legislature, however, only responded to environmental concerns on an ad hoc basis, leading to a piecemeal effort.
1970–1994
The 1970s heralded a worldwide environmental watershed, precipitated by events such as the publication of Rachel Carson's Silent Spring in 1962, the Torrey Canyon disaster of 1967, and Woodstock in 1969.
In South Africa, a variety of new laws were passed, and several novel concepts were introduced. Several important Acts were also updated, including the Environment Conservation Act.
Post-1994
In 1996, section 24 of the Constitution enshrined basic environmental rights. A strong theme in the current legal order is that of equitable access to resources.
In the late 1990s, South Africa ratified several international conventions relating to the environment. It also enacted the National Environmental Management Act (NEMA), which supplemented but did not entirely repeal the provisions of the Environment Conservation Act, some of which are still in force.
Other important recent legislation includes
the National Water Act;
the National Forests Act;
the National Environmental Management: Protected Areas Act;
the National Environmental Management: Biodiversity Act; and
the Marine Living Resources Act.
International environmental law
International law is made up of:
international conventions (or treaties);
international custom, as evidence of a general practice accepted as law;
general principles of law as recognised by civilised nations; and
judicial decisions and the writings of the most highly qualified publicists.
Most of this is applicable to South African environmental law, and is binding in South Africa.
Impact on South African law
International environmental law has had a considerable influence on South African environmental law. The former is usually incorporated into the latter in one of three ways:
by incorporation of the provisions of the treaty into an Act of Parliament;
by including the treaty as a schedule to a statute; and
by proclamation by the executive in the Government Gazette, under the authorisation of a particular Act, giving the executive the power to bring the treaty into effect.
Constitution
See Chapter 14 of the Constitution and Chapter 6 of NEMA.
Constitution
s 24
Section 24 of the Constitution explicitly grants environmental rights to "everyone."
s 24(a)
Section 24(a) grants everyone the right to an environment that is not harmful to their health. This goes beyond the right of access to healthcare, established in section 27 of the Constitution, as a particular environment may be damaging to one's health and yet not infringe on one's right of access to healthcare. In Verstappen v Port Edward Town Board, where the plaintiff sought an interdict on the ground that she was suffering health problems because the local council was dumping waste, without the requisite permit, on the adjoining property, she might have invoked section 24, but did not.
The right to an environment that is not harmful to one's "well-being," the second aspect of subsection 24(a), "elevates the right beyond health but to a not readily determinable realm," writes Glazewski. He takes the word "well-being" to imply "that the environment has not only an instrumental value [...], but that in addition, aspects of the environment [...] are deserving of conservation for their intrinsic value."
The ambit of "well-being" is potentially limitless, but is clearly relevant to pollution. It was invoked in Hichange Investments v Cape Produce Company, where Leach J opined,
One should not be obliged to work in an environment of stench and, in my view, to be in an environment contaminated by H2S [as it was in casu] is adverse to one's 'well-being.'
It may be argued that what constitutes "well-being" is relative to the nature and personality of the person seeking to assert this right, and that it will be decided on the facts of the particular case. Leach J concurs: "The assessment of what is significant involves, in my view, a considerable measure of subjective import."
s 24(b)
Section 24(b) states that the government must use "reasonable legislative and other measures" to protect the environment.
Glazewski argues that "the government has clearly complied" with the constitutional injunction to take legislative measures: It has enacted "a plethora of environmental legislation and accompanying regulations since 1994."
The meaning of "reasonable [...] other measures" was considered in the context of the environmental right in BP Southern Africa v MEC for Agriculture, Conservation & Land Affairs, where the court determined that it is up to the courts to determine what measures are reasonable.
s 24(b)(i)
Section 24(b)(i) requires these measures to "prevent pollution and ecological degradation." This leads to the question: What degree of pollution should be tolerated in a developing country like South Africa? This issue was highlighted in Hichange Investments, where Leach J considered what constitutes "significant pollution." He answered that "the assessment [...] involves [...] a considerable measure of subjective import," and referred to the right to an environment that is not harmful to one's well-being.
s 24(b)(ii)
Section 24(b)(ii) requires these measures to "promote conservation." This is satisfied by "the various statutory obligations on the state contained in the vast array of environmental statutes and regulations enacted before and after 1994."
s 24(b)(iii)
Section 24(b)(iii) requires "measures that [...] secure ecologically sustainable development and use of natural resources while promoting justifiable economic and social development."
In Minister of Public Works v Kyalami Ridge Environmental Association, where the government sought to establish a transit camp for people rendered homeless as a result of severe flooding, the court found that, in effect, the government's duty to fulfil its obligations in terms of the right to housing trumped other legal claims, including the environmental concerns of the respondents.
The notion that sustainable development is an inherent factor to be considered in environmental decision-making was specifically endorsed in BP Southern Africa v MEC for Agriculture, Conservation & Land Affairs:
The concept of "sustainable development" is the fundamental building block around which environmental legal norms have been fashioned, both internationally and in South Africa, and is reflected in section 24(b)(iii) of the constitution [sic].
Pure economic principles will no longer determine in an unbridled fashion whether a development is acceptable. Development, which may be regarded as economically and financially sound, will in future be balanced by its environmental impact, taking coherent cognisance of the principle of intergenerational equity and sustainable use of resources in order to arrive at an integrated management of the environment, sustainable development and socio-economic concerns.
s 25
Section 25 of the Constitution guarantees property rights, which Glazewski states is fundamentally linked to environmental concerns.
Property rights are not absolute; owners may not use their property as they please.
A central question is the extent to which private property rights may be limited in the public environmental interest, and when compensation is triggered if they are so limited. This tension has always been present in South African law; it is now more acute in view of the relatively recent recognition of environmental rights. In Diepsloot Residents' & Landowners Association v Administrator, Transvaal, a landowners' association challenged the administrator's decision to settle squatters near a residential area, on the grounds that this decision constituted an unwarranted interference with its property rights. The landowners contended that these property rights included an environmental component: The settlement would pollute the water and air. Their application was dismissed, however, on numerous grounds.
In BP Southern Africa v MEC for Agriculture, Conservation & Land Affairs, the court found that:
the constitutional right to environment is on a par with the rights to freedom of trade, occupation, profession and property entrenched in sections 22 and 25 of the Constitution. In any dealings with the physical expressions of property, land and freedom to trade, the environmental rights requirements should be part and parcel of the factors to be considered without any a priori grading of the rights. It will require a balancing of rights where competing interests and norms are concerned.
s 32
Section 32 accords "everyone" the right to access information. This echoes Principle 10 of the Rio Declaration, which provides as follows:
Environmental issues are best handled with the participation of all concerned citizens at the relevant level. At the national level, each individual shall have appropriate access to information concerning the environment that is held by public authorities, including information on hazardous materials and activities in their communities, and the opportunity to participate in decision-making processes. States shall facilitate and encourage public awareness and participation by making information widely available.
s 33
The Constitution also provides that "everyone has the right to administrative action that is lawful, reasonable and procedurally fair," and that "everyone whose rights have been adversely affected by administrative action has the right to be given written reasons." It requires that national legislation, enacted "to give effect to these rights," must
"provide for the review of administrative action by a court or, where appropriate, an independent and impartial tribunal;"
"impose a duty on the state to give effect to the rights" above; and
"promote an efficient administration."
Section 1(5) of NEMA, inserted by section 1 of the National Environmental Management Amendment Act, and effective from 1 May 2009, provides that
any administrative process conducted or decision taken in terms of this Act must be conducted or taken in accordance with the Promotion of Administrative Justice Act [...] unless otherwise provided for in this Act.
In terms of the Promotion of Administrative Justice Act (PAJA), "administrative action" refers to "any decision taken, or any failure to take a decision, by
"an organ of state, when
"exercising a power in terms of the Constitution or a provincial constitution; or
"exercising a public power or performing a public function in terms of any legislation; or
"a natural or juristic person, other than an organ of state, when exercising a public power or performing a public function in terms of an empowering provision,
"which adversely affects the rights of any person and which has a direct, external legal effect." The provision goes on to list those actions which are not included in the definition of "administrative action."
An "administrator" is defined in PAJA is "an organ of state or any natural or juristic person taking administrative action."
A "decision," meanwhile, means "any decision of an administrative nature made, proposed to be made, or required to be made, as the case may be, under an empowering provision, including a decision relating to
"making, suspending, revoking or refusing to make an order, award or determination;
"giving, suspending, revoking or refusing to give a certificate, direction, approval, consent or permission;
"issuing, suspending, revoking or refusing to issue a licence, authority or other instrument;
"imposing a condition or restriction;
"making a declaration, demand or requirement;
"retaining, or refusing to deliver up, an article; or
"doing or refusing to do any other act or thing of an administrative nature, and a reference to a failure to take a decision must be construed accordingly."
An "empowering provision" is "a law, a rule of common law, customary law, or an agreement, instrument or other document in terms of which an administrative action was purportedly taken."
Procedural fairness
Fundamental to administrative decision-making, and to the right to just administrative action, is procedural fairness, referred to in section 33(1) of the Bill of Rights. It has been taken up by PAJA, which provides that "administrative action which materially and adversely affects the rights or legitimate expectations of any person must be procedurally fair."
Inherent in procedural fairness is the common-law audi alteram partem rule: "Hear the other side." Its application is illustrated in The Director: Mineral Development, Gauteng Region v Save the Vaal Environment, where the Director had granted a mining licence to carry out open-cast mining near the Vaal River. The respondent, an environmental NGO, which had not been permitted to make representations prior to the granting of the permit, applied successfully to the High Court for a review of the decision. The question on appeal before the Supreme Court of Appeal was whether (and, if so, at what point) interested parties wishing to oppose a mining licence application on environmental grounds were entitled to be heard by the Director. The court rejected the Director's argument that section 9 of the Minerals Act excluded the application of the audi alteram partem rule, finding that the respondent should have been granted a hearing when the licence decision was made.
Right to reasons
The importance of the right to reasons for an administrative action, whether generally or in the environmental context, was well known before the advent of section 33 of the Bill of Rights and section 5 of PAJA, where the right is now entrenched. Lawrence Baxter, in his 1984 textbook on administrative law, provided a précis of the right's importance:
In the first place, a duty to give reasons entails a duty to rationalise the decision. Reasons therefore help to structure the exercise of discretion, and the necessity of explaining why a decision is reached requires one to address one's mind to the decisional referents which ought to be taken into account. Secondly, furnishing reasons satisfies an important desire on the part of the affected individual to know why a decision was reached. This is not only fair: it is also conducive to public confidence in the administrative decision-making process. Thirdly—and probably a major reason for the reluctance to give reasons—rational criticism of a decision may only be made when the reasons for it are known. This subjects the administration to public scrutiny and it also provides an important basis for appeal or review. Finally, reasons may serve a genuine educative purpose, for example where an applicant has been refused on grounds which he is able to correct for the purpose of future applications.
Section 5 of PAJA gives effect to the constitutional imperative in section 33(2) for written reasons in the following terms:
Any person whose rights have been materially and adversely affected by administrative action and who has not been given reasons for the action may, within 90 days after the date on which that person became aware of the action or might reasonably have been expected to have become aware of the action, request that the administrator concerned furnish written reasons for the action.
The rest of section 5 sets out the procedures for obtaining reasons; it also sets out the circumstances in which reasons need not be furnished by the administrator concerned. "This section is welcome," writes Glazewski, "as it lays down for the first time the right to reasons in clear statutory terms."
The importance of the right to reasons in the environmental context is illustrated in Administrator, Transvaal and The Firs Investments (Pty) Ltd v Johannesburg City Council, concerning opposition to the then-controversial proposal to rezone a residential area for business purposes, in order to enable the establishment of what is today the Firs shopping centre in northern Johannesburg. On the question of reasons, Chief Justice Ogilvie Thompson said the following:
The Administrator would have been well advised to state the reasons for his decision [...] for just as the failure of a party to testify on a matter within his knowledge may, under certain circumstances, give rise to an inference against him, so may the failure to give reasons for the decision constitute an adverse element in assessing the conduct of the person making that decision. In particular [...] the failure to furnish reasons may—I emphasise "may" not "must"—add colour to an inference of arbitrariness.
Reasons for administrative decision-making in the environmental context were also in issue in Minister of Environmental Affairs and Tourism v Phambili Fisheries (hereafter referred to as "Phambili 1"). The first respondent, a fishing company, feeling aggrieved by the inadequacy of the fishing quota allocated to it, contended that inadequate reasons had been given regarding the historical baseline used to allocate current quotas. The court dismissed the argument, quoting an Australian decision in which it was held that
the [Australian] Judicial Review Act requires the decision-maker to explain his decision in a way which will enable a person aggrieved to say, in effect: "Even though I may not agree with it, I now understand why the decision went against me. I am now in a position to decide whether that decision has involved an unwarranted finding of fact, or an error of law, which is worth challenging." This requires that the decision-maker should set out his understanding of the relevant law, any findings of fact on which his conclusions depend (especially if those facts have been in dispute), and the reasoning processes which led him to those conclusions. He should do so in clear and unambiguous language, not in vague generalities or the formal language of legislation.
The court also quoted Cora Hoexter, a leading authority on South African administrative law, to the effect that
it is apparent that reasons are not really reasons unless they are properly informative. They must explain why action was taken or not taken; otherwise they are better described as findings or other information.
The court applied these dicta to the respondents' contention that the reasons given were no reasons at all in respect of the question regarding the historical baseline used, and dismissed the contention, holding that "a fair reading of the reasons makes it clear that the Chief Director, suitably assisted, in the exercise of his discretion, decided that an appropriate percentage for the diminution of quotas at the end of 2001 was 5%," and satisfied itself that adequate reasons had been given for the administrative decisions taken in this instance.
Legitimate expectations
Section 3 of PAJA, quoted above, applies procedural fairness not only to the "rights of persons," but also to situations where there may be "legitimate expectations."
Legitimate expectation is relevant in the environmental context: for example, in the marine fisheries domain, where legal persons, having had regular fishing quotas in the past, may now be granted a lesser quota; conversely, a historically disadvantaged person may expect to receive a quota in the new dispensation.
This issue, as well as a number of other administrative-law principles, was considered relatively recently, in just such a fisheries-allocation question, by both the Supreme Court of Appeal, in Phambili 1, and the Constitutional Court, in Bato Star Fishing v Minister of Environmental Affairs (referred to hereinafter as "Phambili 2").
Historically, however, the leading case on legitimate expectation is Administrator, Transvaal v Traub, where a group of medical doctors successfully argued that they had a legitimate expectation that their posts would be confirmed. Chief Justice Corbett, quoting with approval Lord Denning's judgment in Ridge v Baldwin, held
that an administrative body may, in a proper case, be bound to give a person who is affected by their decision an opportunity of making representations. It all depends on whether he has some right or interest, or, I would add, some legitimate expectation, of which it would not be fair to deprive him without hearing what he has to say.
Later in the judgment, the Chief Justice described the doctrine of legitimate expectation as follows:
The legitimate expectations doctrine is sometimes expressed in terms of some substantive benefit or advantage or privilege which the person concerned could reasonably expect to acquire or retain and which it would be unfair to deny such person without prior consultation or a prior hearing; and at other times in terms of a legitimate expectation to be accorded a hearing before some decision adverse to the interests of the person concerned is taken.
In Phambili 1, the respondent argued that it had a legitimate expectation that it would receive increased allocations under the quota system for hake fishing. The Supreme Court of Appeal considered the legitimate-expectation doctrine, noting with approval National Director of Public Prosecutions v Phillips, where Heher J described the doctrine in the following terms:
The law does not protect every expectation but only those which are "legitimate". The requirements for legitimacy of the expectation, include the following: (i) The representation underlying the expectation must be "clear, unambiguous and devoid of relevant qualification" [....] The requirement is a sensible one. It accords with the principle of fairness in public administration, fairness both to the administration and the subject. It protects public officials against the risk that their unwitting ambiguous statements may create legitimate expectations. It is also not unfair to those who choose to rely on such statements. It is always open to them to seek clarification before they do so, failing which they act at their peril. (ii) The expectation must be reasonable [....] (iii) The representation must have been induced by the decision-maker [....] (iv) The representation must be one which it was competent and lawful for the decision-maker to make without which the reliance cannot be legitimate.
Applying the above principles to the case in point, the court dismissed the argument that the appellants had a legitimate expectation on the ground that the various statements made by government officials regarding the allocation of fishing quotas did not amount to statements which were "clear, unambiguous and devoid of relevant qualification."
Judicial review
Promotion of Administrative Justice Act (PAJA)
The grounds of judicial review have been codified in section 6(2) of PAJA. Grounds for judicial review will exist if the administrator who took the administrative action
"was not authorised to do so by the empowering provision;
"acted under a delegation of power which was not authorised by the empowering provision; or
"was biased or reasonably suspected of bias."
Judicial review will also be possible if
"a mandatory and material procedure or condition prescribed by an empowering provision was not complied with;
"the action was procedurally unfair; or
"the action was materially influenced by an error of law."
If the action was taken
"for a reason not authorised by the empowering provision;
"for an ulterior purpose or motive;
"because irrelevant considerations were taken into account or relevant considerations were not considered;
"because of the unauthorised or unwarranted dictates of another person or body;
"in bad faith; or
"arbitrarily or capriciously,"
the courts will be entitled to review such action. They may do so, too, if "the action itself contravenes a law or is not authorised by the empowering provision," or if it "is not rationally connected to
"the purpose for which it was taken;
"the purpose of the empowering provision;
"the information before the administrator."
Finally, judicial review is possible if
"the action concerned consists of a failure to take a decision;
"the exercise of the power or the performance of the function authorised by the empowering provision, in pursuance of which the administrative action was purportedly taken, is so unreasonable that no reasonable person could have so exercised the power or performed the function; or
"the action is otherwise unconstitutional or unlawful."
Section 8(1) of PAJA provides for remedies in judicial-review proceedings in the following terms: "The court or tribunal, in proceedings for judicial review [...], may grant any order that is just and equitable, including orders
"directing the administrator
"to give reasons; or
"to act in the manner the court or tribunal requires;
"prohibiting the administrator from acting in a particular manner;
"setting aside the administrative action and
"remitting the matter for reconsideration by the administrator, with or without directions; or
"in exceptional cases
"substituting or varying the administrative action or correcting a defect resulting from the administrative action; or
"directing the administrator or any other party to the proceedings to pay compensation;
"declaring the rights of the parties in respect of any matter to which the administrative action relates;
"granting a temporary interdictor other temporary relief; or
"as to costs."
Common law
In the environmental context, litigation around the enforcement of statutory duties arises in two broad ways:
An application may be brought to compel the exercise of a statutory duty: for example, for the Minister to declare an environmental policy or to allocate fishing quotas.
A plaintiff may seek some form of relief, like compensation for harm suffered due to the failure by the government to carry out a statutory duty.
Compelling exercise of statutory duty
Regard must be had to whether the provision imposing the duty is peremptory or permissive.
A question in Van Huyssteen NO v Minister of Environmental Affairs and Tourism, concerning the erection of a steel mill at Langebaan Lagoon, was whether or not the applicant had the right to compel the respondent Minister to appoint a Board of Investigation provided for in section 15(1) of the Environment Conservation Act (ECA), and to order such appointment. It was held that, as the relevant provisions of the ECA were permissive, not directory or peremptory, there was no obligation on the Minister to appoint a Board. The applicants accordingly had no right to compel the constitution thereof.
In Wildlife Society of Southern Africa v Minister of Environmental Affairs and Tourism of the Republic of South Africa, the court held, as regards the merits of an application for a mandamus compelling the State to comply with its statutory obligations to protect the environment, that the first respondent's opposition to the application rested largely upon the fact that there was in existence a Task Group which had been established to tackle the issue. The court found, however, that the Task Group was a non-statutory, advisory body of uncertain nature and duration, whose actions had in any event fallen short of establishing that the provisions of section 39(2) of the Transkei Environmental Decree were being enforced by the first respondent. The Court held, accordingly, that the applicants were entitled to an order that the first respondent enforce the provisions of section 39(2) of the Decree, which were, as "decree" implies, peremptory rather than permissive.
Relief
An example of the second scenario is Verstappen v Port Edward Town Board, where the plaintiff sought an interdict on the ground that she was suffering health problems, as the local authority was dumping waste on the adjoining property without the requisite permit. The case (heard prior to the advent of the interim Constitution) failed, as the applicant had not shown that she was likely to suffer "special damage."
s 38
The use of the word "everyone" in the environmental right raises the issue of locus standi, traditionally a serious obstacle to individual litigants or NGOs concerned with the implementation and enforcement of environmental laws, or to those wishing to assert environmental rights or defend environmental actions. South African law, in common with many other legal systems, formerly required that, to have legal standing to challenge administrative lawfulness, an individual had to show some degree of personal interest in the administrative action under challenge.
Section 38 of the Constitution dramatically changed this. The following persons, among others, may approach a competent court:
anyone acting as a member of, or in the interest of, a group or class of persons;
anyone acting in the public interest; and
an association acting in the interest of its members.
Most importantly, litigation may now also be brought in the public interest.
Administration
Co-operative governance
Government in South Africa, as in most modern states, is divided broadly into three branches:
the legislative;
the executive; and
the judicial.
The Constitution sets the framework for these three branches.
Of particular practical importance for the administration of environmental laws are the respective powers of the national, provincial and local levels of government. "Co-operative governance" refers to and regulates the interrelationship between these levels.
Chapter 3 of the Constitution, entitled "Co-operative government," reflects a "fundamental departure from the past," in that the three levels of government are "no longer regarded as hierarchical tiers with the national government at the helm," but rather, in the words of the Constitution, as "distinctive, interdependent and interrelated."
Co-operative relationships between all spheres of government play a central role in the development of an integrated environmental management framework for South Africa.
Section 41 of the Constitution sets out the principles of co-operative governance and intergovernmental relations. Particularly important are subsections 41(1)(g)-(h), which provide that all levels of government, and all organs of state, must
"exercise their powers and perform their functions in a manner that does not encroach on the geographical, functional or institutional integrity of government in another sphere; and
"co-operate with one another in mutual trust and good faith by
"fostering friendly relations;
"assisting and supporting one another;
"informing one another of, and consulting one another on, matters of common interest;
"co-ordinating their actions and legislation with one another;
"adhering to agreed procedures; and
"avoiding legal proceedings against one another."
Section 41(3) provides that "an organ of state involved in an intergovernmental dispute must make every reasonable effort to settle the dispute by means of mechanisms and procedures provided for that purpose, and must exhaust all other remedies before it approaches a court to resolve the dispute."
Section 43 of the Constitution also states "the legislative authority
"of the national sphere of government is vested in Parliament, as set out in section 44;
"of the provincial sphere of government is vested in the provincial legislatures, as set out in section 104; and
"of the local sphere of government is vested in the Municipal Councils, as set out in section 156."
All three levels of government, write Paterson and Kotze,
have a key role to play in environmental governance and, accordingly, environmental compliance and enforcement. However, this role has to a degree been undermined by significant overlap in their respective competences, which, during the course of the past decade, has resulted in legislative and institutional fragmentation, both within and between the different spheres of governance. This fragmentation has in turn led to functional duplication and confusion, an undesirable reality in a country with significant resource constraints.
Co-operative governance is accordingly regarded as "a necessary precursor" for the development of an effective environmental compliance and enforcement effort in South Africa.
National authority
National executive authority is vested in the President who, together with his Cabinet, must implement national legislation, develop and implement national policy, co-ordinate the functions of state departments and administrations, prepare and initiate legislation, and perform any other executive function provided for in law. The Cabinet consists of the President, a Deputy President and the Ministers. The members of the Cabinet must, inter alia, act in accordance with the Constitution and provide Parliament with full and regular reports concerning matters under their control.
In the environmental context, the Minister of Environmental Affairs and Tourism, with his Department of Environmental Affairs and Tourism, constitutes the leading national environmental authority. There are a number of other ministries and departments which play a role in environmental governance. They include Agriculture, Foreign Affairs, Health, Housing, Justice and Constitutional Development, Land Affairs, Provincial and Local Government, Science and Technology, Transport, Minerals and Energy, Trade and Industry, and Water Affairs and Forestry. The fact that environmental matters fall within the jurisdiction of so many different ministries and departments "poses an immense challenge for developing a coherent and effective environmental regime in South Africa."
The national government's legislative authority is similarly prescribed in the Constitution. It has exclusive competence to make laws governing the following environmental matters:
national parks;
national botanical gardens;
marine resources;
fresh-water resources; and
mining.
Furthermore, it has concurrent competence with provincial government to make laws regulating the following environmental matters:
indigenous forests;
agriculture;
disaster management;
cultural matters;
environment;
health services;
housing;
nature conservation;
pollution control;
regional planning and development;
soil conservation;
trade; and
urban and rural development.
The national government has exercised this legislative authority to prescribe an extensive array of new environmental laws, such as
NEMA;
the National Environmental Management: Biodiversity Act;
the National Environmental Management: Air Quality Act;
the National Environmental Management: Protected Areas Act;
the National Water Act; and
the Mineral and Petroleum Resources Development Act.
These laws, which apply across the entire territory of South Africa, and are generally administered by several national departments, contain a myriad of provisions of relevance to environmental compliance and enforcement.
National legislative and executive competence is provided for in section 44 of the Constitution, which states that Parliament may pass legislation on any matter, including a matter referred to in Schedule 4, but excluding a matter in Schedule 5 unless it is a matter in which it is specifically authorised to intervene. Among the reasons for which it may intervene within a functional area listed in Schedule 5 are the following, which are relevant to environmental concerns:
"to maintain essential national standards;
"to establish minimum standards required for the rendering of services; or
"to prevent unreasonable action taken by a province which is prejudicial to the interests of another province or to the country as a whole."
It ought to be noted, however, that this may only be done in accordance with the procedure set out in section 76(1), which provides for ordinary bills affecting provinces, and stipulates that "the Bill must be referred to the National Council of Provinces." It provides for certain procedures, depending on whether the bill is accepted, amended or rejected by the NCOP.
Parliament therefore enjoys "residual competence," in that it has exclusive legislative competence in respect of all matters which are not expressly assigned to the concurrent or exclusive competence of provincial legislatures. If, in other words, the matter appears in neither Schedule 4 nor Schedule 5, Parliament has exclusive competence to deal with it.
Apart from section 44, intervention is also possible under the national override section, which deals with conflicts between national and provincial legislation falling within the functional areas of concurrent competences listed in Schedule 4. It provides that national legislation prevails over provincial legislation if the former meets certain stipulated conditions.
National legislation which applies uniformly across the nation will prevail over provincial legislation if it is necessary for "the protection of the environment."
Similarly, if the national legislation deals with a matter that requires uniformity if it is to be dealt with effectively, it will prevail over provincial legislation if it establishes uniform norms and standards, frameworks or national policy. Pollution control is a pertinent example.
If standards are not uniform throughout the country, individual provinces could pass, for example, less stringent standards for their individual provinces in order to attract industrial investment. This, however, could be detrimental to the national public environmental interest. "Uniform standards," notes Glazewski, "would inhibit a situation where polluting industries go 'polluter-haven shopping' for the provinces with the least stringent environmental standards."
Provincial authority
South Africa has nine provinces, each with its own provincial government, which possesses legislative and executive authority. The legislative authority of a province vests in its provincial legislature, which section 104 of the Constitution states may pass legislation not only in respect of the functional areas listed in Schedule 4 and 5, but also in respect of "any matter outside those functional areas, and that is expressly assigned to the province by national legislation."
Furthermore, "provincial legislation with regard to a matter that is reasonably necessary for, or incidental to, the effective exercise of a power concerning any matter listed in Schedule 4, is for all purposes legislation with regard to a matter listed in Schedule 4."
Provincial legislatures must provide for mechanisms to ensure that all provincial executive organs of state are accountable to it, and must maintain oversight of the exercise of provincial executive authority in the province, including the implementation of legislation.
The executive power in the provincial sphere vests in the premier of the province, who exercises this authority together with the Members of the Executive Council (MECs).
Executive powers accorded to the provincial executives include
implementing provincial legislation in the province;
implementing all national legislation within the functional areas listed in Schedules 4 and 5 of the Constitution;
developing and implementing provincial policy;
co-ordinating the functions of the provincial administration and its departments; and
preparing and initiating provincial legislation.
Possible conflicts which arise between national and provincial legislation are regulated in sections 146 to 150 of the Constitution.
The Constitution also enables relevant provincial executive authorities to intervene in local governance, where a municipality refrains from or fails to fulfil an executive obligation in terms of legislation, by taking any appropriate steps to ensure fulfilment of that obligation: "A typical example would be where provincial legislation compels all local governments within the province to draft a cultural heritage resources management plan, and a particular municipality fails to do so."
In most instances, MECs are responsible for the various provincial departments, certain of which undertake environmental functions. The manner in which these functions are grouped per department varies between the provinces:
In Gauteng, for example, the Department of Agriculture, Conservation and Environment administers environmental matters.
In the Western Cape, on the other hand, the Department of Environmental Affairs and Development Planning is the provincial environmental authority.
These provincial authorities administer
various old provincial conservation and land-use planning ordinances;
new provincial environmental Acts; and
environmental functions delegated to them by the national executive.
They have "a key role to play," therefore, in environmental compliance and enforcement.
Local authority
Within the sphere of local government, South Africa has 284 municipalities. "As the sphere of government closest to communities," write Kotze and Paterson, "local government has an essential role to play in promoting not only socio-economic development and the provision of basic services, but also environmental compliance and enforcement" (33).
The Constitution prescribes the objectives, composition, executive powers and legislative functions of local governments. They generally have the right to govern, at their own initiative, the local affairs relevant to their community, subject to national and provincial legislation. National and provincial governments may not, however, compromise or impede a municipality's ability or right to exercise its powers or to perform its functions.
Some of the environmentally relevant areas over which local governments exercise legislative competence include
building regulations;
electricity and gas reticulation;
municipal planning;
specified water and sanitation services;
cleansing;
control of public nuisances;
municipal roads;
noise pollution;
public places;
refuse removal;
refuse dumps; and
solid waste disposal.
The Constitution goes on to set out the areas of local authority competence, stipulating that a municipality has executive authority and the right to administer
local government matters listed in the respective Part Bs of Schedules 4 and 5, so that "air pollution," for example, being a Part-B item in Schedule 4, may be administered by local authorities; and
"any other matter assigned to it by national or provincial legislation." In this regard, a further subsection stipulates that national and provincial government must assign, by agreement, the administration of any "Part A" matter listed in Schedules 4 and 5, if the matter would be more effectively administered locally and the municipality has the capacity to administer it.
Although section 156 of the Constitution refers to municipalities' "executive authority," and the "right to administer" certain matters, it specifically stipulates that "a municipality may make and administer by-laws for the effective administration of the matters which it has the right to administer."
Therefore, although the section does not refer specifically to a municipality's legislative competence, it may legislate for Part B matters of Schedules 4 and 5.
The Constitution requires provincial government to establish municipalities in a manner consistent with legislation prescribed in the Constitution, and to monitor, support and promote the development of local government capacity. National legislation, in the form of the Local Government: Municipal Structures Act 117 of 1998, which deals with local authority competences, has been passed.
The Constitution establishes three categories of municipalities:
A "Category A" municipality has exclusive municipal executive and legislative authority in area.
A "Category B" municipality shares municipal executive and legislative authority in its area with a "Category C" municipality.
A "Category C" municipality has municipal executive and legislative authority in an area that includes more than one municipality.
The Local Government: Municipal Structures Act elaborates on this categorisation, providing for "the establishment of municipalities in accordance with the requirements relating to categories and types of municipality," and seeks "to establish criteria for determining the category of municipality in an area and related matters." The Act includes chapters on
categories and types of municipality;
the establishment of municipalities; and
the functions and powers of municipalities.
The Act was assented to in December 1998, and came into force in February 1999. In Cape Metropolitan Council v Minister for Provincial Affairs and Constitutional Development, the applicant challenged the constitutionality of the Act without success.
Schedules 4 and 5
Schedules 4 and 5 of the Constitution include various environmental matters (See Glazewski 113).
Schedule 4 includes "Pollution control" under Part A, but "Air Pollution" under Part B, which also includes a further item relevant to pollution: "Municipal Health Services."
Schedule 5 includes "control of public nuisances" in Part B as one of its items, which is also relevant to pollution.
Therefore, while "pollution," and specifically "air pollution," generally is a concurrent matter, the inclusion of "air pollution" in Part B of Schedule 4 means that local authorities have specific executive authority and the right of administration in respect of that matter.
Moreover, the national and provincial governments have a duty to see to the effective performance by municipalities of their functions.
As "pollution" and "air pollution" are designated concurrent matters in Schedule 4, either national or provincial government could conceivably promulgate air pollution Acts.
The Constitution is clear, however, that national government has overriding powers as regards the setting of standards. Where uniform standards are warranted, national government could invoke the provisions of the Constitution which deal with conflicting laws.
The so-called override provision, which specifically applies to conflicts between national and provincial legislation within the functional areas listed in Schedule 4, provides that national legislation prevails over provincial legislation if the former "deals with a matter that, to be dealt with effectively, requires uniformity across the nation, and the national legislation provides that uniformity by establishing [...] norms and standards." This is particularly relevant to the prevention of "polluter-haven shopping."
Furthermore, national legislation which applies uniformly across the nation prevails over provincial legislation if it is necessary for "the protection of the environment."
The differentiation between Parts A and B of Schedules 4 and 5 has to do with the respective roles of provinces and local authorities in administering the items listed in these respective parts of the two schedules.
Municipalities have executive authority and the right to administer the local-government matters listed in Part B of both Schedules 4 and 5, and the right to make and administer by-laws in this regard. They also have this right in respect of those matters specifically assigned to them by national or provincial legislation. Furthermore, Part A matters which relate to local government must be assigned to municipalities if the matter would most effectively be administered locally, and if the municipality has the capacity to administer it.
It follows from all this that either national government or provincial governments are to administer pollution laws generally, but that, in the case of air pollution, local authorities have the right to do so.
The question of the respective competence of national and provincial governments in respect of Schedules 4 and 5 has not yet been considered by the courts in an environmental matter, but analogies may be drawn from the case of Ex parte the President of the Republic of South Africa, In re: Constitutionality of the Liquor Bill. "Liquor licences" are specifically mentioned in Schedule 5, but "trade" and "industrial promotion" appear in Schedule 4: They are concurrent matters, therefore. The question considered by Cameron J in the Constitutional Court was whether or not the override provision, applied in casu, gives national government the competence to enact legislation on various facets of the liquor trade. After a thorough analysis of the position the court pointed out that
Where a matter requires regulation inter-provincially, as opposed to intra-provincially, the Constitution ensures that national government has been accorded the necessary power, whether exclusively or concurrently under Schedule 4, or through the powers of intervention accorded by section 44(2). The corollary is that where provinces are accorded exclusive powers these should be interpreted as applying primarily to matters which may appropriately be regulated intra-provincially.
The court found the Bill to be unconstitutional because, while the national government had made out a case for intervening by creating a national system of registration for manufacturers and wholesale distributors of liquor, no such case had been made out in respect of retail sales of liquor.
"In summary," writes Glazewski,
in considering the question of who does what, the starting point is that national level of government enjoys exclusive competence with respect to all matters which are not expressly assigned to the concurrent or exclusive competence of provincial legislatures, but the provinces have only those powers and functions specifically allocated to them by the Constitution.
Mechanisms
The Constitution specifically prescribes a set of principles of cooperative governance and intergovernmental relations. The Intergovernmental Relations Framework Act (IRFA) contains detailed provisions on co-operative governance, while NEMA prescribes an array of statutory mechanisms for achieving co-operative environmental governance, such as a set of national environmental management principles, planning frameworks and procedures for conflict resolution.
Notwithstanding the above array of provisions, "some commentators are of the view that these mechanisms will not achieve cooperative governance unless they are accompanied by the requisite political will" (Paterson and Kotze 34).
Intergovernmental Relations Framework Act
IRFA is the primary Act on co-operative governance. Its specific objectives include
facilitating and co-ordinating the implementation of policy and legislation, including coherent government;
monitoring such implementation;
providing for effective services; and
realising national priorities.
When read together with the conflict-resolution procedures prescribed in NEMA, IRFA "should significantly contribute to resolving disputes arising as a result of environmental governance inefficiencies" (Paterson and Kotze 124).
National Environmental Management Act
The long title of NEMA describes its purpose as follows:
"To provide for co-operative environmental governance by establishing principles for decision-making on matters affecting the environment, institutions that will promote cooperative governance and procedures for co-ordinating environmental functions exercised by organs of state;
"to provide for certain aspects of the administration and enforcement of other environmental management laws; and
"to provide for matters connected therewith."
Chapter 3 of NEMA, entitled "Procedures for Co-operative Governance," provides for the drawing up of environmental implementation plans by certain scheduled national government departments and provinces. These reflect how the activities of the organ of state affect the environment (s 13).
In addition, environmental management plans shall be drawn up by certain other scheduled national departments. These reflect how the respective functions of the departments listed involve the management of the environment (s 14).
These plans are one of the principal ways of implementing the set of principles contained in section 2 of the Act.
All provinces, and only those national government departments listed in Schedules 1 and 2, have to carry out environmental implementation and/or management plans.
Schedule 1 lists national government departments which exercise functions which "may affect the environment." These have to prepare environmental implementation plans.
Schedule 2 lists national departments exercising functions that "involve the management of the environment". These have to prepare environmental management plans.
"It is accordingly evident," writes Glazewski,
that a primary focus of the Act is not to impose a set of burdensome requirements on the private sector but to design a national environmental management system applicable to certain organs of state whether at national, provincial, and possibly local level. The private sector, however, will obviously be influenced indirectly thereby (143).
The following departments are listed in both Schedules 1 and 2:
the Department of Environmental Affairs and Tourism;
the Department of Water Affairs and Forestry; and
the Department of Land Affairs.
These, accordingly, have to carry out both environmental implementation and environmental management plans, but the two sets of plans may be consolidated.
The following departments are listed only in Schedule 1, and therefore have to prepare only environmental implementation plans:
Agriculture;
Housing;
Trade and Industry;
Transport; and
Defence.
The following are listed only in Schedule 2 and therefore have to prepare only environmental management plans:
Minerals and Energy;
Health; and
Labour.
Glazewski notes that, although the Department of Minerals and Energy is listed in Schedule 2, its activities clearly "affect the environment," and should therefore logically fall into Schedule 1 (143).
The provinces have to prepare environmental implementation plans only.
Local authorities do not appear to be directly affected by these requirements.
Both implementation and management plans have to be prepared within one year of the promulgation of the Act, and every four years thereafter.
The purpose of both environmental implementation and environmental management plans is set out in some detail. In essence, these plans must
give effect to the principle of co-operative governance;
give preference to national rather than provincial interests where the latter are unreasonable or prejudicial to the interests of the country as a whole;
enable the Minister to monitor the achievement, promotion and protection of a sustainable environment; and
co-ordinate and harmonise environmental policies, plans, programmes and decisions of national, provincial and local tiers of government to minimise duplication and promote consistency. "A confusing point," notes Glazewski, "is that this subsection refers to 'functions that may affect the environment,' implying that it only refers to Schedule 1 departments. However," he continues, "this phrase is wide enough to embrace functions involving the 'management of the environment', that is Schedule 2 departments as well" (144).
These guidelines appear to give a wide discretion to those charged with drawing up these plans (Glazewski 144).
One of the differences between environmental implementation plans and environmental management plans is illustrated by sections 13 and 14, which set out the contents of environmental implementation plans and environmental management plans respectively.
Another difference between the two kinds of plans is evident in the two headings of Schedules 1 and 2. Schedule 1 is applicable to national departments exercising functions "which may affect the environment", while Schedule 2 refers to national departments that exercise functions that "involve the management of the environment."
The environmental implementation plan should reflect how the activities of the particular organ of state affect the environment. To this end, the relevant section provides that it must contain:
a description of policies, plans and programmes that may significantly affect the environment;
a description of the manner in which the relevant national department or province will ensure that its policies, plans and programmes will comply with the principles set out in section 2, as well as any national norms and standards as envisaged under section 146(2)(b)(i) of the Constitution and set out by the Minister or by any other Minister, which have as their objective the achievement, promotion, and protection of the environment;
a description of the manner in which the relevant national department or province will ensure that its functions are exercised so as to ensure compliance with relevant legislative provisions, including the principles set out in section 2, and any national norms and standards envisaged under section 146 (2)(b)(i) of the Constitution and set out by the Minister, or by any other Minister, which are in accordance with the same objective described above; and
recommendations for the promotion of the objectives and plans for the implementation of integrated environmental management procedure and regulations referred to in Chapter 5 of the Act. "This," according to Glazewski, "is a particularly significant provision as it places the onus of compliance with the integrated environmental assessment procedures squarely in the court of the provinces and listed sectoral ministries" (145).
Environmental management plans should in contrast reflect how the respective functions of the departments listed in Schedule 2 "involve the management of the environment."
"Clearly," writes Glazewski, "this is onerous than is the case with environmental implementation plans."
The relevant section provides that the contents of environmental management plans must include:
a description of the functions exercised by the relevant department in respect of the environment;
a description of environmental norms and standards, including norms and standards contemplated in section 146(2)(b)(i) of the Constitution, set or applied by the relevant department;
a description of the policies, plans and programmes of the relevant department that are designed to ensure compliance with its policies by other organs of state and persons;
a description of priorities regarding compliance with the relevant department's policies by other organs of state and persons;
a description of the extent of compliance with the relevant department's policies by other organs of state and persons;
a description of arrangements for co-operation with other national departments and spheres of government, including any existing or proposed memoranda of understanding entered into, or delegation or assignment of powers to other organs of state, with a bearing on environmental management; and
proposals for the promotion of the objectives and plans for the implementation of the procedures and regulations referred to in Chapter 5 of the Act.
Although the two sets of plans are "very similar," Glazewski notes what he describes as "a significant difference," which is that "the implementation plans have to set out how they will give effect to the section 2 principles while management plans do not" (145).
Both must be submitted for approval to the Minister or MEC, as the case may be (s 15(1)).
Both have to comply with the norms-and-standards provisions of section 146(2)(b)(i) of the Constitution, which requires uniformity across the country.
Both environmental management plans and environmental implementation plans must be submitted to the Committee for Environmental Co-ordination (the CEC).
In the case of environmental implementation plans, the CEC must scrutinise these and either adopt them, or report to the Minister of Environment and every other responsible Minister that they do not comply with certain stipulated criteria.
Where the CEC agrees to the adoption of an environmental implementation plan, the plan must be published in the Government Gazette by the relevant organ of state within ninety days of such adoption, whereupon it becomes effective.
If the CEC finds that the plan does not comply with the principles in section 2, the purpose and objectives of such environmental implementation plans, or any relevant environmental management plan, this fact must be reported to the Minister of Environment and every other responsible Minister.
In the event of a dispute concerning the content or submission of an environmental implementation plan, this must be submitted to the Minister of Environment in consultation with the other Schedule 2 Ministers for determination by him or her, where a national department is concerned.
Where such a dispute concerns a province, it must be submitted to the Director-General for conciliation in accordance with the procedure set out in Chapter 4 of the Act.
Although environmental management plans are submitted to the CEC, they are not subject to scrutiny by the CEC, but must simply be published in the Gazette within ninety days of such submission, whereupon they become effective.
Section 16 provides for compliance with these plans in different ways. Firstly, it obliges organs of state which have prepared these plans to exercise all their functions in accordance with them. Secondly, all organs of state are obliged to submit their plans to the Director-General and the Committee annually.
Where the plans have not been submitted or adopted, the Minister may, after consultation with the Committee, recommend that the organ of state concerned comply.
The Director-General is charged with monitoring compliance with the environmental implementation and management plans, and may make inquiries or take other appropriate steps to establish whether or not the plans are being complied with.
Where the plans are not being substantially complied with, the Director-General may serve a written notice on the organ of state concerned to remedy the failure of compliance. The organ of state must respond within thirty days. If it fails to do so, the Director-General may "specify steps and a time period within which steps must be taken to remedy the failure of compliance."
If thereafter the non-compliance persists, the matter must go to conciliation in accordance with Chapter 4 of the Act.
Finally, each provincial department must ensure that the relevant environmental implementation plan is complied with by each municipality in its province.
The Director-General is obliged to keep a record of all environmental implementation and management plans, and make them available to the public.
Guidelines may be published by the Minister to assist provinces in the preparation of these plans.
Of particular relevance to this process is the fact that the preparation of these plans may "consist of the assembly of information or plans compiled for other purposes." Integrated Development Plans would fall into this category.
Environmental implementation plans
Environmental implementation plans should reflect how the activities of a particular organ of state affect the environment, focusing on the ways in which general policies and functions take account of environmental management.
Environmental implementation plans are the primary statutory instruments for the promotion of cooperative governance around environmental management, through the alignment of governmental policies, plans and programmes and decisions in respect of the environment.
Their content is prescribed in section 13.
Environmental management plans
Environmental management plans should reflect how the respective functions of the departments listed involve management of the environment. They should focus on policies and mechanisms to ensure that other bodies comply with the departments' environmental management mandate.
An environmental management plan is defined as "an environmental management tool used to ensure that undue or reasonably avoidable adverse impacts of the construction, operation and decommissioning of a project are prevented; and that the positive benefits of the projects are enhanced."
They are, therefore, very important tools for ensuring that the management actions arising from EIA processes are clearly defined and implemented through all phases of the project life-cycle.
Their content is prescribed in section 14.
See Maccsand v City of Cape Town.
Implementation compliance and enforcement
"In principle," write Paterson and Kotze, "environmental compliance and enforcement are about ensuring adherence to statutorily prescribed environmental standards." They add,
The historical application of unjust and discriminatory laws has unquestionably undermined the development of a culture of legal compliance and accordingly clouded the application of the rule of law in South Africa, a reality compounded by inadequate legal enforcement. This has negatively affected the environmental sector, and the extent of environmental non-compliance with South Africa's environmental legal framework is accordingly not surprising.
Elements of both the rationalist and normative theories of compliance are evident in South Africa's current environmental regime. Historically, wildlife and conservation authorities adopted a rationalist approach, relying on the deterrence theory, with enforcement being very much secured through arrest and criminal prosecution. On the other hand, compliance and enforcement in the industrial sector, "perhaps under the influence of large corporations," has focused more on the normative theory, adopting "a far more conciliatory approach to compliance and enforcement."
This approach to environmental compliance and enforcement "appears to have shifted somewhat in recent times. There is a current trend inherent in the conservation sector," observe Paterson and Kotze, "to entrench a more normative approach focusing on cooperation and community-based participation."
Conversely, since the establishment of the Environmental Management Inspectorate (EMI), the initial normative approach, adopted in the industrial context, has turned in a more rationalist direction, "with punishment being the key enforcement strategy for compelling compliance and achieving improved environmental performance."
The term "environmental compliance and enforcement" has adopted "its own peculiar flavour" in South Africa. Many of the prescribed environmental standards are outdated; accordingly, the traditional mechanisms used to regulate behaviour and ensure compliance therewith, such as environmental permits with associated conditions, "have on occasion proven inappropriate." A recent trend has been to seek to include other non-binding standards in the compliance effort. Compliance and enforcement in the South African context may, therefore, "also describe attempts to ensure adherence to environmental standards contained in non-binding instruments such as environmental policies, guidelines and strategies."
Paterson and Kotze believe that "this approach is not ideal." Until such time, however, as appropriate environmental standards are prescribed throughout South Africa's environmental regime, they concede that it may continue.
"So," they ask, "what would be the appropriate environmental standard on which to base the country's compliance and enforcement effort?" There are numerous international precedents, including
best available technology (BAT);
best available technology not entailing excessive cost (BATNEEC); and
best practicable environmental option (BPEO).
As South Africa begins to consolidate its environmental standards, and to review the measures for ensuring compliance with such standards, "it would appear that BPEO is set to become the desired environmental standard."
Constitutional mandate
A further factor shaping the distinct nature of South Africa's environmental compliance and enforcement effort is the Constitution, particularly the environmental right. Section 24(b) states that everyone has the right to have the environment protected, for the benefit of present and future generations, "through reasonable legislative and other measures." These "other measures," in the view of Paterson and Kotze, "no doubt include those aimed at ensuring environmental compliance and enforcement."
The High Court has recently confirmed, in Khabisi v Aquarella Investment, that the State and its organs, and their representatives, have an "onerous constitutional mandate to promote conservation and protection of the environment."
The constitutional duty to ensure environmental compliance and enforcement is amplified in a suite of environmental legislation promulgated since 1996. Cumulatively, this legislation prescribes concrete statutory mechanisms for both encouraging and compelling compliance with, and facilitating the enforcement of, South Africa's contemporary environmental regime.
More specifically, South Africa's framework environmental law, NEMA, provides for the designation of Environmental Management Inspectors (EMIs), whose specific mandate it is to monitor and enforce compliance with South Africa's environmental regime, and to investigate potential offences and breaches of it.
International obligations
In fulfilling its constitutional mandate, the South African government is also required to comply with its international compliance and enforcement obligations. Agenda 21, one of the primary international environmental instruments, expressly recognises that building strong institutions, and prescribing dedicated compliance and enforcement programmes, are important prerequisites for achieving the goal of sustainable development. This tenor was reinforced at the World Summit on Sustainable Development, held in Johannesburg in 2002.
In addition, a number of specific international environmental instruments, to which South Africa is a party, require the government to strengthen domestic compliance and enforcement capacity, in order to execute effectively the obligations set out therein.
Key cases
Bareki NO and Another v Gencor Ltd and Others 2006 (8) BCLR 920 (T).
Bato Star Fishing (Pty) Ltd v Minister of Environmental Affairs and Others 2004 (4) SA 490 (CC).
Breytenbach Appellant v Frankel & Another Respondents 1913 AD 390.
The Director: Mineral Development, Gauteng Region v Save the Vaal Environment 1999 (2) SA 709 (SCA).
Harmony Gold Mining Co Ltd v Regional Director: Free State, Dept of Water Affairs & Forestry & another [2006] JOL 17506 (SCA).
Hichange Investments (Pty) Ltd v Cape Produce Company (Pty) Ltd t/a Pelts Products and others [2004] 1 All SA 636 (E).
Khabisi NO and Another v Aquarella Investment 83 (Pty) Ltd and Others (9114/2007) [2007] ZAGPHC 116 (22 June 2007).
Maccsand (Pty) Ltd v City of Cape Town and Others (Chamber of Mines of South Africa and Another as Amici Curiae) 2012 (7) BCLR 690 (CC); 2012 (4) SA 181 (CC).
MEC: Department of Agriculture, Conservation and Environment and another v HTF Developers (Pty) Limited 2008 (4) BCLR 417 (CC).
Minister of Environmental Affairs and Tourism and Others v Phambili Fisheries (Pty) Ltd; Minister of Environmental Affairs and Tourism and Others v Bato Star Fishing (Pty) Ltd 2003 (6) SA 407 (SCA).
Minister of Health and Welfare v Woodcarb (Pty) Ltd and Another 1996 (3) SA 155 (N).
Minister of Public Works v Kyalami Ridge Environmental Association & Others 2001 (7) BCLR 652 (CC).
Minister of Water Affairs and Forestry v Stilfontein Gold Mining Co Ltd and Others 2006 (5) SA 333 (W).
Verstappen v Port Edward Town Board and Others 1994 (3) SA 569 (D).
See also
Administrative law in South Africa
Constitution of South Africa
Environmental law
International law
Insurance law
Natural Justice: Lawyers for Communities and the Environment
South African criminal law
South African law of delict
South African property law
References
Notes
Bibliography
Birnie, P.W., and A.E. Boyle. (1992). International Law and the Environment
Glazewski, J. (2009). Environmental Law in South Africa, 2nd ed.
Kidd, M. (2011). Environmental Law, 2nd ed.
National Environmental Management Act (Act 107 of 1998)
Rabie, A. (1991). "Environmental Law in search of an Identity," Stell LR (2): 202.
Sands, P. (2003). Principles of International Environmental Law, 2nd ed.
Law of South Africa
Pluralism
Pluralism in general denotes a diversity of views or stands, rather than a single approach or method.
Pluralism or pluralist may refer more specifically to:
Politics and law
Pluralism (political philosophy), the acknowledgement of a diversity of political systems
Pluralism (political theory), belief that there should be diverse and competing centres of power in society
Legal pluralism, the existence of differing legal systems in a population or area
Pluralist democracy, a political system with more than one center of power
Philosophy
Pluralism (philosophy), a doctrine according to which many basic substances make up reality
Pluralist school, a Greek school of pre-Socratic philosophers
Epistemological pluralism or methodological pluralism, the view that some phenomena require multiple methods to account for their nature
Value pluralism, the idea that several values may be equally correct and yet in conflict with each other
Religion
Religious pluralism, the acceptance of all religious paths as equally valid, promoting coexistence
Holding multiple ecclesiastical offices; see "Pluralism" at Benefice
Pluralism Project, a Harvard-affiliated project on religious diversity in the United States
Other uses
Cosmic pluralism, the belief in numerous other worlds beyond the Earth, which may possess the conditions suitable for life
Cultural pluralism, when small groups within a larger society maintain their unique cultural identities
Media pluralism, the representation of different cultural groups and political opinions in the media
Pluralist commonwealth, a systemic model of wealth democratization
Pluralism in economics, a campaign to enrich the academic discipline of economics
See also
Plurality (disambiguation)
Journal of Legal Pluralism, a peer-reviewed academic journal that focuses on legal pluralism
Global Centre for Pluralism, an international centre for research of pluralist societies
Multiculturalism, the existence of multiple cultural traditions within a single country
Postmodernism, a broad movement in the late-20th century that is skeptical toward grand narratives or ideologies
Biosafety
Biosafety is the prevention of large-scale loss of biological integrity, focusing both on ecology and human health.
These prevention mechanisms include the conduction of regular reviews of biosafety in laboratory settings, as well as strict guidelines to follow. Biosafety is used to protect from harmful incidents. Many laboratories handling biohazards employ an ongoing risk management assessment and enforcement process for biosafety. Failures to follow such protocols can lead to increased risk of exposure to biohazards or pathogens. Human error and poor technique contribute to unnecessary exposure and compromise the best safeguards set into place for protection.
The international Cartagena Protocol on Biosafety deals primarily with the agricultural definition but many advocacy groups seek to expand it to include post-genetic threats: new molecules, artificial life forms, and even robots which may compete directly in the natural food chain.
Biosafety in agriculture, chemistry, medicine, exobiology and beyond will likely require the application of the precautionary principle, and a new definition focused on the biological nature of the threatened organism rather than the nature of the threat.
When biological warfare or new, currently hypothetical, threats (i.e., robots, new artificial bacteria) are considered, biosafety precautions are generally not sufficient. The new field of biosecurity addresses these complex threats.
Biosafety level refers to the stringency of biocontainment precautions deemed necessary by the Centers for Disease Control and Prevention (CDC) for laboratory work with infectious materials.
Typically, institutions that experiment with or create potentially harmful biological material will have a committee or board of supervisors that is in charge of the institution's biosafety. They create and monitor the biosafety standards that must be met by labs in order to prevent the accidental release of potentially destructive biological material. (Note that in the US, several groups are involved, and efforts are being made to improve processes for government-run labs, but there is no unifying regulatory authority for all labs.)
Biosafety is related to several fields:
In ecology (referring to imported life forms from beyond ecoregion borders),
In agriculture (reducing the risk of alien viral or transgenic genes, genetic engineering or prions such as BSE/"MadCow", reducing the risk of food bacterial contamination)
In medicine (referring to organs or tissues from biological origin, or genetic therapy products, virus; levels of lab containment protocols measured as 1, 2, 3, 4 in rising order of danger),
In chemistry (i.e., nitrates in water, PCB levels affecting fertility)
In exobiology (i.e., NASA's policy for containing alien microbes that may exist on space samples. See planetary protection and interplanetary contamination), and
In synthetic biology (referring to the risks associated with this type of lab practice)
Hazards
Chemical hazards typically found in laboratory settings include carcinogens, toxins, irritants, corrosives, and sensitizers. Biological hazards include viruses, bacteria, fungi, prions, and biologically derived toxins, which may be present in body fluids and tissue, cell culture specimens, and laboratory animals. Routes of exposure for chemical and biological hazards include inhalation, ingestion, skin contact, and eye contact.
Physical hazards include ergonomic hazards, ionizing and non-ionizing radiation, and noise hazards. Additional safety hazards include burns and cuts from autoclaves, injuries from centrifuges, compressed gas leaks, cold burns from cryogens, electrical hazards, fires, injuries from machinery, and falls.
In synthetic biology
A complete understanding of experimental risks associated with synthetic biology is helping to enforce the knowledge and effectiveness of biosafety.
With the potential future creation of man-made unicellular organisms, some are beginning to consider the effect that these organisms will have on biomass already present. Scientists estimate that within the next few decades, organism design will be sophisticated enough to accomplish tasks such as creating biofuels and lowering the levels of harmful substances in the atmosphere. Scientists who favor the development of synthetic biology claim that the use of biosafety mechanisms such as suicide genes and nutrient dependencies will ensure the organisms cannot survive outside of the lab setting in which they were originally created. Organizations like the ETC Group argue that regulations should control the creation of organisms that could potentially harm existing life. They also argue that the development of these organisms will simply shift the consumption of petroleum to the utilization of biomass in order to create energy. These organisms can harm existing life by affecting the prey/predator food chain and reproduction between species, as well as by competing against other species (including species at risk) or acting as invasive species.
Synthetic vaccines are now being produced in the lab. These have caused a lot of excitement in the pharmaceutical industry as they will be cheaper to produce, allow quicker production, as well as enhance the knowledge of virology and immunology.
In medicine, healthcare settings and laboratories
Biosafety, in medicine and health care settings, specifically refers to proper handling of organs or tissues from biological origin, or genetic therapy products, viruses with respect to the environment, to ensure the safety of health care workers, researchers, lab staff, patients, and the general public. Laboratories are assigned a biosafety level numbered 1 through 4 based on their potential biohazard risk level. The employing authority, through the laboratory director, is responsible for ensuring that there is adequate surveillance of the health of laboratory personnel. The objective of such surveillance is to monitor for occupationally acquired diseases. The World Health Organization attributes human error and poor technique as the primary cause of mishandling of biohazardous materials.
Biosafety is also becoming a global concern and requires multilevel resources and international collaboration to monitor, prevent and correct accidents from unintended and malicious release, and also to prevent bioterrorists from obtaining biological samples with which to create biological weapons of mass destruction. Even people outside of the health sector need to be involved: in the case of the Ebola outbreak, the impact that it had on businesses and travel meant that private sectors and international banks together pledged more than $2 billion to combat the epidemic. The Bureau of International Security and Nonproliferation (ISN) is responsible for managing a broad range of U.S. nonproliferation policies, programs, agreements, and initiatives, and biological weapons are one of their concerns.
Biosafety has its risks and benefits. All stakeholders must try to find a balance between cost-effectiveness of safety measures and use evidence-based safety practices and recommendations, measure the outcomes and consistently reevaluate the potential benefits that biosafety represents for human health.
Biosafety level designations are based on a composite of the design features, construction, containment facilities, equipment, practices and operational procedures required for working with agents from the various risk groups.
Classification of biohazardous materials is subjective and the risk assessment is determined by the individuals most familiar with the specific characteristics of the organism. There are several factors taken into account when assessing an organism and the classification process.
Risk Group 1: (no or low individual and community risk) A microorganism that is unlikely to cause human or animal disease.
Risk Group 2: (moderate individual risk, low community risk) A pathogen that can cause human or animal disease but is unlikely to be a serious hazard to laboratory workers, the community, livestock or the environment. Laboratory exposures may cause serious infection, but effective treatment and preventive measures are available and the risk of spread of infection is limited.
Risk Group 3: (high individual risk, low community risk) A pathogen that usually causes serious human or animal disease but does not ordinarily spread from one infected individual to another. Effective treatment and preventive measures are available.
Risk Group 4: (high individual and community risk) A pathogen that usually causes serious human or animal disease and that can be readily transmitted from one individual to another, directly or indirectly. Effective treatment and preventive measures are not usually available.
See the World Health Organization Biosafety Laboratory Guidelines (4th edition, 2020).
Investigations have shown that there are hundreds of unreported biosafety accidents, with laboratories self-policing the handling of biohazardous materials and lack of reporting. Poor record keeping, improper disposal, and mishandling biohazardous materials result in increased risks of biochemical contamination for both the public and environment.
Along with the precautions taken during the handling process of biohazardous materials, the World Health Organization recommends:
Staff training should always include information on safe methods for highly hazardous procedures that are commonly encountered by all laboratory personnel, and which involve:
Inhalation risks (i.e. aerosol production) when using loops, streaking agar plates, pipetting, making smears, opening cultures, taking blood/serum samples, centrifuging, etc.
Ingestion risks when handling specimens, smears and cultures
Risks of percutaneous exposures when using syringes and needles
Bites and scratches when handling animals
Handling of blood and other potentially hazardous pathological materials
Decontamination and disposal of infectious material.
Biosafety management in laboratory
First, the laboratory director, who holds immediate responsibility for the laboratory, is tasked with ensuring the development and adoption of a biosafety management plan as well as a safety or operations manual. Second, the laboratory supervisor, who reports to the laboratory director, is responsible for organizing regular training sessions on laboratory safety.
Third, personnel must be informed about any special hazards and be required to review the safety or operations manual and adhere to established practices and procedures. The laboratory supervisor is responsible for ensuring that all personnel have a clear understanding of these guidelines, and a copy of the safety or operations manual should be readily available within the laboratory. Finally, adequate medical assessment, monitoring, and treatment must be made available to all personnel when needed, and comprehensive medical records should be maintained.
Policy and practice in the United States
Legal information
In June 2009, the Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight recommended the formation of an agency to coordinate high safety risk level labs (3 and 4), and voluntary, non-punitive measures for incident reporting. However, it is unclear as to what changes may or may not have been implemented following their recommendations.
United States Code of Federal Regulations
The United States Code of Federal Regulations is the codification, or collection of laws, specific to a jurisdiction that represents broad areas subject to federal regulation. Title 42 of the Code of Federal Regulations addresses laws concerning public health issues, including biosafety, which can be found under the citation 42 CFR 73 to 42 CFR 73.21 by accessing the US Code of Federal Regulations (CFR) website.
Title 42 Section 73 of the CFR addresses specific aspects of biosafety, including occupational safety and health, transportation of biohazardous materials, and safety plans for laboratories using potential biohazards. While biocontainment is defined in the Biosafety in Microbiological and Biomedical Laboratories and Primary Containment for Biohazards: Selection, Installation and Use of Biosafety Cabinets manuals, available at the Centers for Disease Control and Prevention website, much of the design, implementation and monitoring of protocols is left up to state and local authorities.
The United States CFR states "An individual or entity required to register [as a user of biological agents] must develop and implement a written biosafety plan that is commensurate with the risk of the select agent or toxin" which is followed by three recommended sources for laboratory reference:
The CDC/NIH publication, "Biosafety in Microbiological and Biomedical Laboratories."
The Occupational Safety and Health Administration (OSHA) regulations in 29 CFR parts 1910.1200 and 1910.1450.
The "NIH Guidelines for Research Involving Recombinant DNA Molecules" (NIH Guidelines).
While clearly the needs of biocontainment and biosafety measures vary across government, academic and private industry laboratories, biological agents pose similar risks independent of their locale. Laws relating to biosafety are not easily accessible and there are few federal regulations that are readily available for a potential trainee to reference outside of the publications recommended in 42 CFR 73.12. Therefore, training is the responsibility of lab employers and is not consistent across various laboratory types thereby increasing the risk of accidental release of biological hazards that pose serious health threats to the humans, animals and the ecosystem as a whole.
Agency guidance
Many government agencies have made guidelines and recommendations in an effort to increase biosafety measures across laboratories in the United States. Agencies involved in producing policies surrounding biosafety within a hospital, pharmacy or clinical research laboratory include the CDC, FDA, USDA, DHHS, DoT, EPA and potentially other local organizations, including public health departments. The federal government does set some standards and recommendations for states, most of which fall under the Occupational Safety and Health Act of 1970, but currently there is no single federal regulating agency directly responsible for ensuring the safety of biohazardous handling, storage, identification, clean-up and disposal. In addition to the CDC, the Environmental Protection Agency has some of the most accessible information on ecological impacts of biohazards, how to handle spills, reporting guidelines and proper disposal of agents dangerous to the environment. Many of these agencies have their own manuals and guidance documents relating to training and certain aspects of biosafety directly tied to their agency's scope, including transportation, storage and handling of blood-borne pathogens (OSHA, IATA). The American Biological Safety Association (ABSA) has a list of such agencies and links to their websites, along with links to publications and guidance documents to assist in risk assessment, lab design and adherence to laboratory exposure control plans. Many of these agencies were members of the 2009 Task Force on Biosafety. There was also a formation of a Blue Ribbon Study Panel on Biodefense, but this is more concerned with national defense programs and biosecurity.
Ultimately states and local governments, as well as private industry labs, are left to make the final determinations for their own biosafety programs, which vary widely in scope and enforcement across the United States. Not all state programs address biosafety from all necessary perspectives, which should not just include personal safety, but also emphasize a full understanding among laboratory personnel of quality control and assurance, potential exposure impacts on the environment, and general public safety.
Toby Ord puts into question whether the current international conventions regarding biotechnology research and development regulation, and self-regulation by biotechnology companies and the scientific community are adequate.
State occupational safety plans are often focused on transportation, disposal, and risk assessment, allowing caveats for safety audits, but ultimately leave the training in the hands of the employer. 22 states have OSHA-approved occupational safety plans that are audited annually for effectiveness. These plans apply to private and public sector workers, not necessarily state or government workers, and not all specifically have a comprehensive program for all aspects of biohazard management from start to finish. Sometimes biohazard management plans are limited only to workers in transportation-specific job titles. The enforcement and training on such regulations can vary from lab to lab based on the state's plans for occupational health and safety. With the exception of DoD lab personnel, CDC lab personnel, first responders, and DoT employees, enforcement of training is inconsistent; while training is required to be done, specifics on the breadth and frequency of refresher training do not seem consistent from state to state, penalties may never be assessed without larger regulating bodies being aware of non-compliance, and enforcement is limited.
Medical waste management in the United States
Medical waste management was identified as an issue in the 1980s, with the Medical Waste Tracking Act of 1988 becoming the new standard in biohazard waste disposal.
Although the Federal Government, EPA and DOT provide some oversight of regulated medical waste storage, transportation, and disposal, the majority of biohazardous medical waste is regulated at the state level. Each state is responsible for the regulation and management of its own biohazardous waste, with each state varying in its regulatory process. Record keeping of biohazardous waste also varies between states.
Medical healthcare centers, hospitals, veterinary clinics, clinical laboratories and other facilities generate over one million tons of waste each year. Although the majority of this waste is as harmless as common household waste, as much as 15 percent of this waste poses a potential infection hazard, according to the Environmental Protection Agency (EPA). Medical waste is required to be rendered non-infectious before it can be disposed of. There are several different methods to treat and dispose of biohazardous waste. In the United States, the primary methods for treatment and disposal of biohazard, medical and sharps waste may include:
Incineration
Microwave
Autoclaves
Mechanical/Chemical Disinfection
Irradiation
Different forms of biohazardous waste require different treatments for their proper waste management. This is determined largely by each state's regulations.
Incidents of non-compliance and reform efforts
The United States Government has made it clear that biosafety is to be taken very seriously. In 2014, incidents with anthrax and Ebola pathogens in CDC laboratories prompted the CDC director Tom Frieden to issue a moratorium for research with these types of select agents. An investigation concluded that there was a lack of adherence to safety protocols and "inadequate safeguards" in place. This indicated a lack of proper training, or reinforcement of training and supervision on a regular basis, for lab personnel.
Following these incidents, the CDC established an External Laboratory Safety Workgroup (ELSW), and suggestions have been made to reform effectiveness of the Federal Select Agent Program. The White House issued a report on national biosafety priorities in 2015, outlining next steps for a national biosafety and security program, and addressed biological safety needs for health research, national defense, and public safety.
In 2016, the Association of Public Health Laboratories (APHL) had a presentation at their annual meeting focused on improving biosafety culture. This same year, The UPMC Center for Health Security issued a case study report including reviews of ten different nations' current biosafety regulations, including the United States. Their goal was to "provide a foundation for identifying national-level biosafety norms and enable initial assessment of biosafety priorities necessary for developing effective national biosafety regulation and oversight."
See also
Biological hazard
Cartagena Protocol on Biosafety
Centers for Disease Control
European BioSafety Association
Interplanetary contamination
Quarantine
References
External links
WHO Biosafety Manual
CDC Biosafety pages
International Centre for Genetic Engineering and Biotechnology (ICGEB): Biosafety pages
Greenpeace safe trade campaign
American Biological Safety Association
Biosafety in Microbiological and Biomedical Laboratories
Genetic engineering
Bioethics
Safety
Biological hazards
Animism
Animism (from Latin anima, meaning 'breath, spirit, life') is the belief that objects, places, and creatures all possess a distinct spiritual essence. Animism perceives all things—animals, plants, rocks, rivers, weather systems, human handiwork, and in some cases words—as being animated, having agency and free will. Animism is used in anthropology of religion as a term for the belief system of many Indigenous peoples in contrast to the relatively more recent development of organized religions. Animism is a metaphysical belief which focuses on the supernatural universe: specifically, on the concept of the immaterial soul.
Although each culture has its own mythologies and rituals, animism is said to describe the most common, foundational thread of indigenous peoples' "spiritual" or "supernatural" perspectives. The animistic perspective is so widely held and inherent to most indigenous peoples that they often do not even have a word in their languages that corresponds to "animism" (or even "religion"). The term "animism" is an anthropological construct.
Largely due to such ethnolinguistic and cultural discrepancies, opinions differ on whether animism refers to an ancestral mode of experience common to indigenous peoples around the world or to a full-fledged religion in its own right. The currently accepted definition of animism was only developed in the late 19th century (1871) by Edward Tylor. It is "one of anthropology's earliest concepts, if not the first."
Animism encompasses beliefs that all material phenomena have agency, that there exists no categorical distinction between the spiritual and physical world, and that soul, spirit, or sentience exists not only in humans but also in other animals, plants, rocks, geographic features (such as mountains and rivers), and other entities of the natural environment. Examples include water sprites, vegetation deities, and tree spirits, among others. Animism may further attribute a life force to abstract concepts such as words, true names, or metaphors in mythology. Some members of the non-tribal world also consider themselves animists, such as author Daniel Quinn, sculptor Lawson Oyekan, and many contemporary Pagans.
Etymology
English anthropologist Sir Edward Tylor initially wanted to describe the phenomenon as spiritualism, but he realized that it would cause confusion with the modern religion of spiritualism, which was then prevalent across Western nations. He adopted the term animism from the writings of German scientist Georg Ernst Stahl, who had developed the term in 1708 as a biological theory that souls formed the vital principle, and that the normal phenomena of life and the abnormal phenomena of disease could be traced to spiritual causes.
The origin of the word comes from the Latin word anima, which means life or soul.
The first known usage in English appeared in 1819.
"Old animism" definitions
Earlier anthropological perspectives, which have since been termed the old animism, were concerned with knowledge on what is alive and what factors make something alive. The old animism assumed that animists were individuals who were unable to understand the difference between persons and things. Critics of the old animism have accused it of preserving "colonialist and dualistic worldviews and rhetoric."
Edward Tylor's definition
The idea of animism was developed by anthropologist Sir Edward Tylor through his 1871 book Primitive Culture, in which he defined it as "the general doctrine of souls and other spiritual beings in general." According to Tylor, animism often includes "an idea of pervading life and will in nature;" a belief that natural objects other than humans have souls. This formulation was little different from that proposed by Auguste Comte as "fetishism", but the terms now have distinct meanings.
For Tylor, animism represented the earliest form of religion, being situated within an evolutionary framework of religion that has developed in stages and which will ultimately lead to humanity rejecting religion altogether in favor of scientific rationality. Thus, for Tylor, animism was fundamentally seen as a mistake, a basic error from which all religions grew. He did not believe that animism was inherently illogical, but he suggested that it arose from early humans' dreams and visions and thus was a rational system. However, it was based on erroneous, unscientific observations about the nature of reality. Stringer notes that his reading of Primitive Culture led him to believe that Tylor was far more sympathetic in regard to "primitive" populations than many of his contemporaries and that Tylor expressed no belief that there was any difference between the intellectual capabilities of "savage" people and Westerners.
The idea that there had once been "one universal form of primitive religion" (whether labelled animism, totemism, or shamanism) has been dismissed as "unsophisticated" and "erroneous" by archaeologist Timothy Insoll, who stated that "it removes complexity, a precondition of religion now, in all its variants."
Social evolutionist conceptions
Tylor's definition of animism was part of a growing international debate on the nature of "primitive society" by lawyers, theologians, and philologists. The debate defined the field of research of a new science: anthropology. By the end of the 19th century, an orthodoxy on "primitive society" had emerged, but few anthropologists still would accept that definition. The "19th-century armchair anthropologists" argued that "primitive society" (an evolutionary category) was ordered by kinship and divided into exogamous descent groups related by a series of marriage exchanges. Their religion was animism, the belief that natural species and objects had souls.
With the development of private property, the descent groups were displaced by the emergence of the territorial state. These rituals and beliefs eventually evolved over time into the vast array of "developed" religions. According to Tylor, as society became more scientifically advanced, fewer members of that society would believe in animism. However, any remnant ideologies of souls or spirits, to Tylor, represented "survivals" of the original animism of early humanity.
Confounding animism with totemism
In 1869 (three years after Tylor proposed his definition of animism), Edinburgh lawyer John Ferguson McLennan, argued that the animistic thinking evident in fetishism gave rise to a religion he named totemism. Primitive people believed, he argued, that they were descended from the same species as their totemic animal. Subsequent debate by the "armchair anthropologists" (including J. J. Bachofen, Émile Durkheim, and Sigmund Freud) remained focused on totemism rather than animism, with few directly challenging Tylor's definition. Anthropologists "have commonly avoided the issue of animism and even the term itself, rather than revisit this prevalent notion in light of their new and rich ethnographies."
According to anthropologist Tim Ingold, animism shares similarities with totemism but differs in its focus on individual spirit beings which help to perpetuate life, whereas totemism more typically holds that there is a primary source, such as the land itself or the ancestors, who provide the basis to life. Certain indigenous religious groups such as the Australian Aboriginals are more typically totemic in their worldview, whereas others like the Inuit are more typically animistic.
From his studies into child development, Jean Piaget suggested that children were born with an innate animist worldview in which they anthropomorphized inanimate objects and that it was only later that they grew out of this belief. Conversely, from her ethnographic research, Margaret Mead argued the opposite, believing that children were not born with an animist worldview but that they became acculturated to such beliefs as they were educated by their society.
Stewart Guthrie saw animism—or "attribution" as he preferred it—as an evolutionary strategy to aid survival. He argued that both humans and other animal species view inanimate objects as potentially alive as a means of being constantly on guard against potential threats. His suggested explanation, however, did not deal with the question of why such a belief became central to the religion. In 2000, Guthrie suggested that the "most widespread" concept of animism was that it was the "attribution of spirits to natural phenomena such as stones and trees."
"New animism" non-archaic definitions
Many anthropologists ceased using the term animism, deeming it to be too close to early anthropological theory and religious polemic. However, the term had also been claimed by religious groups—namely, Indigenous communities and nature worshippers—who felt that it aptly described their own beliefs, and who in some cases actively identified as "animists." It was thus readopted by various scholars, who began using the term in a different way, placing the focus on knowing how to behave toward other beings, some of whom are not human. As religious studies scholar Graham Harvey stated, while the "old animist" definition had been problematic, the term animism was nevertheless "of considerable value as a critical, academic term for a style of religious and cultural relating to the world."
Hallowell and the Ojibwe
The new animism emerged largely from the publications of anthropologist Irving Hallowell, produced on the basis of his ethnographic research among the Ojibwe communities of Canada in the mid-20th century. For the Ojibwe encountered by Hallowell, personhood did not require human-likeness, but rather humans were perceived as being like other persons, who for instance included rock persons and bear persons. For the Ojibwe, these persons were each willful beings, who gained meaning and power through their interactions with others; through respectfully interacting with other persons, they themselves learned to "act as a person".
Hallowell's approach to the understanding of Ojibwe personhood differed strongly from prior anthropological concepts of animism. He emphasized the need to challenge the modernist, Western perspectives of what a person is, by entering into a dialogue with different worldviews. Hallowell's approach influenced the work of anthropologist Nurit Bird-David, who produced a scholarly article reassessing the idea of animism in 1999. Seven comments from other academics were provided in the journal, debating Bird-David's ideas.
Postmodern anthropology
More recently, postmodern anthropologists are increasingly engaging with the concept of animism. Modernism is characterized by a Cartesian subject-object dualism that divides the subjective from the objective, and culture from nature. In the modernist view, animism is the inverse of scientism, and hence, is deemed inherently invalid by some anthropologists. Drawing on the work of Bruno Latour, some anthropologists question modernist assumptions and theorize that all societies continue to "animate" the world around them. In contrast to Tylor's reasoning, however, this "animism" is considered to be more than just a remnant of primitive thought. More specifically, the "animism" of modernity is characterized by humanity's "professional subcultures", as in the ability to treat the world as a detached entity within a delimited sphere of activity.
Human beings continue to create personal relationships with elements of the aforementioned objective world, such as pets, cars, or teddy bears, which are recognized as subjects. As such, these entities are "approached as communicative subjects rather than the inert objects perceived by modernists." These approaches aim to avoid the modernist assumption that the environment consists of a physical world distinct from the world of humans, as well as the modernist conception of the person being composed dualistically of a body and a soul.
Nurit Bird-David argues that:
She explains that animism is a "relational epistemology" rather than a failure of primitive reasoning. That is, self-identity among animists is based on their relationships with others, rather than any distinctive features of the "self". Instead of focusing on the essentialized, modernist self (the "individual"), persons are viewed as bundles of social relationships ("dividuals"), some of which include "superpersons" (i.e. non-humans).
Stewart Guthrie expressed criticism of Bird-David's attitude towards animism, believing that it promulgated the view that "the world is in large measure whatever our local imagination makes it." This, he felt, would result in anthropology abandoning "the scientific project."
Like Bird-David, Tim Ingold argues that animists do not see themselves as separate from their environment:
Rane Willerslev extends the argument by noting that animists reject this Cartesian dualism and that the animist self identifies with the world, "feeling at once within and apart from it so that the two glide ceaselessly in and out of each other in a sealed circuit". The animist hunter is thus aware of himself as a human hunter, but, through mimicry, is able to assume the viewpoint, senses, and sensibilities of his prey, to be one with it. Shamanism, in this view, is an everyday attempt to influence spirits of ancestors and animals, by mirroring their behaviors, as the hunter does its prey.
Ethical and ecological understanding
Cultural ecologist and philosopher David Abram proposed an ethical and ecological understanding of animism, grounded in the phenomenology of sensory experience. In his books The Spell of the Sensuous and Becoming Animal, Abram suggests that material things are never entirely passive in our direct perceptual experience, holding rather that perceived things actively "solicit our attention" or "call our focus", coaxing the perceiving body into an ongoing participation with those things.
In the absence of intervening technologies, he suggests that sensory experience is inherently animistic in that it discloses a material field that is animate and self-organizing from the beginning. David Abram used contemporary cognitive and natural science, as well as the perspectival worldviews of diverse indigenous oral cultures, to propose a richly pluralist and story-based cosmology in which matter is alive. He suggested that such a relational ontology is in close accord with humanity's spontaneous perceptual experience by drawing attention to the senses, and to the primacy of sensuous terrain, enjoining a more respectful and ethical relation to the more-than-human community of animals, plants, soils, mountains, waters, and weather-patterns that materially sustains humanity.
In contrast to a long-standing tendency in the Western social sciences, which commonly provide rational explanations of animistic experience, Abram develops an animistic account of reason itself. He holds that civilised reason is sustained only by intensely animistic participation between human beings and their own written signs. For instance, as soon as someone reads letters on a page or screen, they can "see what it says"—the letters speak as much as nature spoke to pre-literate peoples. Reading can usefully be understood as an intensely concentrated form of animism, one that effectively eclipses all of the other, older, more spontaneous forms of animistic participation in which humans were once engaged.
Relation to the concept of 'I-thou'
Religious studies scholar Graham Harvey defined animism as the belief "that the world is full of persons, only some of whom are human, and that life is always lived in relationship with others." He added that it is therefore "concerned with learning how to be a good person in respectful relationships with other persons."
In his Handbook of Contemporary Animism (2013), Harvey identifies the animist perspective in line with Martin Buber's "I-thou" as opposed to "I-it". In such, Harvey says, the animist takes an I-thou approach to relating to the world, whereby objects and animals are treated as a "thou", rather than as an "it".
Religion
There is ongoing disagreement (and no general consensus) as to whether animism is merely a singular, broadly encompassing religious belief or a worldview in and of itself, comprising many diverse mythologies found worldwide in many diverse cultures. This also raises a controversy regarding the ethical claims animism may or may not make: whether animism ignores questions of ethics altogether; or, by endowing various non-human elements of nature with spirituality or personhood, it in fact promotes a complex ecological ethics.
Concepts
Distinction from pantheism
Animism is not the same as pantheism, although the two are sometimes confused. Moreover, some religions are both pantheistic and animistic. One of the main differences is that while animists believe everything to be spiritual in nature, they do not necessarily see the spiritual nature of everything in existence as being united (monism) the way pantheists do. As a result, animism puts more emphasis on the uniqueness of each individual soul. In pantheism, everything shares the same spiritual essence, rather than having distinct spirits or souls. For example, Giordano Bruno equated the world soul with God and espoused a pantheistic animism.
Fetishism / totemism
In many animistic world views, the human being is often regarded as on a roughly equal footing with other animals, plants, and natural forces.
African indigenous religions
Traditional African religions: most religious traditions of Sub-Saharan Africa are basically a complex form of animism with polytheistic and shamanistic elements and ancestor worship.
In East Africa, the Kerma culture displayed animistic elements similar to other traditional African religions. In contrast to the later polytheistic Napatan and Meroitic periods, its use of animals on amulets and its esteemed lion antiquities suggest an animistic rather than a polytheistic culture. The Kermans likely treated Jebel Barkal as a special sacred site, and passed this veneration of the mesa on to the Kushites and Egyptians.
In North Africa, the traditional Berber religion includes the traditional polytheistic, animist, and in some rare cases, shamanistic, religions of the Berber people.
Asian origin religions
Indian-origin religions
In the Indian-origin religions, namely Hinduism, Buddhism, Jainism, and Sikhism, the animistic aspects of nature worship and ecological conservation are part of the core belief system.
Matsya Purana, a Hindu text, has a Sanskrit language shloka (hymn), which explains the importance of reverence of ecology. It states: "A pond equals ten wells, a reservoir equals ten ponds, while a son equals ten reservoirs, and a tree equals ten sons." Indian religions worship trees such as the Bodhi Tree and numerous superlative banyan trees, conserve the sacred groves of India, revere the rivers as sacred, and worship the mountains and their ecology.
Panchavati are the sacred trees in Indic religions: sacred groves containing five types of trees, usually chosen from among the Vata (Ficus benghalensis, Banyan), Ashvattha (Ficus religiosa, Peepal), Bilva (Aegle marmelos, Bengal Quince), Amalaki (Phyllanthus emblica, Indian Gooseberry, Amla), Ashoka (Saraca asoca, Ashok), Udumbara (Ficus racemosa, Cluster Fig, Gular), Nimba (Azadirachta indica, Neem) and Shami (Prosopis spicigera, Indian Mesquite).
The banyan is considered holy in several religious traditions of India. The Ficus benghalensis is the national tree of India. Vat Purnima is a Hindu festival related to the banyan tree, and is observed by married women in North India and in the Western Indian states of Maharashtra, Goa, Gujarat. For three days of the month of Jyeshtha in the Hindu calendar (which falls in May–June in the Gregorian calendar) married women observe a fast, tie threads around a banyan tree, and pray for the well-being of their husbands. Thimmamma Marrimanu, sacred to Indian religions, has branches spread over five acres and was listed as the world's largest banyan tree in the Guinness World Records in 1989.
In Hinduism, the leaf of the banyan tree is said to be the resting place for the god Krishna. In the Bhagavad Gita, Krishna said, "There is a banyan tree which has its roots upward and its branches down, and the Vedic hymns are its leaves. One who knows this tree is the knower of the Vedas." (Bg 15.1)
In Buddhism's Pali canon, the banyan (Pali: nigrodha) is referenced numerous times. Typical metaphors allude to the banyan's epiphytic nature, likening the banyan's supplanting of a host tree as comparable to the way sensual desire (kāma) overcomes humans.
Mun (also known as Munism or Bongthingism) is the traditional polytheistic, animist, shamanistic, and syncretic religion of the Lepcha people.
Sanamahism is an ethnic religion of the Meitei people of Northeast India. It is a polytheistic and animist religion and is named after Lainingthou Sanamahi, one of the most important deities of the Meitei faith.
Chinese religions
Shendao is a term originating in Chinese folk religion, influenced by Mohist, Confucian and Taoist philosophy, referring to the divine order of nature or the Wuxing.
The Shang dynasty's state religion was practiced from 1600 BCE to 1046 BCE, and was built on the idea of spiritualizing natural phenomena.
Japan and Shinto
Shinto is the traditional Japanese folk religion and has many animist aspects. The kami, a class of supernatural beings, are central to Shinto. All things, including natural forces and well-known geographical locations, are thought to be home to the kami. The kami are worshipped at kamidana household shrines, family shrines, and jinja public shrines.
The Ryukyuan religion of the Ryukyu Islands is distinct from Shinto, but shares similar characteristics.
Kalash people
The Kalash people of Northern Pakistan follow an ancient animistic religion that has been identified with an early form of Hinduism.
The Kalash, or Kalasha, are an Indo-Aryan indigenous people residing in the Chitral District of the Khyber-Pakhtunkhwa province of Pakistan.
They are considered unique among the people of Pakistan. They are also considered to be Pakistan's smallest ethnoreligious group, and traditionally practice what authors characterise as a form of animism. During the mid-20th century an attempt was made to force a few Kalasha villages in Pakistan to convert to Islam, but the people fought the conversion and, once official pressure was removed, the vast majority resumed the practice of their own religion. Nevertheless, some Kalasha have since converted to Islam, despite being shunned afterward by their community for having done so.
The term is used to refer to many distinct people including the Väi, the Čima-nišei, the Vântä, plus the Ashkun- and Tregami-speakers. The Kalash are considered to be an indigenous people of Asia, with their ancestors migrating to Chitral Valley from another location possibly further south, which the Kalash call "Tsiyam" in their folk songs and epics.
They claim descent from the armies of Alexander the Great that were left behind during his campaign, though there is no evidence that he ever passed through the area.
The neighbouring Nuristani people of the adjacent Nuristan (historically known as Kafiristan) province of Afghanistan once had the same culture and practised a faith very similar to that of the Kalash, differing in a few minor particulars.
The first historically recorded Islamic invasions of their lands were by the Ghaznavids in the 11th century while they themselves are first attested in 1339 during Timur's invasions. Nuristan had been forcibly converted to Islam in 1895–96, although some evidence has shown the people continued to practice their customs. The Kalash of Chitral have maintained their own separate cultural traditions.
Korea
Muism, the native Korean belief, has many animist aspects. The various deities, called kwisin, are capable of interacting with humans and causing problems if they are not honoured appropriately.
Philippines indigenous religions
In the indigenous Philippine folk religions, the pre-colonial religions of the Philippines and Philippine mythology, animism is part of the core beliefs, as demonstrated by the belief in Anito and Bathala as well as the conservation and veneration of sacred Indigenous Philippine shrines, forests, mountains and sacred grounds.
Anito (lit. '[ancestor] spirit') refers to the various indigenous shamanistic folk religions of the Philippines, led by female or feminized male shamans known as babaylan. It includes belief in a spirit world existing alongside and interacting with the material world, as well as the belief that everything has a spirit, from rocks and trees to animals and humans to natural phenomena.
In indigenous Filipino belief, Bathala is the omnipotent deity, whose name was derived from the Sanskrit word bhattara, a term for the Hindu supreme deity, as one of the ten avatars of the Hindu god Vishnu. The omnipotent Bathala also presides over the spirits of ancestors called Anito. Anitos serve as intermediaries between mortals and the divine, much as Agni in Hinduism holds access to the divine realms; for this reason they are invoked first and are the first to receive offerings, regardless of the deity the worshipper wants to pray to.
Abrahamic religions
Animism also has influences in Abrahamic religions.
The Old Testament and the Wisdom literature preach the omnipresence of God (Jeremiah 23:24; Proverbs 15:3; 1 Kings 8:27), and God is bodily present in the incarnation of his Son, Jesus Christ. (Gospel of John 1:14, Colossians 2:9). Animism is not peripheral to Christian identity but is its nurturing home ground, its axis mundi. In addition to the conceptual work the term animism performs, it provides insight into the relational character and common personhood of material existence.
The Christian spiritual mapping movement is based upon a similar worldview to that of animism. It involves researching and mapping the spiritual and social history of an area in order to determine the demon (territorial spirit) controlling an area and preventing evangelism, so that the demon can be defeated through spiritual warfare prayer and rituals. Both posit that an invisible spirit world is active and that it can be interacted with or controlled, with the Christian belief that such power to control the spirit world comes from God rather than being inherent to objects or places. "The animist believes that rituals and objects contain spiritual power, whereas a Christian believes that rituals and objects may convey power. Animists seek to manipulate power, whereas Christians seek to submit to God and to learn to work with his power."
With rising awareness of ecological preservation, theologians such as Mark I. Wallace have recently argued for an animistic Christianity with a biocentric approach that understands God as being present in all earthly objects, such as animals, trees, and rocks.
Pre-Islamic Arab religion
Pre-Islamic Arab religion can refer to the traditional polytheistic, animist, and in some rare cases, shamanistic, religions of the peoples of the Arabian Peninsula. The belief in jinn, invisible entities akin to spirits in the Western sense and dominant in Arab religious systems, hardly fits the description of animism in a strict sense. The jinn are considered analogous to the human soul in that they live lives like those of humans, but they are neither exactly like human souls nor spirits of the dead. It is unclear whether belief in jinn derived from nomadic or sedentary populations.
New religious movements
Some modern pagan groups, including Eco-pagans, describe themselves as animists, meaning that they respect the diverse community of living beings and spirits with whom humans share the world and cosmos.
The New Age movement commonly demonstrates animistic traits in asserting the existence of nature spirits.
Shamanism
A shaman is a person regarded as having access to, and influence in, the world of benevolent and malevolent spirits, who typically enters into a trance state during a ritual, and practices divination and healing.
According to Mircea Eliade, shamanism encompasses the premise that shamans are intermediaries or messengers between the human world and the spirit worlds. Shamans are said to treat ailments and illnesses by mending the soul. Alleviating traumas affecting the soul or spirit restores the physical body of the individual to balance and wholeness. The shaman also enters supernatural realms or dimensions to obtain solutions to problems afflicting the community. Shamans may visit other worlds or dimensions to bring guidance to misguided souls and to ameliorate illnesses of the human soul caused by foreign elements. The shaman operates primarily within the spiritual world, which in turn affects the human world. The restoration of balance results in the elimination of the ailment.
Abram, however, articulates a less supernatural and much more ecological understanding of the shaman's role than that propounded by Eliade. Drawing upon his own field research in Indonesia, Nepal, and the Americas, Abram suggests that in animistic cultures, the shaman functions primarily as an intermediary between the human community and the more-than-human community of active agencies—the local animals, plants, and landforms (mountains, rivers, forests, winds, and weather patterns, all of which are felt to have their own specific sentience). Hence, the shaman's ability to heal individual instances of disease (or imbalance) within the human community is a byproduct of their more continual practice of balancing the reciprocity between the human community and the wider collective of animate beings in which that community is embedded.
Animist life
Non-human animals
Animism entails the belief that all living things have a soul, and thus, a central concern of animist thought surrounds how animals can be eaten, or otherwise used for humans' subsistence needs. The actions of non-human animals are viewed as "intentional, planned and purposive", and they are understood to be persons, as they are both alive, and communicate with others.
In animist worldviews, non-human animals are understood to participate in kinship systems and ceremonies with humans, as well as having their own kinship systems and ceremonies. Graham Harvey cited an example of an animist understanding of animal behavior that occurred at a powwow held by the Conne River Mi'kmaq in 1996; an eagle flew over the proceedings, circling over the central drum group. The assembled participants called out the word for 'eagle', conveying welcome to the bird and expressing pleasure at its beauty, and they later articulated the view that the eagle's actions reflected its approval of the event, and the Mi'kmaq's return to traditional spiritual practices.
In animism, rituals are performed to maintain relationships between humans and spirits. Indigenous peoples often perform these rituals to appease the spirits and request their assistance during activities such as hunting and healing. In the Arctic region, certain rituals are common before the hunt as a means to show respect for the spirits of animals.
Flora
Some animists also view plant and fungi life as persons and interact with them accordingly. The most common encounter between humans and these plant and fungi persons is the collection of the latter for food, and for animists, this interaction typically has to be carried out respectfully. Harvey cited the example of Māori communities in New Zealand, who often offer karakia invocations to sweet potatoes as they dig them up. While doing so, there is an awareness of a kinship relationship between the Māori and the sweet potatoes, with both understood as having arrived in Aotearoa together in the same canoes.
In other instances, animists believe that interaction with plant and fungi persons can result in the communication of things unknown or even otherwise unknowable. Among some modern Pagans, for instance, relationships are cultivated with specific trees, who are understood to bestow knowledge or physical gifts, such as flowers, sap, or wood that can be used as firewood or to fashion into a wand; in return, these Pagans give offerings to the tree itself, which can come in the form of libations of mead or ale, a drop of blood from a finger, or a strand of wool.
The elements
Various animistic cultures also comprehend stones as persons. Discussing ethnographic work conducted among the Ojibwe, Harvey noted that their society generally conceived of stones as being inanimate, but with two notable exceptions: the stones of the Bell Rocks and those stones which are situated beneath trees struck by lightning, which were understood to have become Thunderers themselves. The Ojibwe conceived of weather as being capable of having personhood, with storms being conceived of as persons known as 'Thunderers' whose sounds conveyed communications and who engaged in seasonal conflict over the lakes and forests, throwing lightning at lake monsters. Wind, similarly, can be conceived as a person in animistic thought.
The importance of place is also a recurring element of animism, with some places being understood to be persons in their own right.
Spirits
Animism can also entail relationships being established with non-corporeal spirit entities.
Other usage
Science
In the early 20th century, William McDougall defended a form of animism in his book Body and Mind: A History and Defence of Animism (1911).
Physicist Nick Herbert has argued for a "quantum animism" in which mind permeates the world at every level. Werner Krieglstein has written in a similar vein on quantum animism.
In Error and Loss: A Licence to Enchantment, Ashley Curtis (2018) has argued that the Cartesian idea of an experiencing subject facing off with an inert physical world is incoherent at its very foundation, and that this incoherence is consistent with rather than belied by Darwinism. Human reason (and its rigorous extension in the natural sciences) fits an evolutionary niche just as echolocation does for bats and infrared vision does for pit vipers, and is epistemologically on a par with, rather than superior to, such capabilities. The meaning or aliveness of the "objects" we encounter, rocks, trees, rivers, and other animals, thus depends for its validity not on a detached cognitive judgment, but purely on the quality of our experience. The animist experience, or the wolf's or raven's experience, thus becomes licensed as a worldview no less valid than the modern Western scientific one; indeed it is more valid, since it is not plagued by the incoherence that inevitably arises when "objective existence" is separated from "subjective experience."
Socio-political impact
Harvey opined that animism's views on personhood represented a radical challenge to the dominant perspectives of modernity, because it accords "intelligence, rationality, consciousness, volition, agency, intentionality, language, and desire" to non-humans. Similarly, it challenges the view of human uniqueness that is prevalent in both Abrahamic religions and Western rationalism.
Art and literature
Animist beliefs can also be expressed through artwork. For instance, among the Māori communities of New Zealand, there is an acknowledgement that creating art through carving wood or stone entails violence against the wood or stone person and that the persons who are damaged therefore have to be placated and respected during the process; any excess or waste from the creation of the artwork is returned to the land, while the artwork itself is treated with particular respect. Harvey, therefore, argued that the creation of art among the Māori was not about creating an inanimate object for display, but rather a transformation of different persons within a relationship.
Harvey expressed the view that animist worldviews were present in various works of literature, citing such examples as the writings of Alan Garner, Leslie Silko, Barbara Kingsolver, Alice Walker, Daniel Quinn, Linda Hogan, David Abram, Patricia Grace, Chinua Achebe, Ursula Le Guin, Louise Erdrich, and Marge Piercy.
Animist worldviews have also been identified in the animated films of Hayao Miyazaki.
See also
Anecdotal cognitivism
Animatism
Anima mundi
Dayawism
Ecotheology
Hylozoism
Mana
Mauri (life force)
Kaitiaki
Panpsychism
Religion and environmentalism
Sacred trees
Shamanism
Wildlife totemization
Notes
References
Sources
Further reading
Hallowell, Alfred Irving. 1960. "Ojibwa ontology, behavior, and world view." In Culture in History, edited by S. Diamond. (New York: Columbia University Press).
Reprint: 2002. Pp. 17–49 in Readings in Indigenous Religions, edited by G. Harvey. London: Continuum.
Ingold, Tim. 2006. "Rethinking the animate, re-animating thought." Ethnos 71(1):9–20.
Käser, Lothar. 2004. Animismus. Eine Einführung in die begrifflichen Grundlagen des Welt- und Menschenbildes traditionaler (ethnischer) Gesellschaften für Entwicklungshelfer und kirchliche Mitarbeiter in Übersee. Bad Liebenzell: Liebenzeller Mission. Also published with the shortened subtitle Einführung in seine begrifflichen Grundlagen by Erlanger Verlag für Mission und Ökumene, Neuendettelsau, 2004.
Quinn, Daniel. [1996] 1997. The Story of B: An Adventure of the Mind and Spirit. New York: Bantam Books, and the essay "Our Religions: Are They the Religions of Humanity Itself?", usually available at Ishmael.org
Wundt, Wilhelm. 1906. Mythus und Religion, Teil II (Völkerpsychologie II). Leipzig.
External links
Anthropology of religion
Metaphysical theories
Panentheism
Philosophy of religion
Polytheism
Schools of thought
Spirituality
Transtheism | 0.764007 | 0.999608 | 0.763707 |
Sexual diversity | Sexual diversity or gender and sexual diversity (GSD), refers to all the diversities of sex characteristics, sexual orientations and gender identities, without the need to specify each of the identities, behaviors, or characteristics that form this plurality.
Overview
In the Western world, simple classifications are generally used to describe sexual orientation (heterosexual, homosexual, and bisexual), gender identity (transgender and cisgender), and related minorities (intersex), gathered under the acronyms LGBTQ or LGBTQIA+ (lesbian, gay, bisexual, transgender/transsexual, queer, intersex, and asexual people); however, other cultures have other ways of understanding sex and gender systems. Over the last few decades, sexology theories have emerged, such as Kinsey's research and queer theory, proposing that these classifications are not enough to describe the sexual complexity of human beings, or even of other animal species.
For example, some people may feel an intermediate sexual orientation between heterosexual and bisexual (heteroflexible) or between homosexual and bisexual (homoflexible). It may vary over time, too (sexual fluidity), or include attraction not only towards women and men, but to all the spectrum of sexes and genders (pansexual). In other words, within bisexuality there exists a huge diversity of typologies and preferences that vary from an exclusive heterosexuality to a complete homosexuality (Kinsey scale).
Sexual diversity includes intersex people, those born with a variety of intermediate features between women and men. It also includes transgender and transsexed people, genderfluid people, and so on.
Lastly, sexual diversity also includes asexual people, who feel disinterest in sexual activity; and all those who consider that their identity cannot be defined, such as queer people.
Socially, sexual diversity is claimed as the acceptance of being different but with equal rights, liberties, and opportunities within the Human Rights framework. In many countries, visibility of sexual diversity is vindicated during Pride Parades.
See also
References
LGBTQ | 0.777285 | 0.982523 | 0.7637 |
Sustainable Development Goal 2 | Sustainable Development Goal 2 (SDG 2 or Global Goal 2) aims to achieve "zero hunger". It is one of the 17 Sustainable Development Goals established by the United Nations in 2015. The official wording is: "End hunger, achieve food security and improved nutrition and promote sustainable agriculture". SDG 2 highlights the "complex inter-linkages between food security, nutrition, rural transformation and sustainable agriculture". According to the United Nations, there were up to 757 million people facing hunger in 2023 – one out of 11 people in the world, which accounts for slightly less than 10 percent of the world population. One in every nine people goes to bed hungry each night, including 20 million people currently at risk of famine in South Sudan, Somalia, Yemen and Nigeria.
SDG 2 has eight targets and 14 indicators to measure progress. The five outcome targets are: ending hunger and improving access to food; ending all forms of malnutrition; agricultural productivity; sustainable food production systems and resilient agricultural practices; and genetic diversity of seeds, cultivated plants and farmed and domesticated animals. The three means of implementation targets are: investments, research and technology; addressing trade restrictions and distortions in world agricultural markets; and food commodity markets and their derivatives.
After falling for decades, under-nutrition rose after 2015, with causes including various stresses in food systems such as climate shocks, the locust crisis and the COVID-19 pandemic. Those threats indirectly reduced purchasing power and the capacity to produce and distribute food, which affected the most vulnerable populations and further reduced their access to food.
While the world was witnessing a gradual decline in under-nutrition in 2023, the double burden of malnutrition – defined as the co-existence of undernutrition together with overweight and obesity – has been on the rise over the last two decades, characterized by a sharp increase in obesity rates and with only a gradual decline in thinness and underweight. Underweight among adults and the elderly has been cut in half while obesity is on the rise in all age groups.
The world is not on track to achieve Zero Hunger by 2030. "The signs of increasing hunger and food insecurity are a warning that there is considerable work to be done to make sure the world "leaves no one behind" on the road towards a world with zero hunger." It is unlikely there will be an end to malnutrition in Africa by 2030.
Data from 2019 showed that "globally, 1 in 9 people are undernourished, the vast majority of whom live in developing countries. Undernutrition causes wasting or severe wasting of 52 million children worldwide".
Background
In September 2015, the General Assembly adopted the 2030 Agenda for Sustainable Development that included 17 Sustainable Development Goals (SDGs). Building on the principle of "leaving no one behind", the new Agenda emphasizes a holistic approach to achieving sustainable development for all. In September 2019, Heads of State and Government came together during the SDG Summit to renew their commitment to implement the 2030 Agenda for Sustainable Development. During this event, they acknowledged some progress had been made, but that overall, "the world is not on track to deliver the SDGs". This is when "the decade of action" and "delivery for sustainable development" was launched, demanding stakeholders to speed up the process and efforts of implementation.
SDG 2 aims to end all forms of malnutrition and hunger by 2030 and ensure that everyone has sufficient food throughout the year, especially children. Chronic malnutrition, which affects an estimated 155 million children worldwide, also stunts children's brain and physical development and puts them at further risk of death, disease, and lack of success as adults. Hungry people are less productive and easily prone to diseases. As such, they will be unable to improve their livelihood.
Innovations in agriculture are meant to ensure increase in food production and subsequent decrease in food loss and food waste.
A report by the International Food Policy Research Institute (IFPRI) of 2013 stated that the emphasis of the SDGs should not be on ending poverty by 2030, but on eliminating hunger and under-nutrition by 2025. The assertion is based on an analysis of experiences in China, Vietnam, Brazil, and Thailand. Three pathways to achieve this were identified: 1) agriculture-led; 2) social protection- and nutrition- intervention-led; or 3) a combination of both of these approaches.
Targets, indicators and progress
The UN has defined eight targets and 14 indicators for SDG 2. Four of the targets are to be achieved by the year 2030, one by the year 2020, and three have no target years. Each of the targets also has one or more indicators to measure progress; in total there are 14 indicators for SDG 2. The eight targets are: ending hunger and increasing access to food (2.1), ending all forms of malnutrition (2.2), agricultural productivity (2.3), sustainable food production systems and resilient agricultural practices (2.4), genetic diversity of seeds, cultivated plants and farmed and domesticated animals (2.5), investments, research and technology (2.a), trade restrictions and distortions in world agricultural markets (2.b), and food commodity markets and their derivatives (2.c).
Target 2.1: Universal access to safe and nutritious food
The first target of SDG 2 is Target 2.1: "By 2030 end hunger and ensure access by all people, in particular the poor and people in vulnerable situations including infants, to safe, nutritious and sufficient food all year round".
It has two indicators:
Indicator 2.1.1: Prevalence of undernourishment.
Indicator 2.1.2: Prevalence of moderate or severe food insecurity in the population, based on the Food Insecurity Experience Scale (FIES).
Food insecurity is defined by the UN FAO as the "situation when people lack secure access to sufficient amounts of safe and nutritious food for normal growth and development and an active and healthy life." The UN's FAO uses the prevalence of undernourishment as the main hunger indicator.
Target 2.2: End all forms of malnutrition
The full title of Target 2.2 is: "By 2030 end all forms of malnutrition, including achieving by 2025 the internationally agreed targets on stunting and wasting in children under five years of age, and address the nutritional needs of adolescent girls, pregnant and lactating women, and older persons."
It has two indicators:
Indicator 2.2.1: Prevalence of stunting (height for age <-2 standard deviations from the median of the World Health Organization (WHO) Child Growth Standards) among children under 5 years of age.
Indicator 2.2.2: Prevalence of malnutrition (weight for height >+2 or <-2 standard deviations from the median of the WHO Child Growth Standards).
A child is defined as "stunted" if their height-for-age is more than two standard deviations below the median of the WHO Child Growth Standards. A child is defined as "wasted" if their weight-for-height is more than two standard deviations below the median of the WHO Child Growth Standards. A child is defined as "overweight" if their weight-for-height is more than two standard deviations above the median of the WHO Child Growth Standards.
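These cut-offs can be expressed as simple rules over z-scores, i.e. the number of standard deviations a child's measurement lies from the WHO reference median. The sketch below is a minimal illustrative example in Python, not an official WHO tool; the function name and inputs are hypothetical and simply restate the definitions above.

```python
def classify_growth(haz: float, whz: float) -> dict:
    """Classify a child's nutritional status from WHO growth-standard z-scores.

    haz: height-for-age z-score (standard deviations from the WHO median)
    whz: weight-for-height z-score (standard deviations from the WHO median)
    """
    return {
        "stunted": haz < -2,     # height-for-age more than 2 SD below the median
        "wasted": whz < -2,      # weight-for-height more than 2 SD below the median
        "overweight": whz > 2,   # weight-for-height more than 2 SD above the median
    }

# Example: a child 2.4 SD below the height-for-age median and 0.5 SD above
# the weight-for-height median would be counted as stunted only.
print(classify_growth(haz=-2.4, whz=0.5))
```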
Stunting is an indicator of severe malnutrition. The impacts of stunting on child development are considered to be irreversible beyond the first 1000 days of a child's life. Stunting can have severe impacts on both cognitive and physical development throughout a person's life.
The 2017 High-level Political Forum on Sustainable Development (HLPF) Thematic review of SDG 2 reviewed progress made and predicted that there will be 130 million stunted children by 2025. Currently, there are: "59 million children that are stunted in Africa, 87 million in Asia, 6 million in Latin America, and the remaining 3 million in Oceania and developed countries."
More people are experiencing overweight and obesity problems in low- and middle-income countries.
Target 2.3: Double the productivity and incomes of small-scale food producers
The full title for Target 2.3: "By 2030 double the agricultural productivity and the incomes of small-scale food producers, particularly women, indigenous peoples, family farmers, pastoralists and fishers, including through secure and equal access to land, other productive resources and inputs, knowledge, financial services, markets, and opportunities for value addition and non-farm employment".
It has two indicators:
Indicator 2.3.1: The volume of production per labour unit by classes of farming/pastoral/forestry enterprise size.
Indicator 2.3.2: Average income of small-scale food producers, by sex and indigenous status.
Small-scale producers systematically have lower production than larger food producers. In most countries, small-scale food producers earn less than half the incomes of larger food producers. It is too early to determine the progress made on this target. According to the statistics division of the Department of Economic and Social Affairs at the UN, the share of small-scale producers among all food producers in Africa, Asia and Latin America ranges from 40% to 85%.
This target connects to Sustainable Development Goal 5 (Gender Equality). According to National Geographic, the pay gap between men and women in agriculture averages 20–30%. When the incomes of small-scale food producers are not affected by the farmer's gender or origin, farmers can become more financially stable, which in turn allows them to invest in raising food productivity. Closing the gender gap could feed 130 million of the 870 million undernourished people in the world. Gender equality in agriculture is therefore essential to achieving zero hunger.
Target 2.4: Sustainable food production and resilient agricultural practices
The full title for Target 2.4: "By 2030 ensure sustainable food production systems and implement resilient agricultural practices that increase productivity and production, that help maintain ecosystems, that strengthen capacity for adaptation to climate change, extreme weather, drought, flooding and other disasters, and that progressively improve land and soil quality".
This target has one indicator:
Indicator 2.4.1: Proportion of agricultural area under productive and sustainable agriculture.
Target 2.5: Maintain the genetic diversity in food production
The full title for Target 2.5: "By 2020 maintain genetic diversity of seeds, cultivated plants, farmed and domesticated animals and their related wild species, including through soundly managed and diversified seed and plant banks at national, regional and international levels, and ensure access to and fair and equitable sharing of benefits arising from the utilization of genetic resources and associated traditional knowledge as internationally agreed."
It has two indicators:
Indicator 2.5.1: Number of plant and animal genetic resources for food and agriculture secured in either medium or long-term conservation facilities.
Indicator 2.5.2: Proportion of local breeds classified as being at risk, not-at-risk or at the unknown level of risk of extinction.
The FAO's Gene Bank Standards for Plant Genetic Resources set the benchmark for scientific and technical best practices.
This target is set for the year 2020, unlike most SDGs which have a target date of 2030.
Target 2.a: Invest in rural infrastructure, agricultural research, technology and gene banks
The full title for Target 2.a: "increase investment, including through enhanced international cooperation in rural infrastructure, agricultural research and extension services, technology development, and plant and livestock gene banks to enhance agricultural productive capacity in developing countries, in particular in the least developed countries".
It has two indicators:
Indicator 2.a.1: Agriculture orientation index for government expenditures.
Indicator 2.a.2: Total official flows (official development assistance plus other official flows) to the agriculture sector.
The "Agriculture Orientation Index" (AOI) for Government Expenditures compares the central government contribution to agriculture with the sector's contribution to GDP. An AOI larger than 1 means the agriculture section receives a higher share of government spending relative to its economic value. An AOI smaller than 1 reflects a lower orientation to agriculture.
Target 2.b.: Prevent agricultural trade restrictions, market distortions and export subsidies
The full title for Target 2.b: "Correct and prevent trade restrictions and distortions in world agricultural markets, including the parallel elimination of all forms of agricultural export subsidies and all export measures with equivalent effect, in accordance with the mandate of the Doha Development Round".
Target 2.b. has two indicators:
Indicator 2.b.1: Producer Support Estimate. The Producer Support Estimate (PSE) is "an indicator of the annual monetary value of gross transfers from consumers and taxpayers to support agricultural producers, measured at the farm gate level, arising from policy measures, regardless of their nature, objectives or impacts on farm production or income."
Indicator 2.b.2: Agricultural export subsidies. Export subsidies "increase the share of the exporter in the world market at the cost of others, tend to depress world market prices and may make them more unstable, because decisions on export subsidy levels can be changed unpredictably."
In 2015, the World Trade Organization decided to terminate the export subsidy for agricultural commodities. This includes "export credit, export credit guarantees, or insurance programs for agricultural products". The Doha Round is the latest round of trade negotiations among the WTO membership. It aims to reach major reforms of the international trading system and introduce lower trade barriers and revised trade rules.
Target 2.c. Ensure stable food commodity markets and timely access to information
The full title for Target 2.c is: "adopt measures to ensure the proper functioning of food commodity markets and their derivatives, and facilitate timely access to market information, including on food reserves, in order to help limit extreme food price volatility".
This target has one indicator: Indicator 2.c.1, the indicator of food price anomalies.
Food price anomalies are measured using the domestic food price volatility index, which measures the variation in domestic food prices over time, computed as a weighted average of a basket of commodities based on consumer or market prices. High values indicate higher volatility in food prices. Extreme food price movements pose a threat to agricultural markets and to food security and livelihoods, especially for the most vulnerable people.
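To illustrate the idea behind such an index (this is only a rough sketch, not the FAO's official methodology), a weighted basket price and its period-to-period volatility could be computed as follows; the commodity weights and prices are invented for illustration.

```python
import statistics

def basket_index(prices: dict, weights: dict) -> float:
    """Weighted-average price of a commodity basket (weights sum to 1)."""
    return sum(prices[c] * weights[c] for c in weights)

def volatility(index_series: list) -> float:
    """Standard deviation of month-to-month percentage changes in the index."""
    changes = [(b - a) / a for a, b in zip(index_series, index_series[1:])]
    return statistics.pstdev(changes)

weights = {"cereals": 0.5, "oils": 0.3, "dairy": 0.2}
monthly_prices = [
    {"cereals": 100, "oils": 80, "dairy": 120},
    {"cereals": 108, "oils": 82, "dairy": 118},
    {"cereals": 97,  "oils": 90, "dairy": 125},
]
series = [basket_index(p, weights) for p in monthly_prices]
print(series, volatility(series))  # higher values signal more volatile prices
```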
The G20 Agricultural Market Information System (AMIS) offer regular updates on market prices.
Custodian agencies
Custodian agencies are in charge of monitoring the progress of the indicators:
For all Indicators under Targets 2.1, 2.3 and 2.5, and for Indicators 2.a.1 and 2.c.1: Food and Agriculture Organization of the United Nations (FAO)
Indicators 2.2.1 and 2.2.2 : United Nations Children's Fund (UNICEF), World Health Organization (WHO)
Indicator 2.2.3: World Health Organization (WHO)
Indicator 2.4.1: United Nations Environment Programme (UNEP) and Food and Agriculture Organization (FAO)
Indicator 2.a.2: Organisation for Economic Co-operation and Development (OECD)
Indicator 2.b.1: World Trade Organization (WTO)
Tools
The Global Hunger Index (GHI) is a tool designed to measure and track hunger at global, regional, and national levels.
The FAO Food Price Index (FFPI) is a measure of the monthly change in international prices of a basket of food commodities.
Monitoring progress
There has been major progress in the fight against hunger over the last 15 years; despite this, research shows that more than 790 million people worldwide still suffer from hunger. In 2017, during a side event at the High-Level Political Forum under the theme of "Accelerating progress towards achieving SDG 2: Lessons from national implementation", a series of recommendations and actions were discussed. Stakeholders like the French UN mission, Action Against Hunger, Save The Children and Global Citizen were steering the conversation. It is unlikely there will be an end to malnutrition on the African continent by 2030.
As of 2017, only 26 of 202 UN member countries were on track to meet the SDG target to eliminate undernourishment and malnourishment, while 20 percent have made no progress at all and nearly 70 percent have no or insufficient data to determine their progress.
To achieve progress towards SDG 2, the world needs to build political will and country ownership. It also needs to improve the narrative around nutrition to make sure that it is well understood by political leaders, and to address gender inequality, geographic inequality and absolute poverty. It also calls for concrete actions, including working at sub-national levels, increasing nutrition funding and targeting it at the first 1000 days of life, and going beyond actions that address only the immediate causes of malnutrition to look at the drivers of under-nutrition and at the food system as a whole.
2019 data for world hunger is shown in the WFP Hunger Map.
Challenges
The achievement of SDG 2 has been jeopardized by a number of factors, the most serious of which happened between 2019 and 2022; with the unprecedented 2019–2021 locust infestation in Eastern Africa, the 2020 global COVID-19 pandemic, and the 2022 Russian invasion of Ukraine. The Food and Agricultural Organization (FAO) has noted that trends in food insecurity, disruption in food supply, and income contribute to "increasing the risk of child malnutrition, as food insecurity affects diet quality, including the quality of children's and women's diets, and people's health in different ways". Climate change will likely cause severe disruption to all parts of the food supply chain, and the food system itself is a major driver of climate change.
Impact of the COVID-19 pandemic
In 2020, up to 142 million people suffered from undernourishment as a result of the COVID-19 pandemic. Child stunting and wasting statistics are likely to worsen with the pandemic. In addition, the COVID-19 pandemic "may add between 83 and 132 million people to the total number of undernourished in the world by the end of 2020 depending on the economic growth scenario".
The COVID-19 pandemic and associated lockdowns have placed a huge amount of pressure on agricultural production and disrupted global value and supply chains. Subsequently, this raises issues of malnutrition and inadequate food supply to households, with the poorest gravely affected. This caused more than 132 million people to suffer from undernourishment in 2020. According to recent research, there could be a 14% increase in the prevalence of moderate or severe wasting among children younger than five years due to the COVID-19 pandemic.
Criticism
According to a group of researchers at Wageningen University, the SDG 2 targets ignore the importance of value chains and food systems. They note that SDG 2 addresses micronutrient and macronutrient deficiencies, but not overconsumption or the consumption of foods high in salt, fat, and sugars, ignoring the health problems associated with such diets. It calls for sustainable agriculture without clarifying what sustainable agriculture entails exactly. The researchers argue that a substantial number of the indicators currently used for SDG monitoring were not specifically developed for the SDGs, so the information needed for SDG monitoring is not necessarily available and is not appropriate to reflect the interconnected nature of the SDGs. The lack of connected or coordinated action from food production to consumption at all levels hinders progress on SDG 2.
Links with other SDGs
The SDGs are deeply interconnected. All goals could be affected if progress on one specific goal is not achieved.
Climate change and natural disasters are affecting food security. Disaster risk management, climate change adaptation and mitigation are essential to increase harvest quality and quantity. Targets 2.4 and 2.5 are directly linked to the environment.
Organizations and programmes
Organizations, programmes and funds that have been set up to tackle hunger and malnutrition include:
United Nations Children's Fund (UNICEF)
Food and Agricultural Organization (FAO)
World Food Programme (WFP)
International Fund for Agricultural Development (IFAD)
World Bank
United Nations Environment Programme (UNEP)
International NGOs include:
Action Against Hunger (or Action Contre La Faim (ACF) in French)
Feeding America
The Hunger Project (THP)
Sources
References
External links
UN Sustainable Development Knowledge Platform – SDG 2
“Global Goals” Campaign - SDG 2
SDG-Track.org - SDG 2
UN SDG 2 in the US
Sustainable Development Goals
2015 establishments in New York City
Projects established in 2015
Hunger relief
Food security
Sustainable agriculture | 0.768796 | 0.993369 | 0.763698 |
Problematization | Problematization is a process of stripping away common or conventional understandings of a subject matter in order to gain new insights. This method can be applied to a term, writing, opinion, ideology, identity, or person. Practitioners consider the concrete or existential elements of these subjects. Analyzed as challenges (problems), practitioners may seek to transform the situations under study. It is a method of defamiliarization of common sense.
Problematization is a critical thinking and pedagogical dialogue or process and may be considered demythicisation. Rather than taking the common knowledge (myth) of a situation for granted, problematization poses that knowledge as a problem, allowing new viewpoints, consciousness, reflection, hope, and action to emerge.
What may make problematization different from other forms of criticism is its target, the context and details, rather than the pro or con of an argument. More importantly, this criticism does not take place within the original context or argument, but draws back from it, re-evaluates it, leading to action which changes the situation. Rather than accepting the situation, one emerges from it, abandoning a focalised viewpoint.
To problematize a statement, for example, one asks simple questions:
Who is making this statement?
For whom is it intended?
Why is this statement being made here, now?
Whom does this statement benefit?
Whom does it harm?
Problematization (Foucault)
For Michel Foucault, problematization serves as the overarching concept of his work in "History of Madness".
He treats it both as an object of inquiry and a specific form of critical analysis. As an object of inquiry, problematization is described as a process of objects becoming problems by being “characterized, analyzed, and treated” as such.
As a form of analysis, problematization seeks to answer the questions of “how and why certain things (behavior; phenomena, processes) became a problem”. Foucault does not clearly distinguish problematization as an object of inquiry from problematization as a way of inquiry. Problematization as a specific form of critical analysis is a form of “re-problematization”.
History of Thought
Problematization is the core of his “history of thought” which stands in sharp contrast to "history of ideas" ("the analysis of attitudes and types of action") as well as "history of mentalities" ("the analysis of systems of representation"). The history of thought refers to an inquiry of what it is, in a given society and epoch, “what allows one to take a step back from his way of acting or reacting, to present it to oneself as an object of thought and question it as to its meaning, its conditions and its goals”. Therefore, thought is described as a form of self-detachment from one's own action that allows “to present it to oneself as an object of thought [and] to question it as to its meaning, its conditions, and its goals". Thought is the reflection of one's own action “as a problem”. According to Foucault, the notions of thought and problematization are closely linked: to problematize is to engage in “work of thought”. Crucially, then, Foucault implies that our way of reflecting upon ourselves as individuals, as political bodies, as scientific disciplines or other, has a history and, consequently, imposes specific (rather than universal or a priori) structures upon thought.
Responses To Problems
A central element in the problematization analysis is responses to problems. The analysis of a specific problematization is “the history of an answer (…) to a certain situation”. However, Foucault stresses that "most of the time different responses [...] are proposed". His analytical interest focuses on finding, at the root of those diverse and possibly contrasting answers, the conditions of possibility of their simultaneous appearance, i.e. “the general form of problematization”. This sets Foucauldian problematization apart from many other approaches in that it invites researchers to view opposing scientific theories or political views, and indeed contradictory enunciations in general, as responses to the same problematization rather than as the manifestations of mutually excluding discourses. It is this level of problematizations and discourses that Foucault refers to when establishing that his “history of thought” seeks to answer the question of "how [...] a particular body of knowledge [is] able to be constituted?".
Engaging in Problematization
Engaging in problematization entails questioning beliefs held to be true by society. Ultimately, this intellectual practice is “to participate in the formation of a political will”. It also carves out elements that “pose problems for politics”. At the same time, it also requires self-reflection on behalf of the intellectual, since problematization is to investigate into the ontological question of the present and to determine a distinguishing “element of the present". This element is decisive for the “process that concerns thought, knowledge, and philosophy” in which the intellectual is part of as “element and actor". By questioning the present, or “contemporaneity”, “as an event”, the analyst constitutes the event's “meaning, value, philosophical particularity” but relies at the same time on it, for he/she “find[s] both [his/her] own raison d’être and the grounds for what [he/she] says” in the event itself.
Actor-Network Theory
The term also had a different meaning when used in association with actor–network theory (ANT), and especially the "sociology of translation" to describe the initial phase of a translation process and the creation of a network. According to Michel Callon, problematization involves two elements:
Interdefinition of actors in the network
Definition of the problem/topic/action program, referred to as an obligatory passage point (OPP)
Criticism
In Literary Criticism: An Autopsy, Mark Bauerlein writes: The act of problematizing has obvious rhetorical uses. It sounds rigorous and powerful as a weapon in the fight against lax and dishonest inquiry. Also, for trained critics, problematizing x is one of the easiest interpretative gestures to make. In the most basic instance, all one has to do is add quotation marks to x, to say "Walden is a 'classic'" instead of "Walden is a classic." The scarequotes cause a hesitation over the term and imply a set of other problematizing questions: what is a "classic"? what does it presuppose? in what contexts is it used? what does it do? what educational and political purposes does it serve? Instead of being a familiar predicate in scholarship, one readers casually assimilate without much notice, "classic" now stands out from the flow of discourse. The questions hover around its use and, until they are resolved, the use of "classic" is impaired. Usually, such questions yield ready answers, but their readiness does not cut into the apparent savviness of the critics asking them. This is another advantage of the term "problematize": it is a simple procedure, but it sounds like an incisive investigative pursuit.
References
External links
Postmodern theory | 0.782257 | 0.976267 | 0.763692 |
Human enhancement | Human enhancement is the natural, artificial, or technological alteration of the human body in order to enhance physical or mental capabilities.
Technologies
Existing technologies
Three forms of human enhancement currently exist: reproductive, physical, and mental. Reproductive enhancements include embryo selection by preimplantation genetic diagnosis, cytoplasmic transfer, and in vitro-generated gametes. Physical enhancements include cosmetic (plastic surgery and orthodontics), drug-induced (doping and performance-enhancing drugs), functional (prosthetics and powered exoskeletons), medical (implants, e.g. pacemakers, and organ replacements, e.g. bionic lenses), and strength training (weights, e.g. barbells, and dietary supplements). Examples of mental enhancements are nootropics, neurostimulation, and supplements that improve mental functions.
Computers, mobile phones, and Internet can also be used to enhance cognitive efficiency. Notable efforts in human augmentation are driven by the interconnected Internet of Things (IoT) devices, including wearable electronics (e.g., augmented reality glasses, smart watches, smart textile), personal drones, on-body and in-body nanonetworks.
Emerging technologies
Many different forms of human enhancing technologies are either on the way or are currently being tested and trialed. A few of these emerging technologies include: human genetic engineering (gene therapy), neurotechnology (neural implants and brain–computer interfaces), cyberware, strategies for engineered negligible senescence, nanomedicine, and 3D bioprinting. Variants of human genetic engineering with so far limited usage include the artificial creation of human-animal hybrids (where each cell has partly human and partly animal genetic contents) and human-animal chimeras (where some cells are human and some cells are animal in origin).
Speculative technologies
Some other human enhancement technologies are still speculative, such as: mind uploading, exocortex, and endogenous artificial nutrition. Mind uploading is the hypothetical process of "transferring"/"uploading" or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The exocortex can be defined as a theoretical artificial external information processing system that would augment a brain's biological high-level cognitive processes. Endogenous artificial nutrition can be similar to having a radioisotope generator that resynthesizes glucose (similarly to photosynthesis), amino acids and vitamins from their degradation products, theoretically availing for weeks without food if necessary.
Nick Bostrom listed some additional capabilities that are expected to be physically possible in theory, given a sufficient technological level, such as:
Reversal of aging
Cures for all diseases
Arbitrary sensory inputs (e.g. generating subjective experience of taste without eating anything)
Precise control of personality, mood, motivation, well-being
Nootropics
There are many substances that are purported to have promise in augmenting human cognition by various means. These substances are called nootropics and can potentially benefit individuals with cognitive decline and many different disorders, but may also be capable of yielding results in cognitively healthy persons. Generally speaking, nootropics are said to be effective for enhancing focus, learning, memory function, mood, and in some cases, physical brain development. Some examples of these include Citicoline, Huperzine A, Phosphatidylserine, Bacopa monnieri, Acetyl-L-carnitine, Uridine monophosphate, L-theanine, Rhodiola rosea, and Pycnogenol which are all forms of dietary supplement. There are also nootropic drugs such as the common racetams, e.g. piracetam (Nootropil) and omberacetam (Noopept) along with the neuroprotective Semax, and N-Acetyl Semax. There are also nootropics related to naturally occurring substances but that are either modified in a lab or are analogs such as Vinpocetine and Sulbutiamine. Some authors have explored nootropics as relationship enhancements to help couples maintain bonds over time.
Ethics
Much debate surrounds the topic of human enhancement and the means used to achieve one's enhancement goals. Ethical attitudes toward human enhancement can depend on many factors such as religious affiliation, age, gender, ethnicity, culture of origin, and nationality.
In some circles the expression "human enhancement" is roughly synonymous with human genetic engineering, but most often it refers to the general application of the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) to improve human performance.
Since the 1990s, several academics (such as some of the fellows of the Institute for Ethics and Emerging Technologies) have risen to become advocates of the case for human enhancement while other academics (such as the members of President Bush's Council on Bioethics) have become outspoken critics.
Advocacy of the case for human enhancement is increasingly becoming synonymous with "transhumanism", a controversial ideology and movement which has emerged to support the recognition and protection of the right of citizens to either maintain or modify their own minds and bodies, so as to guarantee them the freedom of choice and informed consent in using human enhancement technologies on themselves and their children. Transhumanists commonly understand the world from a physical rather than a biological perspective. Drawing on the idea of a technological singularity, they see human enhancement merging with technological innovation in a way that will advance post-humanism.
Neuromarketing consultant Zack Lynch argues that neurotechnologies will have a more immediate effect on society than gene therapy and will face less resistance as a pathway of radical human enhancement. He also argues that the concept of "enablement" needs to be added to the debate over "therapy" versus "enhancement".
The prospect of human enhancement has sparked public controversy. The main ethical question in the debate about human enhancement involves which legal restrictions, if any, should exist.
Dale Carrico wrote that "human enhancement" is a loaded term which has eugenic overtones because it may imply the improvement of human hereditary traits to attain a universally accepted norm of biological fitness (at the possible expense of human biodiversity and neurodiversity), and therefore can evoke negative reactions far beyond the specific meaning of the term. Michael Selgelid terms this a phase of "neugenics", suggesting that gene enhancements occurring now have already revived the idea of eugenics in our society. Practices such as prenatal diagnosis, selective abortion and in-vitro fertilization aim to improve human life by allowing parents to decide, on the basis of genetic information, whether they want to continue or terminate a pregnancy.
A criticism of human enhancement is that it will create unfair physical or mental advantages, and that unequal access to such enhancements can and will widen the gulf between the "haves" and the "have-nots".
Futurist Ray Kurzweil has shown some concern that, within the century, humans may be required to merge with this technology in order to compete in the marketplace. Enhanced individuals have a better chance of being chosen for better opportunities in careers, entertainment and resources. For example, life-extending technologies can increase the average individual life span, affecting the distribution of pensions throughout society. Increasing lifespans will also affect the human population, further dividing limited resources such as food, energy, money and habitat. Other critics of human enhancement fear that such capabilities would change, for the worse, the dynamic relations within a family. Given the choice of superior qualities, parents effectively make their child rather than merely give birth to it, and the newborn becomes a product of their will rather than a gift of nature to be loved unconditionally.
Effects on identity
Human enhancement technologies can impact human identity by affecting one's self-conception. The argument does not necessarily come from the idea of improving the individual but rather from changing who they are and their becoming someone new. Altering an individual's identity affects their personal story, development and mental capabilities. The basis of this argument comes from two main points: the charge of inauthenticity and the charge of violating an individual's core characteristics. Gene therapy has the ability to alter one's mental capacity and, through this argument, the ability to affect their narrative identity. An individual's core characteristics may include internal psychological style, personality, general intelligence, the need for sleep, normal aging, gender, and being Homo sapiens. Technologies threaten to alter the self fundamentally, to the point where the result is essentially a different person entirely. For example, extreme changes in personality may affect the individual's relationships because others can no longer relate to the new person.
The capability approach focuses on a normative framework that can be applied to how human enhancement technologies affect human capabilities. The ethics of this does not necessarily focus on the makeup of the individual but rather on what it allows individuals to do in today's society. This approach was first developed by Amartya Sen, who focused mainly on the objectives of the approach rather than on the means for attaining those objectives, namely resources, technological processes, and economic arrangements. The central human capabilities include life, bodily health, bodily integrity, senses, emotions, practical reason, affiliation, other species, play, and control over one's environment. This normative framework recognizes that human capabilities are always changing and that technology has already played a part in this.
See also
References
Further reading
External links
Enhancement Technologies Group
Institute for Ethics and Emerging Technologies
Humanity+
RTÉ's Big Science Debate 2007
Human Enhancement Study (European Parliament STOA 2009)
Ethics + Emerging Sciences Group (Cal Poly, San Luis Obispo)
"Ethics of Human Enhancement: 25 Questions & Answers" (an NSF-funded report), August 31, 2009
NeoHumanitas: Thinking our Future. Think tank reflecting on enhancing technologies
The Case for Perfection: Ethics in the Age of Human Enhancement (PeterLang, 2016)
Future-Human.Life (NeoHumanitas, 2017)
Augmented Human International Conferences
Bioethics
Human evolution
Gene–environment interaction
Gene–environment interaction (or genotype–environment interaction or G×E) is when two different genotypes respond to environmental variation in different ways. A norm of reaction is a graph that shows the relationship between genes and environmental factors when phenotypic differences are continuous, and norms of reaction can help illustrate G×E interactions. When norms of reaction are not parallel, there is a gene-by-environment interaction: each genotype responds to environmental variation in a different way. Environmental variation can be physical, chemical, biological, a behavior pattern or a life event.
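A gene-by-environment interaction corresponds to non-additivity in a two-way genotype-by-environment model. The following minimal Python sketch, using purely hypothetical phenotype values, illustrates how non-parallel reaction norms show up as a nonzero interaction term after the main effects of genotype and environment are removed:

```python
import numpy as np

# Hypothetical mean phenotype for two genotypes (rows) in two environments (columns).
# These numbers are illustrative only, not from any study.
phenotype = np.array([
    [10.0, 14.0],   # genotype A: responds strongly to the environment
    [12.0, 12.5],   # genotype B: responds weakly
])

# Main effects in a simple additive two-way decomposition
grand_mean = phenotype.mean()
genotype_effect = phenotype.mean(axis=1) - grand_mean      # G main effect
environment_effect = phenotype.mean(axis=0) - grand_mean   # E main effect

# Residual after removing main effects = the G×E interaction term
additive_prediction = (grand_mean
                       + genotype_effect[:, None]
                       + environment_effect[None, :])
interaction = phenotype - additive_prediction

print("interaction term:\n", interaction)
# Non-zero entries mean the reaction norms are not parallel,
# i.e. the genotypes respond to the environment differently.
```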
Gene–environment interactions are studied to gain a better understanding of various phenomena. In genetic epidemiology, gene–environment interactions are useful for understanding some diseases. Sometimes, sensitivity to environmental risk factors for a disease are inherited rather than the disease itself being inherited. Individuals with different genotypes are affected differently by exposure to the same environmental factors, and thus gene–environment interactions can result in different disease phenotypes. For example, sunlight exposure has a stronger influence on skin cancer risk in fair-skinned humans than in individuals with darker skin.
These interactions are of particular interest to genetic epidemiologists for predicting disease rates and methods of prevention with respect to public health. The term is also used amongst developmental psychobiologists to better understand individual and evolutionary development.
Nature versus nurture debates assume that variation in a trait is primarily due to either genetic differences or environmental differences. However, the current scientific opinion holds that neither genetic differences nor environmental differences are solely responsible for producing phenotypic variation, and that virtually all traits are influenced by both genetic and environmental differences.
Statistical analysis of the genetic and environmental differences contributing to the phenotype would have to be used to confirm these as gene–environment interactions. In developmental genetics, a causal interaction is enough to confirm gene–environment interactions.
History of the definition
The history of defining gene–environment interaction dates back to the 1930s and remains a topic of debate today. The first instance of debate occurred between Ronald Fisher and Lancelot Hogben.
Fisher sought to eliminate interaction from statistical studies as it was a phenomenon that could be removed using a variation in scale. Hogben believed that the interaction should be investigated instead of eliminated as it provided information on the causation of certain elements of development.
A similar argument faced multiple scientists in the 1970s. Arthur Jensen published the study "How much can we boost IQ and scholastic achievement?", which, amongst much other criticism, was contested by the scientists Richard Lewontin and David Layzer. Lewontin and Layzer argued that, in order to conclude causal mechanisms, the gene–environment interaction could not be ignored in the context of the study, while Jensen defended the position that interaction was purely a statistical phenomenon and not related to development.
Around the same time, Kenneth J. Rothman supported the use of a statistical definition for interaction while researchers Kupper and Hogan believed the definition and existence of interaction was dependent on the model being used.
The most recent criticisms were spurred by Moffitt and Caspi's studies on 5-HTTLPR and stress and its influence on depression. In contrast to previous debates, Moffitt and Caspi were now using the statistical analysis to prove that interaction existed and could be used to uncover the mechanisms of a vulnerability trait. Contention came from Zammit, Owen and Lewis who reiterated the concerns of Fisher in that the statistical effect was not related to the developmental process and would not be replicable with a difference of scale.
Definitions
There are two different conceptions of gene–environment interaction today. Tabery has labeled them biometric and developmental interaction, while Sesardic uses the terms statistical and commonsense interaction.
The biometric (or statistical) conception has its origins in research programs that seek to measure the relative proportions of genetic and environmental contributions to phenotypic variation within populations. Biometric gene–environment interaction has particular currency in population genetics and behavioral genetics. Any interaction results in the breakdown of the additivity of the main effects of heredity and environment, but whether such interaction is present in particular settings is an empirical question. Biometric interaction is relevant in the context of research on individual differences rather than in the context of the development of a particular organism.
Developmental gene–environment interaction is a concept more commonly used by developmental geneticists and developmental psychobiologists. Developmental interaction is not seen merely as a statistical phenomenon. Whether statistical interaction is present or not, developmental interaction is in any case manifested in the causal interaction of genes and environments in producing an individual's phenotype.
Epidemiological models of GxE
In epidemiology, the following models can be used to group the different interactions between gene and environment.
Model A describes a genotype that increases the level of expression of a risk factor but does not cause the disease itself. For example, the PKU gene results in higher levels of phenylalanine than normal, which in turn causes mental retardation.
The risk factor in Model B, in contrast, has a direct effect on disease susceptibility which is amplified by the genetic susceptibility. Model C depicts the inverse, where the genetic susceptibility directly affects disease while the risk factor amplifies this effect. In each independent situation, the factor directly affecting the disease can cause disease by itself.
Model D differs in that neither factor in this situation can affect disease risk on its own; however, when both the genetic susceptibility and the risk factor are present, the risk is increased. For example, the G6PD deficiency gene, when combined with fava bean consumption, results in hemolytic anemia. This disease does not arise in individuals that eat fava beans and lack G6PD deficiency, nor in G6PD-deficient people who do not eat fava beans.
Lastly, Model E depicts a scenario where the environmental risk factor and genetic susceptibility can individually both influence disease risk. When combined, however, the effect on disease risk differs.
The models are limited by the fact that the variables are binary and so do not consider polygenic or continuous scale variable scenarios.
Methods of analysis
Traditional genetic designs
Adoption studies
Adoption studies have been used to investigate how similar adopted individuals are to their biological parents, with whom they did not share the same environment. Additionally, adopted individuals are compared to their adoptive family because of the difference in genes but shared environment. For example, an adoption study showed that Swedish men with disadvantaged adoptive environments and a genetic predisposition were more likely to abuse alcohol.
Twin studies
Using monozygotic twins, the effects of different environments on identical genotypes could be observed. Later studies leverage biometrical modelling techniques to include the comparisons of dizygotic twins to ultimately determine the different levels of gene expression in different environments.
Family studies
Family-based research focuses on the comparison of low-risk controls to high-risk children to determine the environmental effect on subjects with different levels of genetic risk. For example, a Danish study of high-risk children whose mothers had schizophrenia showed that children without a stable caregiver had an increased risk of schizophrenia.
Molecular analyses
Interaction with single genes
The most commonly used method to detect gene–environment interactions is to study the effect a single gene variation (candidate gene) has with respect to a particular environment. Single nucleotide polymorphisms (SNPs) are compared with single binary exposure factors to determine any effects.
Candidate studies such as these require strong biological hypotheses, which are currently difficult to select given the limited understanding of the biological mechanisms that lead to higher risk.
These studies are also often difficult to replicate, commonly because of small sample sizes, which typically leads to disputed results.
The polygenic nature of complex phenotypes suggests single candidate studies could be ineffective in determining the various smaller scale effects from the large number of influencing gene variants.
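As an illustration of the candidate-gene approach described above, a single SNP (coded 0/1/2 by risk-allele count) can be tested against a binary exposure with a product term in a logistic regression. The sketch below uses simulated data and the statsmodels library; all variable names and effect sizes are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

snp = rng.binomial(2, 0.3, n)          # risk-allele count (0, 1 or 2)
exposure = rng.binomial(1, 0.4, n)     # binary environmental exposure

# Simulate disease status with a true interaction effect (illustrative only)
logit = -2.0 + 0.2 * snp + 0.3 * exposure + 0.4 * snp * exposure
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([snp, exposure, snp * exposure]))
model = sm.Logit(disease, X).fit(disp=False)

# The coefficient on the product term estimates the G×E interaction
# on the multiplicative (odds-ratio) scale.
print(model.summary(xname=["const", "snp", "exposure", "snp_x_exposure"]))
```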
Interaction with multiple genes
Since the same environmental factor could interact with multiple genes, a polygenic approach can be taken to analyze GxE interactions. A polygenic score is generated from the alleles associated with a trait and their respective weights based on effect size, and is examined in combination with environmental exposure. Though this method of research is still young, it is consistent with findings in psychiatric disorders. Because endophenotypes overlap amongst disorders, this suggests that the outcomes of gene–environment interactions are applicable across various diagnoses.
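A sketch of the polygenic approach is shown below, under the assumption that per-allele weights are available from an independent discovery study; the score is simply a weighted sum of allele counts, which can then be entered into a regression together with an environmental exposure and their product (illustrative data and names only):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 1000, 200

# Allele counts (0/1/2) and hypothetical per-allele effect weights
# (in practice the weights come from an independent discovery GWAS).
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))
weights = rng.normal(0.0, 0.05, size=n_snps)

# Polygenic score = weighted sum of risk-allele counts, then standardized
prs = genotypes @ weights
prs = (prs - prs.mean()) / prs.std()

exposure = rng.binomial(1, 0.5, n_people)

# Design matrix for a PRS × environment interaction model
# (fit with any regression package, as in the single-SNP example above).
design = np.column_stack([np.ones(n_people), prs, exposure, prs * exposure])
print(design.shape)
```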
Genome-wide association studies and genome wide interaction studies
A genome-wide interaction scan (GEWIS) approach examines the interaction between the environment and a large number of independent SNPs. An effective approach to this all-encompassing study occurs in two steps, where the genome is first filtered using gene-level tests and pathway-based gene set analyses. The second step uses the SNPs with G–E association and tests for interaction.
The differential susceptibility hypothesis has been reaffirmed through genome wide approaches.
Controversies
Lack of replication
A particular concern with gene–environment interaction studies is the lack of reproducibility. Specifically complex traits studies have come under scrutiny for producing results that cannot be replicated. For example, studies of the 5-HTTLPR gene and stress resulting in modified risk of depression have had conflicting results.
A possible explanation behind the inconsistent results is the heavy use of multiple testing. Studies are suggested to produce inaccurate results due to the investigation of multiple phenotypes and environmental factors in individual experiments.
Additive vs multiplicative model
There are two different models for the scale of measurement that help determine whether gene–environment interaction exists in a statistical context, and there is disagreement about which scale should be used. Under these analyses, if the combined variables fit either model then there is no interaction; an interaction is present when the combined effect is greater than the model predicts (a synergistic outcome) or less than it predicts (an antagonistic outcome). The additive model measures risk differences while the multiplicative model uses ratios to measure effects. The additive model has been suggested to be a better fit for predicting disease risk in a population, while a multiplicative model is more appropriate for disease etiology.
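For a binary gene variant and a binary exposure, the two scales can be checked directly from the four cell-specific risks. The minimal sketch below uses illustrative numbers (not from any study) chosen so that the data show interaction on the additive scale but none on the multiplicative scale, demonstrating how the conclusion depends on the scale of measurement:

```python
# Risks in the four genotype-by-exposure cells (hypothetical values)
risk = {
    ("g0", "e0"): 0.01,   # neither factor
    ("g1", "e0"): 0.02,   # gene only
    ("g0", "e1"): 0.03,   # exposure only
    ("g1", "e1"): 0.06,   # both
}

p00, p10, p01, p11 = (risk[k] for k in
                      [("g0", "e0"), ("g1", "e0"), ("g0", "e1"), ("g1", "e1")])

# Additive scale: interaction if the joint risk departs from the sum
# of the separate risk increments (risk differences).
additive_departure = p11 - (p10 + p01 - p00)

# Multiplicative scale: interaction if the joint risk ratio departs from
# the product of the separate risk ratios.
rr10, rr01, rr11 = p10 / p00, p01 / p00, p11 / p00
multiplicative_departure = rr11 / (rr10 * rr01)

print(f"additive departure:       {additive_departure:+.3f} (0 means no interaction)")
print(f"multiplicative departure: {multiplicative_departure:.2f} (1 means no interaction)")
# With these numbers the additive departure is +0.02 (synergy on the risk-difference
# scale) while the multiplicative departure is exactly 1 (no interaction on the
# risk-ratio scale).
```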
Epigenetics is an example of an underlying mechanism of gene–environment effects, however, it does not conclude whether environment effects are additive, multiplicative or interactive.
Gene × environment × environment interactions
New studies have also revealed the interactive effect of multiple environmental factors. For example, a child with a poor-quality environment would be more sensitive to a poor environment as an adult, which ultimately leads to higher psychological distress scores. This depicts a three-way Gene × Environment × Environment interaction. The same study suggests taking a life-course approach to determining genetic sensitivity to environmental influences within the scope of mental illnesses.
Medical significance
Doctors are interested in knowing whether disease can be prevented by reducing exposure to environmental risks. Some people carry genetic factors that confer susceptibility or resistance to a certain disorder in a particular environment. The interaction between the genetic factors and environmental stimulus is what results in the disease phenotype. There may be significant public health benefits in using gene by environment interactions to prevent or cure disease.
An individual's response to a drug can result from various gene by environment interactions. Therefore, the clinical importance of pharmacogenetics and gene by environment interactions comes from the possibility that genomic, along with environmental information, will allow more accurate predictions of an individual's drug response. This would allow doctors to more precisely select a certain drug and dosage to achieve therapeutic response in a patient while minimizing side effects and adverse drug reactions. This information could also help to prevent the health care costs associated with adverse drug reactions and inconveniently prescribing drugs to patients who likely won't respond to them.
In a similar manner, an individual can respond to other environmental stimuli, factors or challenges differently according to specific genetic differences or alleles. These other factors include the diet and specific nutrients within the diet, physical activity, alcohol and tobacco use, sleep (bed time, duration), and any of a number of exposures (or exposome), including toxins, pollutants, sunlight (latitude north–south of the equator), among any number of others. The diet, for example, is modifiable and has significant impact on a host of cardiometabolic diseases, including cardiovascular disease, coronary artery disease, coronary heart disease, type 2 diabetes, hypertension, stroke, myocardial infarction, and non-alcoholic fatty liver disease. In the clinic, typically assessed risks of these conditions include blood lipids (triglyceride, and HDL, LDL and total cholesterol), glycemic traits (plasma glucose and insulin, HOMA-IR, beta cell function as HOMA-BC), obesity anthropometrics (BMI/obesity, adiposity, body weight, waist circumference, waist-to-hip ratio), vascular measures (diastolic and systolic blood pressure), and biomarkers of inflammation. Gene–environment interactions can modulate the adverse effects of an allele that confers increased risk of disease, or can exacerbate the genotype–phenotype relationship and increase risk, in a manner often referred to as nutrigenetics. A catalog of genetic variants that associate with these and related cardiometabolic phenotypes and modified by common environmental factors is available.
Conversely, a disease study using breast cancer, type 2 diabetes, and rheumatoid arthritis shows that including GxE interactions in a risk prediction model does not improve risk identification.
Examples
In Drosophila: A classic example of gene–environment interaction was demonstrated in Drosophila by Gupta and Lewontin in 1981. In their experiment they showed that the mean bristle number on Drosophila could vary with changing temperatures. Different genotypes reacted differently to the changing environment: plotted as reaction norms, each line represents a given genotype, and the slope of the line reflects the changing phenotype (bristle number) with changing temperature. Some individuals had an increase in bristle number with increasing temperature while others had a sharp decrease in bristle number with increasing temperature. This showed that the norms of reaction were not parallel for these flies, demonstrating that gene–environment interactions exist.
In plants: One notable application of genotype-by-environment interaction analysis is the selection of sugarcane cultivars adapted to different environments. In one study, twenty sugarcane genotypes grown in eight different locations over two crop cycles were analyzed to identify mega-environments related to higher cane yield, measured in tons of cane per hectare (TCH) and percentage of sucrose (Pol% cane), using biplot multivariate GEI models. The authors then created a novel strategy to study both yield variables in a two-way coupled analysis, even though the results showed a mean negative correlation. Through coinertia analysis, it was possible to determine the best-fitted genotypes for both yield variables in all environments. The use of novel strategies like coinertia in GEI proved to be a valuable complement to AMMI and GGE analyses, especially when yield improvement involves multiple yield variables. In another example, seven genetically distinct yarrow plants were collected and three cuttings taken from each plant. One cutting of each genotype was planted at low, medium, and high elevations, respectively. When the plants matured, no one genotype grew best at all altitudes, and at each altitude the seven genotypes fared differently. For example, one genotype grew the tallest at the medium elevation but attained only middling height at the other two elevations. The best growers at low and high elevation grew poorly at medium elevation. The medium altitude produced the worst overall results, but still yielded one tall and two medium-tall samples. Altitude had an effect on each genotype, but not to the same degree nor in the same way. Similarly, a sorghum bi-parental population was repeatedly grown in seven diverse geographic locations across years. One group of genotypes required a similar number of growing degree-days (GDD) to flower across all environments, while another group needed fewer GDD in certain environments but more GDD in others. The complex flowering-time patterns are attributed to the interaction of major flowering-time genes (Ma1, Ma6, FT, ELF3) and an explicit environmental factor, photothermal time (PTT), capturing the interaction between temperature and photoperiod.
Phenylketonuria (PKU) is a human genetic condition caused by mutations to a gene coding for a particular liver enzyme. In the absence of this enzyme, an amino acid known as phenylalanine does not get converted into the next amino acid in a biochemical pathway, and therefore too much phenylalanine passes into the blood and other tissues. This disturbs brain development, leading to mental retardation and other problems. PKU affects approximately 1 out of every 15,000 infants in the U.S. However, most affected infants do not grow up impaired because of a standard screening program used in the U.S. and other industrialized societies. Newborns found to have high levels of phenylalanine in their blood can be put on a special, phenylalanine-free diet. If they are put on this diet right away and stay on it, these children avoid the severe effects of PKU. This example shows that a change in environment (lowering phenylalanine consumption) can affect the phenotype of a particular trait, demonstrating a gene–environment interaction.
A single nucleotide polymorphism rs1800566 in NAD(P)H Quinone Dehydrogenase 1 (NQO1) alters the risk of asthma and general lung injury upon interaction with NOx pollutants, in individuals with this mutation.
A functional polymorphism in the monoamine oxidase A (MAOA) gene promoter can moderate the association between early life trauma and increased risk for violence and antisocial behavior. Low MAOA activity is a significant risk factor for aggressive and antisocial behavior in adults who report victimization as children. Persons who were abused as children but have a genotype conferring high levels of MAOA expression are less likely to develop symptoms of antisocial behavior. These findings must be interpreted with caution, however, because gene association studies on complex traits are notorious for being very difficult to confirm.
In Drosophila eggs: Contrary to the aforementioned examples, the length of egg development in Drosophila as a function of temperature demonstrates the lack of gene–environment interaction. The reaction norms for a variety of individual Drosophila flies are parallel, showing that there is no gene–environment interaction between the two variables. In other words, each genotype responds similarly to the changing environment, producing similar phenotypes. For all individual genotypes, average egg development time decreases with increasing temperature. The environment is influencing each of the genotypes in the same predictable manner.
See also
Biopsychosocial model
Diathesis–stress model
Differential susceptibility
Environmental sensitivity
Envirome
Epidemiology
Epigenetics
Evolutionary developmental psychology
Exposome
Gene–environment correlation
Genetic epidemiology
Genomics
Molecular epidemiology
Molecular pathological epidemiology
Molecular pathology
References
Psychological theories
Genetic epidemiology
Photosynthesis
Photosynthesis is a system of biological processes by which photosynthetic organisms, such as most plants, algae, and cyanobacteria, convert light energy, typically from sunlight, into the chemical energy necessary to fuel their metabolism.
Photosynthesis usually refers to oxygenic photosynthesis, a process that produces oxygen. Photosynthetic organisms store the chemical energy so produced within intracellular organic compounds (compounds containing carbon) like sugars, glycogen, cellulose and starches. To use this stored chemical energy, an organism's cells metabolize the organic compounds through cellular respiration. Photosynthesis plays a critical role in producing and maintaining the oxygen content of the Earth's atmosphere, and it supplies most of the biological energy necessary for complex life on Earth.
Some bacteria also perform anoxygenic photosynthesis, which uses bacteriochlorophyll to split hydrogen sulfide as a reductant instead of water, producing sulfur instead of oxygen. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, where the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and power proton pumps to directly synthesize adenosine triphosphate (ATP), the "energy currency" of cells. Such archaeal photosynthesis might have been the earliest form of photosynthesis that evolved on Earth, as far back as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis).
While the details may differ between species, the process always begins when light energy is absorbed by the reaction centers, proteins that contain photosynthetic pigments or chromophores. In plants, these proteins are chlorophylls (a porphyrin derivative that absorbs the red and blue spectrums of light, thus reflecting green) held inside chloroplasts, abundant in leaf cells. In bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two important molecules that participate in energetic processes: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ATP.
In plants, algae, and cyanobacteria, sugars are synthesized by a subsequent sequence of reactions called the Calvin cycle. In this process, atmospheric carbon dioxide is incorporated into already existing organic compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms like the reverse Krebs cycle are used to achieve the same end.
The first photosynthetic organisms probably evolved early in the evolutionary history of life using reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. The average rate of energy captured by global photosynthesis is approximately 130 terawatts, which is about eight times the total power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tonnes (91–104 petagrams, or billion metric tons) of carbon into biomass per year. Photosynthesis was discovered in 1779 by Jan Ingenhousz. He showed that plants need light, not just air, soil, and water.
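As a rough consistency check of these two figures, the captured power and the fixed carbon mass can be compared, assuming roughly 470 kJ of chemical energy is stored per mole of carbon fixed into carbohydrate (the heat of combustion of glucose divided by its six carbons); this is a back-of-the-envelope approximation, not a measured value:

```python
# Back-of-the-envelope check: does ~130 TW of captured energy correspond
# to roughly 100 Pg of carbon fixed per year?
captured_power_w = 130e12                 # 130 terawatts
seconds_per_year = 365.25 * 24 * 3600

energy_per_year_j = captured_power_w * seconds_per_year

# Assumed energy stored per mole of carbon fixed into carbohydrate
# (~2800 kJ/mol glucose divided by 6 carbons).
energy_per_mol_c = 470e3                  # J/mol
molar_mass_c = 12.0                       # g/mol

carbon_fixed_g = energy_per_year_j / energy_per_mol_c * molar_mass_c
print(f"implied carbon fixation: {carbon_fixed_g / 1e15:.0f} Pg per year")
# ~105 Pg per year, consistent with the quoted 91-104 Pg range.
```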
Photosynthesis is vital for climate processes, as it captures carbon dioxide from the air and binds it into plants, harvested produce and soil. Cereals alone are estimated to bind 3,825 Tg or 3.825 Pg of carbon dioxide every year, i.e. 3.825 billion metric tons.
Overview
Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon.
In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere.
Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen.
Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism.
Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments (cellular respiration in mitochondria).
The general equation for photosynthesis as first proposed by Cornelis van Niel is:
n CO2 + 2n H2A + photons → (CH2O)n + 2n A + n H2O
carbon dioxide + electron donor + light energy → carbohydrate + oxidized electron donor + water
Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is:
n CO2 + 2n H2O + photons → (CH2O)n + n O2 + n H2O
carbon dioxide + water + light energy → carbohydrate + oxygen + water
This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling n water molecules from each side gives the net equation:
n CO2 + n H2O + photons → (CH2O)n + n O2
carbon dioxide + water + light energy → carbohydrate + oxygen
Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example some microbes use sunlight to oxidize arsenite to arsenate. The equation for this reaction is:
CO2 + (AsO3)3− + photons → (AsO4)3− + CO (used to build other compounds in subsequent reactions)
Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation.
Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis.
Photosynthetic membranes and organelles
In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb.
In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system.
Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors.
These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex.
Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place.
Light-dependent reactions
In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen.
The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is:
2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2
Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms.
Z scheme
In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic.
In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram at right). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with an H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends.
The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction.
Water photolysis
Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms.
Light-independent reactions
Calvin cycle
In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is:
3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O
Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars photosynthesis produces are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain.
The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (five out of six molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch, and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids.
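A small bookkeeping sketch of the stoichiometry described above, covering three turns of the cycle per exported triose phosphate; the figures follow the standard Calvin cycle accounting rather than any measurement:

```python
# Carbon and cofactor bookkeeping for three turns of the Calvin cycle,
# i.e. one net triose phosphate (G3P) exported.
co2_fixed = 3                  # one CO2 per turn
rubp_consumed = 3              # ribulose 1,5-bisphosphate, 5 carbons each
g3p_produced = 6               # glyceraldehyde 3-phosphate, 3 carbons each
g3p_recycled = 5               # used to regenerate RuBP
g3p_exported = g3p_produced - g3p_recycled

# Carbon balance: input carbons must equal output carbons
carbons_in = co2_fixed * 1 + rubp_consumed * 5
carbons_out = g3p_produced * 3
assert carbons_in == carbons_out == 18
assert g3p_recycled * 3 == rubp_consumed * 5   # 15 carbons regenerate 3 RuBP

# Cofactor cost per exported G3P (supplied by the light-dependent reactions)
atp_used = 6 + 3        # 6 in the reduction phase, 3 in RuBP regeneration
nadph_used = 6
print(f"{g3p_exported} G3P exported per {co2_fixed} CO2, "
      f"costing {atp_used} ATP and {nadph_used} NADPH")
```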
Carbon concentrating mechanisms
On land
In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and a decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions.
Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over sixty plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to C4 and a useful carbon-concentrating mechanism in its own right.
Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which spatially separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants.
Calcium-oxalate-accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g., water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of oxalate oxidase reaction, can be neutralized by catalase. Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways. However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere.
In water
Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved hydrocarbonate ions (HCO3−). Before the CO2 can diffuse out, RuBisCO concentrated within the carboxysome quickly sponges it up. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell, from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around RuBisCO.
Order and kinetics
The overall process of photosynthesis takes place in four stages:
Energy transfer in antenna chlorophyll, in the thylakoid membranes (femtosecond to picosecond timescale)
Transfer of electrons in photochemical reactions, in the thylakoid membranes (picosecond to nanosecond)
Electron transport chain and ATP synthesis, in the thylakoid membranes (microsecond to millisecond)
Carbon fixation and export of stable products, in the stroma (millisecond to second)
Efficiency
Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%.
Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) reemitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers.
Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature, and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices.
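These figures can be put in context with a simple quantum-requirement estimate. The sketch below assumes roughly 470 kJ/mol of energy stored per CO2 fixed, a photon energy of about 176 kJ/mol at 680 nm, and 8–10 absorbed photons per CO2; these are textbook approximations rather than measured values for any particular plant:

```python
# Rough upper bound on the efficiency of converting absorbed red light
# into carbohydrate, from the photosynthetic quantum requirement.
PLANCK = 6.626e-34          # J*s
LIGHT_SPEED = 2.998e8       # m/s
AVOGADRO = 6.022e23         # 1/mol

wavelength_m = 680e-9                       # red light absorbed by chlorophyll a
photon_energy_j_per_mol = PLANCK * LIGHT_SPEED / wavelength_m * AVOGADRO

energy_stored_j_per_mol_co2 = 470e3         # ~2800 kJ/mol glucose / 6 carbons

for photons_per_co2 in (8, 10):
    efficiency = energy_stored_j_per_mol_co2 / (photons_per_co2 * photon_energy_j_per_mol)
    print(f"{photons_per_co2} photons per CO2 -> "
          f"{efficiency:.0%} of absorbed red light stored")
# Prints roughly 33% and 27%; reflection, non-absorbed wavelengths,
# photorespiration and respiration reduce the whole-plant figure to the
# few percent quoted above.
```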
Scientists are studying photosynthesis in hopes of developing plants with increased yield.
The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the light reaction creates ATP and NADPH energy molecules, which C3 plants can use for carbon fixation or photorespiration. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions.
Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. An integrated chlorophyll fluorometer and gas exchange system can investigate both light and dark reactions when researchers use the two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2 and of ΔH2O using reliable methods. CO2 is commonly measured in μmol/(m2·s), parts per million, or volume per million; and H2O is commonly measured in mmol/(m2·s) or in mbar. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and "Ci" or intracellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations.
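As an illustration of how these gas-exchange quantities are derived, the simplified sketch below computes assimilation, transpiration, stomatal conductance and intercellular CO2 from open-chamber measurements. It ignores the boundary-layer and ternary corrections used by commercial systems, and all input values are hypothetical:

```python
# Simplified open-system gas-exchange calculation (hypothetical numbers).
flow = 5e-4          # mol air / s through the leaf chamber
leaf_area = 0.0006   # m^2 (6 cm^2)

co2_in, co2_out = 400e-6, 380e-6     # mol CO2 / mol air (ppm * 1e-6)
h2o_in, h2o_out = 0.010, 0.015       # mol H2O / mol air
h2o_leaf = 0.030                     # saturation mole fraction at leaf temperature

# Net CO2 assimilation rate, A (mol m^-2 s^-1)
A = flow * (co2_in - co2_out) / leaf_area

# Transpiration rate, E (mol m^-2 s^-1)
E = flow * (h2o_out - h2o_in) / leaf_area

# Stomatal conductance to water vapour, gs, then to CO2 (divide by ~1.6)
gs_h2o = E / (h2o_leaf - h2o_out)
gs_co2 = gs_h2o / 1.6

# Intercellular CO2, Ci (mol/mol), from Fick's law across the stomata
Ci = co2_out - A / gs_co2

print(f"A  = {A * 1e6:.1f} umol m-2 s-1")
print(f"E  = {E * 1e3:.2f} mmol m-2 s-1")
print(f"gs = {gs_h2o:.3f} mol m-2 s-1 (to H2O)")
print(f"Ci = {Ci * 1e6:.0f} ppm")
```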
Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response.
Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC, the estimation of CO2 concentration at the site of carboxylation in the chloroplast, to replace Ci. CO2 concentration in the chloroplast becomes possible to estimate with the measurement of mesophyll conductance or gm using an integrated system.
Photosynthesis measurement systems are not designed to directly measure the amount of light the leaf absorbs, but analysis of chlorophyll fluorescence, P700- and P515-absorbance, and gas exchange measurements reveal detailed information about, e.g., the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even wavelength dependency of the photosynthetic efficiency can be analyzed.
A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure called a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time.
Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks.
Evolution
Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago, though the first direct evidence of photosynthesis comes from thylakoid membranes preserved in 1.75-billion-year-old cherts.
Oxygenic photosynthesis is the main source of oxygen in the Earth's atmosphere, and its earliest appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around two billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center.
Symbiosis and the origin of chloroplasts
Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges, and sea anemones. Scientists presume that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins they need to survive.
An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles.
Photosynthetic eukaryotic lineages
Symbiotic and kleptoplastic organisms excluded:
The glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular)
The cryptophytes—clade Cryptista (unicellular)
The haptophytes—clade Haptista (unicellular)
The dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular)
The ochrophytes—clade Stramenopila (uni- and multicellular)
The chlorarachniophytes and three species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular)
The euglenids—clade Excavata (unicellular)
Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lines by four. A nucleomorph, remnants of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga).
Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae.
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees.
Photosynthetic prokaryotic lineages
Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, using molecules other than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing.
With the possible exception of Heimdallarchaeota, photosynthesis is not found in archaea. Haloarchaea are phototrophic and can absorb energy from the sun, but do not harvest carbon from the atmosphere and are therefore not photosynthetic. Instead of chlorophyll they use rhodopsins, which convert light energy into ion gradients but cannot mediate electron transfer reactions.
In bacteria eight photosynthetic lineages are currently known:
Cyanobacteria, the only prokaryotes performing oxygenic photosynthesis and the only prokaryotes that contain two types of photosystems (type I (RCI), also known as Fe-S type, and type II (RCII), also known as quinone type). The remaining seven lineages perform anoxygenic photosynthesis and use versions of either type I or type II.
Chlorobi (green sulfur bacteria) Type I
Heliobacteria Type I
Chloracidobacterium Type I
Proteobacteria (purple sulfur bacteria and purple non-sulfur bacteria) Type II (see: Purple bacteria)
Chloroflexota (green non-sulfur bacteria) Type II
Gemmatimonadota Type II
Eremiobacterota Type II
Cyanobacteria and the evolution of photosynthesis
The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae). The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae.
Experimental history
Discovery
Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century.
Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil a plant was using and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, this pointed to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, well before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that a plant could restore the air the candle and the mouse had "injured."
In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours.
In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which organisms use photosynthesis to produce food (such as glucose) was outlined.
Refinements
Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide.
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With red light alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing up to 600 nm wavelengths, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a", PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red algae and blue-green algae (cyanobacteria), respectively, and fucoxanthin for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry.
Robert Hill thought that a complex of reactions formed an intermediate leading to cytochrome b6 (now known to be a plastoquinone), and that another led from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of artificial electron acceptors such as iron oxalate, ferricyanide, or benzoquinone after exposure to light. In the Hill reaction:
2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2
A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water.
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) Cycle.
Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain.
Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by respiration.
In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler, using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of P32.
Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" absorbs light of one wavelength and oxidizes cytochrome f, while chlorophyll "a" (and other pigments) absorbs light of another wavelength and reduces this same oxidized cytochrome, showing that the two light reactions operate in series.
Development of the concept
In 1893, the American botanist Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while another word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
C3 : C4 photosynthesis research
In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. The pathway of CO2 fixation by the alga Chlorella in a fraction of a second in light resulted in a three-carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber, where the net photosynthetic rates ranged from 10 to 13 μmol CO2·m−2·s−1, leading to the conclusion that all terrestrial plants have the same photosynthetic capacity and are light-saturated at less than 50% of full sunlight.
Later, in 1958–1963 at Cornell University, field-grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and not to be saturated at near full sunlight. This higher rate in maize was almost double those observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass, and in the dicot amaranthus, leaf photosynthetic rates were around 38–40 μmol CO2·m−2·s−1, and the leaves have two types of green cells: an outer layer of mesophyll cells surrounding tightly packed, chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying the leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, a very low CO2 compensation point, a high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates that never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986. These species were later termed C4 plants, as the first stable compounds of CO2 fixation in light are the four-carbon acids malate and aspartate. Species that lack Kranz anatomy, such as cotton and sunflower, were termed C3 plants, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in the measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates of around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants.
Factors
There are four main factors influencing photosynthesis and several corollary factors. The four main factors are:
Light irradiance and wavelength
Water absorption
Carbon dioxide concentration
Temperature
Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Light intensity (irradiance), wavelength and temperature
The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life.
The radiation climate within plant communities is extremely variable, in both time and space.
In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation.
At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance.
At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased.
These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are the light-dependent 'photochemical' temperature-independent stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, cyanobacteria have a light-harvesting complex called a phycobilisome, made up of a series of proteins with different pigments which surround the reaction center.
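Blackman's observations are often summarized as a law of limiting factors: the overall rate is set by whichever stage is slower. The short sketch below is purely illustrative; the numerical values, the linear light term, and the Q10-style temperature term are hypothetical assumptions, not quantities taken from the experiments described above.

def assimilation_rate(irradiance, temperature,
                      light_efficiency=0.02, p_max_20c=10.0, q10=2.0):
    """Illustrative limiting-factor model (all parameter values hypothetical).

    The rate is set by whichever stage is slower:
      - the photochemical stage, roughly proportional to irradiance and
        approximately independent of temperature, and
      - the light-independent stage, whose ceiling rises with temperature.
    """
    photochemical = light_efficiency * irradiance                    # light-limited rate
    biochemical = p_max_20c * q10 ** ((temperature - 20.0) / 10.0)   # temperature-limited ceiling
    return min(photochemical, biochemical)

# At low irradiance temperature barely matters; at high irradiance the
# temperature-dependent ceiling sets the plateau.
for temp in (10, 20, 30):
    print(temp, [round(assimilation_rate(i, temp), 1) for i in (100, 500, 2000)])

Running the loop shows the two regimes described above: at an irradiance of 100 the rate is the same at all three temperatures, while at 2000 the rate is set entirely by the temperature-dependent ceiling.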
Carbon dioxide levels and photorespiration
As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars.
RuBisCO oxygenase activity is disadvantageous to plants for several reasons:
One product of oxygenase activity is phosphoglycolate (2 carbon) instead of 3-phosphoglycerate (3 carbon). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle.
Phosphoglycolate is quickly metabolized to glycolate that is toxic to a plant at a high concentration; it inhibits photosynthesis.
Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen.
A highly simplified summary is:
2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3
The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
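The carbon cost quoted above (75% returned) follows directly from the simplified summary: two glycolate molecules carry four carbon atoms, three of which return to the Calvin-Benson cycle as one 3-phosphoglycerate while one is released as carbon dioxide. A minimal sketch of that bookkeeping:

# Carbon bookkeeping for the simplified photorespiration summary:
# 2 glycolate + ATP -> 3-phosphoglycerate + CO2 + ADP + NH3
carbons_in = 2 * 2        # two glycolate molecules, two carbons each
carbons_returned = 3      # one 3-phosphoglycerate molecule
carbons_lost = carbons_in - carbons_returned   # released as CO2

print(f"returned: {carbons_returned}/{carbons_in} "
      f"({100 * carbons_returned / carbons_in:.0f}%), lost as CO2: {carbons_lost}")
# -> returned: 3/4 (75%), lost as CO2: 1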
See also
Jan Anderson (scientist)
Artificial photosynthesis
Calvin-Benson cycle
Carbon fixation
Cellular respiration
Chemosynthesis
Daily light integral
Hill reaction
Integrated fluorometer
Light-dependent reaction
Organic reaction
Photobiology
Photoinhibition
Photosynthetic reaction center
Photosynthetically active radiation
Photosystem
Photosystem I
Photosystem II
Quantasome
Quantum biology
Radiosynthesis
Red edge
Vitamin D
References
Further reading
Books
Papers
External links
A collection of photosynthesis pages for all levels from a renowned expert (Govindjee)
In depth, advanced treatment of photosynthesis, also from Govindjee
Science Aid: Photosynthesis Article appropriate for high school science
Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry and Cell Biology
Overall examination of Photosynthesis at an intermediate level
Overall Energetics of Photosynthesis
The source of oxygen produced by photosynthesis Interactive animation, a textbook tutorial
Photosynthesis – Light Dependent & Light Independent Stages
Khan Academy, video introduction
Agronomy
Biological processes
Botany
Cellular respiration
Ecosystems
Metabolism
Plant nutrition
Plant physiology
Quantum biology | 0.763955 | 0.999631 | 0.763673 |
Corrective and preventive action (CAPA or simply corrective action) consists of improvements to an organization's processes taken to eliminate causes of non-conformities or other undesirable situations. It is usually a set of actions, required by laws, regulations, or the organization itself, taken in manufacturing, documentation, procedures, or systems to rectify and eliminate recurring non-conformance. Non-conformance is identified after systematic evaluation and analysis of the root cause of the non-conformance. Non-conformance may be a market complaint or customer complaint, a failure of machinery or of the quality management system, or a misinterpretation of written instructions to carry out work. The corrective and preventive action is designed by a team that includes quality assurance personnel and personnel involved at the actual point where the non-conformance was observed. It must be systematically implemented and monitored for its ability to eliminate further recurrence of such non-conformance. The Eight disciplines problem solving method, or 8D framework, can be used as an effective method of structuring a CAPA.
Corrective action: Action taken to eliminate the causes of non-conformities or other undesirable situations, so as to prevent recurrence.
Preventive action: Action taken to prevent the occurrence of such non-conformities, generally as a result of a risk analysis.
In certain markets and industries, CAPA may be required as part of the quality management system, such as the Medical Devices and Pharmaceutical industries in the United States. In this case, failure to adhere to proper CAPA handling is considered a violation of US Federal regulations on good manufacturing practices. As a consequence, a medicine or medical device can be termed as adulterated or substandard if the company has failed to investigate, record and analyze the root cause of a non-conformance, and failed to design and implement an effective CAPA.
CAPA is used to bring about improvements to an organization's processes, and is often undertaken to eliminate causes of non-conformities or other undesirable situations. CAPA is a concept within good manufacturing practice (GMP), Hazard Analysis and Critical Control Points/Hazard Analysis and Risk-based Preventive Controls (HACCP/HARPC) and numerous ISO business standards. It focuses on the systematic investigation of the root causes of identified problems or identified risks in an attempt to prevent their recurrence (for corrective action) or to prevent occurrence (for preventive action).
Corrective actions are implemented in response to customer complaints, unacceptable levels of product non-conformance, issues identified during an internal audit, as well as adverse or unstable trends in product and process monitoring such as would be identified by statistical process control (SPC). Preventive actions are implemented in response to the identification of potential sources of non-conformity.
To ensure that corrective and preventive actions are effective, the systematic investigation of the root causes of failure is pivotal. CAPA is part of the overall quality management system (QMS).
Concepts
Clearly identified sources of data that identify problems to investigate
Root cause analysis that identifies the cause of a discrepancy or deviation, and suggests corrective actions
A common misconception is that the purpose of preventive action is to avert the occurrence of a similar potential problem. In fact, determining such similarities in the event of a discrepancy is part of the corrective action process.
Preventive action is any proactive method used to determine potential discrepancies before they occur and to ensure that they do not happen (thereby including, for example, preventive maintenance, management review or other common forms of risk avoidance). Corrective and preventive actions include stages for investigation, action, review, and further action if required. It can be seen that both fit into the PDCA (plan-do-check-act) philosophy as determined by the Deming-Shewhart cycle.
Investigations to root cause may conclude that no corrective or preventive actions are required, and additionally may suggest simple corrections to a problem with no identified systemic root cause. When multiple investigations end in no corrective action, a new problem statement with expanded scope may be generated, and a more thorough investigation to root cause performed.
Implementation of corrective and preventive actions is the path towards improvement and effectiveness of quality management systems. Corrective actions are simply actions based on problem identification. The problem or non-conformance can be identified internally through staff suggestions, management reviews, document reviews or internal audits. External leads to finding the root cause of the problem can include customer complaints and suggestions, customer rejections, non-conformities raised in customer or third-party audits, and recommendations by auditors.
Root cause analysis is the identification and investigation of the source of the problem, in which the person(s), system, process, or external factor responsible for the nonconformity is identified. The root cause analysis can be done via 5 Whys or other methods, e.g. an Ishikawa diagram.
Correction is the action to eliminate a detected nonconformity or nonconformance.
Preventive action includes the prediction of problems and attempts to avoid such occurrences (fail-safe) through self-initiated actions and analysis related to the processes or products. This can be initiated with the help of active participation by staff members and workers through improvement teams, improvement meetings, opportunities for improvement during internal audits, management review, customer feedback, and setting their own goals, quantified in terms of business growth, reducing rejections, utilizing equipment effectively, etc.
Medical devices and FDA compliance
To comply with the United States Food and Drug Administration's code FDA 21 CFR 820.100, medical device companies need to establish a CAPA process within their QMS. This part of the system may be paper or digital, but it is something that is looked for during an FDA visit. In 2015 there were over 450 issues found with the CAPA systems of medical device companies. An FDA-compliant QMS requires the ability to capture, review, approve, control, and retrieve closed-loop processes.
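As an illustration of what a closed-loop process can look like in practice, the sketch below models a minimal, hypothetical CAPA record that must be captured, investigated, approved, implemented, and verified for effectiveness before it can be closed, with an audit trail for later retrieval. The stage names, field names, and example data are invented for illustration; they do not represent any particular QMS software or the text of the regulation itself.

from dataclasses import dataclass, field

# Ordered lifecycle of a minimal, hypothetical CAPA record.
STAGES = ["captured", "investigated", "approved",
          "implemented", "effectiveness_verified", "closed"]

@dataclass
class CapaRecord:
    identifier: str
    nonconformance: str           # e.g. a customer complaint or audit finding
    root_cause: str = ""          # filled in during the investigation stage
    stage: str = "captured"
    history: list = field(default_factory=list)  # audit trail for retrieval

    def advance(self, note: str) -> None:
        """Move to the next stage, recording the reason in the audit trail."""
        index = STAGES.index(self.stage)
        if index == len(STAGES) - 1:
            raise ValueError("record already closed")
        self.history.append((self.stage, note))
        self.stage = STAGES[index + 1]

record = CapaRecord("CAPA-001", "customer complaint: mislabeled device lot")
record.advance("logged by quality assurance")
record.root_cause = "label template not updated after design change"
record.advance("root cause identified with 5 Whys")
print(record.stage, record.history)

The point of the sketch is the closed loop: a record cannot skip the investigation and effectiveness-verification stages, and every transition is retained so the history can be retrieved during an audit.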
A corrective action can also be a field correction, an action taken to correct problems with non-conforming products. An example is the pharmaceutical company Avanos Medical, which in 2022 conducted a voluntary field correction after reports of 60 injuries and 23 patient deaths related to misplaced nasogastric feeding tubes while using their CORTRAK* 2 Enteral Access System.
The voluntary field correction led Avanos Medical to recall the product. The FDA identified it as a Class I recall, the most severe type of recall.
Examples of corrective actions
Error Proofing
Visible or Audible Alarms
Process Redesign
Product Redesign
Define and Implement Action Plan
Training or enhancement or modification of existing training programs
Improvements to maintenance schedules
Improvements to material handling or storage
In some cases, a combination of such actions may be necessary to fully correct the problem.
See also
Eight disciplines problem solving
Good documentation practice
Good automated manufacturing practice (GAMP)
References
External links
Quality Systems Approach to Pharmaceutical CGMP Regulations (FDA)
ISO standards
Drug manufacturing
Quality management
Change management
Prevention | 0.769988 | 0.991782 | 0.76366 |
Adaptive behavior | Adaptive behavior is behavior that enables a person (usually used in the context of children) to cope in their environment with greatest success and least conflict with others. This is a term used in the areas of psychology and special education. Adaptive behavior relates to everyday skills or tasks that the "average" person is able to complete, similar to the term life skills.
Nonconstructive or disruptive social or personal behaviors can sometimes be used to achieve a constructive outcome. For example, a constant repetitive action could be re-focused on something that creates or builds something. In other words, the behavior can be adapted to something else.
In contrast, maladaptive behavior is a type of behavior that is often used to reduce one's anxiety, but the result is dysfunctional and non-productive coping. For example, avoiding situations because you have unrealistic fears may initially reduce your anxiety, but it is non-productive in alleviating the actual problem in the long term. Maladaptive behavior is frequently used as an indicator of abnormality or mental dysfunction, since its assessment is relatively free from subjectivity. However, many behaviors considered moral can be maladaptive, such as dissent or abstinence.
Adaptive behavior reflects an individual's social and practical competence to meet the demands of everyday living.
Behavioral patterns change throughout a person's development, life settings and social constructs, evolution of personal values, and the expectations of others. It is important to assess adaptive behavior in order to determine how well an individual functions in daily life: vocationally, socially and educationally.
Examples
A child born with cerebral palsy will most likely have a form of hemiparesis or hemiplegia (the weakening, or loss of use, of one side of the body). In order to adapt to one's environment, the child may use these limbs as helpers, in some cases even adapt the use of their mouth and teeth as a tool used for more than just eating or conversation.
Frustration from lack of the ability to verbalize one's own needs can lead to tantrums. In addition, it may lead to the use of signs or sign language to communicate needs.
Core problems
Limitations in self-care skills and social relationships, as well as behavioral excesses, are common characteristics of individuals with mental disabilities. Individuals with mental disabilities—who require extensive supports—are often taught basic self-care skills such as dressing, eating, and hygiene. Direct instruction and environmental supports, such as added prompts and simplified routines, are necessary to ensure that deficits in these adaptive areas do not limit one's quality of life.
Most children with milder forms of mental disabilities learn how to take care of their basic needs, but they often require training in self-management skills to achieve the levels of performance necessary for eventual independent living. Making and sustaining personal relationships present significant challenges for many persons with mental disabilities. Limited cognitive processing skills, poor language development, and unusual or inappropriate behaviors can seriously impede interactions with others. Teaching students with mental disabilities appropriate social and interpersonal skills is an important function of special education. Students with mental disabilities often exhibit more behavior problems than students who do not have similar disabilities. Behaviors observed in students with mental disabilities include difficulty accepting criticism, limited self-control, and inappropriate behaviors. Generally, the greater the severity of the mental disability, the higher the incidence of behavioral problems.
Problems with assessing long-term and short-term adaptation
One problem with assessments of adaptive behavior is that a behavior that appears adaptive in the short run can be maladaptive in the long run and vice versa. For example, in the case of a group with rules that insist on drinking harmful amounts of alcohol both abstinence and moderate drinking (moderate as defined by actual health effects, not by socially constructed rules) may seem maladaptive if assessments are strictly short term, but an assessment that focuses on long-term survival would instead find that it was adaptive and that it was obedience under the drinking rule that was maladaptive. Such differences between short term effects and long-term effects in the context of harmful consequences of short-term compliance with destructive rules are argued by some researchers to show that assessments of adaptive behavior are not as unproblematic as is often assumed by psychiatry.
Adaptive behaviors in education
In education, adaptive behavior is defined as that which (1) meets the needs of the community of stakeholders (parents, teachers, peers, and later employers) and (2) meets the needs of the learner, now and in the future. Specifically, these behaviors include such things as effective speech, self-help, using money, cooking, and reading, for example.
Training in adaptive behavior is a key component of any educational program, but is critically important for children with special needs. The US Department of Education has allocated billions of dollars ($12.3 billion in 2008) for special education programs aimed at improving educational and early intervention outcomes for children with disabilities.
In 2001, the United States National Research Council published a comprehensive review of interventions for children and adults diagnosed with autism. The review indicates that interventions based on applied behavior analysis have been effective with these groups.
Adaptive behavior includes socially responsible and independent performance of daily activities. However, the specific activities and skills needed may differ from setting to setting. When a student is going to school, school and academic skills are adaptive. However, some of those same skills might be useless or maladaptive in a job setting, so the transition between school and job needs careful attention.
Specific skills
Adaptive behavior includes the age-appropriate behaviors necessary for people to live independently and to function safely and appropriately in daily life. Adaptive behaviors include life skills such as grooming, dressing, safety, food handling, working, money management, cleaning, making friends, social skills, and the personal responsibility expected of their age, social group and wealth group. Specifically relevant are community access skills and peer access and retention skills, and behaviors which act as barriers to such access. These are itemised below.
Community access skills
Bus riding
Independent walking
Coin summation
Ordering food in a restaurant
Vending machine use
Eating in public places
Pedestrian safety
Peer access and retention
Clothing selection skills
Appropriate mealtime behaviors
Toy play skills and playful activities
Oral hygiene and tooth brushing
Soccer play
Adaptive behaviors are considered to change due to a person's culture and surroundings. Professors have to delve into a student's technical and comprehension skills to measure how adaptive their behavior is.
Barriers to access to peers and communities
Diurnal bruxism
Controlling rumination and vomiting
Pica
Adaptive skills
Every human being must learn a set of skills that is beneficial for the environments and communities they live in. Adaptive skills are stepping stones toward accessing and benefiting from local or remote communities. This means that, in urban environments, to go to the movies, a child will have to learn to navigate through the town or take the bus, read the movie schedule, and pay for the movie. Adaptive skills allow for safer exploration because they provide the learner with an increased awareness of their surroundings and of changes in context, that require new adaptive responses to meet the demands and dangers of that new context. Adaptive skills may generate more opportunities to engage in meaningful social interactions and acceptance. Adaptive skills are socially acceptable and desirable at any age and regardless of gender (with the exception of sex specific biological differences such as menstrual care skills).
Learning adaptive skills
Adaptive skills encompass a range of daily situations, and teaching them usually starts with a task analysis. The task analysis reveals all the steps necessary to perform the task in the natural environment. The use of behavior analytic procedures has been documented with children, adolescents, and adults, under the guidance of behavior analysts and supervised behavioral technicians. The list of applications has a broad scope and continues to expand as more research is carried out in applied behavior analysis (see Journal of Applied Behavior Analysis, The Analysis of Verbal Behavior).
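As a simple illustration of what a task analysis produces, the sketch below breaks one daily-living skill into ordered steps and tracks which steps the learner performs independently across sessions. The skill chosen, the step wording, and the scoring are hypothetical and are not drawn from any particular published protocol.

# Hypothetical task analysis for one adaptive skill (tooth brushing),
# with per-step scoring of independent performance across teaching sessions.
TOOTH_BRUSHING_STEPS = [
    "pick up toothbrush",
    "wet the bristles",
    "apply toothpaste",
    "brush all tooth surfaces",
    "rinse mouth",
    "put toothbrush away",
]

def percent_independent(session_scores):
    """Share of steps completed without prompting in one session."""
    return 100 * sum(session_scores) / len(session_scores)

# 1 = performed independently, 0 = required a prompt.
sessions = [
    [1, 0, 0, 0, 1, 1],   # baseline
    [1, 1, 0, 1, 1, 1],   # after several teaching trials
]
for number, scores in enumerate(sessions, start=1):
    prompted = [step for step, done in zip(TOOTH_BRUSHING_STEPS, scores) if not done]
    print(f"session {number}: {percent_independent(scores):.0f}% independent;"
          f" still prompted: {prompted}")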
Practopoietic theory
According to practopoietic theory, creation of adaptive behavior involves special, poietic interactions among different levels of system organization. These interactions are described on the basis of cybernetic theory, in particular the good regulator theorem. In practopoietic systems, lower levels of organization determine the properties of higher levels of organization, but not the other way around. This ensures that lower levels of organization (e.g., genes) always possess cybernetically more general knowledge than the higher levels of organization; knowledge at a higher level is a special case of the knowledge at the lower level. At the highest level of organization lies the overt behavior. Cognitive operations lie in the middle of that hierarchy, above genes and below behavior. For behavior to be adaptive, at least three adaptive traverses are needed.
See also
Adaptive Behavior – journal
Character
Evolutionary mismatch
Vineland Social Maturity Scale
References
External links
BACB (Behavior Analyst Certification Board)
Human behavior
Behavioral concepts
Developmental psychology
Evolutionary psychology | 0.774698 | 0.985742 | 0.763652 |
Psychology of religion | Psychology of religion consists of the application of psychological methods and interpretive frameworks to the diverse contents of religious traditions as well as to both religious and irreligious individuals. The various methods and frameworks can be summarized according to the classic distinction between the natural-scientific and human-scientific approaches. The first cluster amounts to objective, quantitative, and preferably experimental procedures for testing hypotheses about causal connections among the objects of one's study. In contrast, the human-scientific approach accesses the human world of experience using qualitative, phenomenological, and interpretive methods. This approach aims to discern meaningful, rather than causal, connections among the phenomena one seeks to understand.
Psychologists of religion pursue three major projects:
systematic description, especially of religious contents, attitudes, experiences, and expressions
explanation of the origins of religion, both in the history of the human race and in individual lives, taking into account a diversity of influences
mapping out the consequences of religious attitudes and conduct, both for the individual and for society at large.
The psychology of religion first arose as a self-conscious discipline in the late 19th century, but all three of these tasks have a history going back many centuries before that.
Overview
The challenge for the psychology of religion is essentially threefold:
to provide a thoroughgoing description of the objects of investigation, whether they be shared religious content (e.g., a tradition's ritual observances) or individual experiences, attitudes, or conduct;
to account in psychological terms for the rise of such phenomena, whether they be in individual lives or not;
to clarify the outcomes (the fruits, as William James put it) of these phenomena, for individuals and for the larger society. These fruits may be both positive and negative.
The first, descriptive task naturally requires a clarification of one's terms, above all the word religion. Historians of religion have long underscored the problematic character of this term. They note that its usage over the centuries has changed in significant ways, generally in the direction of reification. The early psychologists of religion were fully aware of these difficulties, typically acknowledging that the definitions they chose were to some degree arbitrary. With the rise of positivistic trends in psychology over the 20th century, especially the demand that all phenomena be operationalized by quantitative procedures, psychologists of religion developed a multitude of scales, most of them developed for use by Protestant Christians (Hill, P. C., and Hood, R. W., Jr., Eds., 1999, Measures of Religiosity, Birmingham, AL: Religious Education Press). Factor analysis was also brought into play by both psychologists and sociologists of religion, to establish a fixed core of dimensions and a corresponding set of scales. The justification and adequacy of these efforts, especially in the light of constructivist and other postmodern viewpoints, remains a matter of debate.
In the last several decades, especially among clinical psychologists, a preference for the terms "spirituality" and "spiritual" has emerged, along with efforts to distinguish them from "religion" and "religious." Especially in the United States, "religion" has for many become associated with sectarian institutions and their obligatory creeds and rituals, thus giving the word a negative cast; "spirituality," in contrast, is positively constructed as deeply individual and subjective, as a universal capacity to apprehend and accord one's life with higher realities. In fact, "spirituality" has likewise undergone an evolution in the West, from a time when it was essentially a synonym for religion in its original, subjective meaning.
Today, efforts are ongoing to "operationalize" these terms, with little regard for their history in their Western context, and with the apparent realist assumption that underlying them are fixed qualities identifiable using empirical procedures.
Schnitker and Emmons theorized that the understanding of religion as a search for meaning has implications in the three psychological areas of motivation, cognition, and social relationships. The cognitive aspects relate to God and a sense of purpose, the motivational ones to the need to control, and the religious search for meaning is also woven into social communities.
History
Edwin Diller Starbuck
Edwin Diller Starbuck is considered a pioneer of the psychology of religion and his book Psychology of Religion (1899) has been described as the first book in the genre. This book had the endorsement of William James who wrote a preface to it. Starbuck's work would influence James' own book The Varieties of Religious Experience, with James thanking him in the preface for having "made over to me his large collection of manuscript material". In the book itself James mentions Starbuck's name 46 times and cites him on dozens of occasions.
William James
American psychologist and philosopher William James (1842–1910) is regarded by most psychologists of religion as the founder of the field. He served as president of the American Psychological Association, and wrote one of the first psychology textbooks. In the psychology of religion, James' influence endures. His Varieties of Religious Experience is considered to be the classic work in the field, and references to James' ideas are common at professional conferences.
James distinguished between institutional religion and personal religion. Institutional religion refers to the religious group or organization and plays an important part in a society's culture. Personal religion, in which the individual has mystical experience, can be experienced regardless of the culture. James was most interested in understanding personal religious experience.
In studying personal religious experiences, James made a distinction between healthy-minded and sick-souled religiousness. Individuals predisposed to healthy-mindedness tend to ignore the evil in the world and focus on the positive and the good. James used examples of Walt Whitman and the "mind-cure" religious movement to illustrate healthy-mindedness in The Varieties of Religious Experience. In contrast, individuals predisposed to having a sick-souled religion are unable to ignore evil and suffering and need a unifying experience, religious or otherwise, to reconcile good and evil. James included quotations from Leo Tolstoy and John Bunyan to illustrate the sick soul.
William James' hypothesis of pragmatism stems from the efficacy of religion. If an individual believes in and performs religious activities, and those actions happen to work, then that practice appears to be the proper choice for the individual. However, if the processes of religion have little efficacy, then there is no rational basis for continuing the practice.
Other early theorists
G.W.F. Hegel
Georg Wilhelm Friedrich Hegel (1770–1831) described all systems of religion, philosophy, and social science as expressions of the basic urge of consciousness to learn about itself and its surroundings, and record its findings and hypotheses. Thus, religion is only a form of that search for knowledge, within which humans record various experiences and reflections. Others, compiling and categorizing these writings in various ways, form the consolidated worldview as articulated by that religion, philosophy, social science, etc. His work The Phenomenology of Spirit was a study of how various types of writing and thinking draw from and re-combine with the individual and group experiences of various places and times, influencing the current forms of knowledge and worldviews that are operative in a population. This activity is the functioning of an incomplete group mind, where each is accessing the recorded wisdom of others. His works often include detailed descriptions of the psychological motivations involved in thought and behavior, e.g., the struggle of a community or nation to know itself and thus correctly govern itself. In Hegel's system, Religion is one of the major repositories of wisdom to be used in these struggles, representing a huge body of recollections from humanity's past in various stages of its development.
Sigmund Freud
Sigmund Freud (1856–1939) gave explanations of the genesis of religion in his various writings. In Totem and Taboo, he applied the idea of the Oedipus complex (involving unresolved sexual feelings of, for example, a son toward his mother and hostility toward his father) and postulated its emergence in the primordial stage of human development.
In Moses and Monotheism, Freud reconstructed biblical history by his general theory. His ideas were also developed in The Future of an Illusion. When Freud spoke of religion as an illusion, he maintained that it "is a fantasy structure from which a man must be set free if he is to grow to maturity."
Freud views the idea of God as being a version of the father image, and religious belief as at bottom infantile and neurotic. Authoritarian religion, Freud believed, is dysfunctional and alienates man from himself.
Carl Jung
The Swiss psychoanalyst Carl Jung (1875–1961) adopted a very different posture, one that was more sympathetic to religion and more concerned with a positive appreciation of religious symbolism. Jung considered the question of the metaphysical existence of God to be unanswerable by the psychologist and adopted a kind of agnosticism.
Jung postulated, in addition to the personal unconscious (roughly adopting Freud's concept), the collective unconscious, which is the repository of human experience and which contains "archetypes" (i.e. basic images that are universal in that they recur regardless of culture). The irruption of these images from the unconscious into the realm of consciousness he viewed as the basis of religious experience and often of artistic creativity. Some of Jung's writings have been devoted to elucidating some of the archetypal symbols, and include his work in comparative mythology.
Alfred Adler
Austrian psychiatrist Alfred Adler (1870–1937), who parted ways with Freud, emphasized the role of goals and motivation in his Individual Psychology. One of Adler's most famous ideas is that we try to compensate for inferiorities that we perceive in ourselves. A lack of power often lies at the root of feelings of inferiority. One way that religion enters into this picture is through our beliefs in God, which are characteristic of our tendency to strive for perfection and superiority. For example, in many religions, God is considered to be perfect and omnipotent, and commands people likewise to be perfect. If we, too, achieve perfection, we become one with God. By identifying with God in this way, we compensate for our imperfections and feelings of inferiority.
Our ideas about God are important indicators of how we view the world. According to Adler, these ideas have changed over time, as our vision of the world – and our place in it – has changed. Consider this example that Adler offers: the traditional belief that people were placed deliberately on earth as God's ultimate creation is being replaced with the idea that people have evolved by natural selection. This coincides with a view of God not as a real being, but as an abstract representation of nature's forces. In this way, our view of God has changed from one that was concrete and specific to one that is more general. From Adler's vantage point, this is a relatively ineffective perception of God because it is so general that it fails to convey a strong sense of direction and purpose.
An important thing for Adler is that God (or the idea of God) motivates people to act and that those actions do have real consequences for ourselves and others. Our view of God is important because it embodies our goals and directs our social interactions.
Compared to science, another social movement, religion is more efficient because it motivates people more effectively. According to Adler, only when science begins to capture the same religious fervor, and promotes the welfare of all segments of society, will the two be more equal in people's eyes.
Gordon Allport
In his 1950 book The Individual and His Religion, Gordon Allport (1897–1967) illustrates how people may use religion in different ways. He makes a distinction between mature religion and immature religion. Mature religious sentiment is how Allport characterized the person whose approach to religion is dynamic, open-minded, and able to maintain links between inconsistencies. In contrast, immature religion is self-serving and generally represents the negative stereotypes that people have about religion.
More recently, this distinction has been encapsulated in the terms "intrinsic religion", referring to a genuine, heartfelt devout faith, and "extrinsic religion", referring to a more utilitarian use of religion as a means to an end, such as church attendance to gain social status. These dimensions of religion were measured on the Religious Orientation Scale of Allport and Ross. A third form of religious orientation has been described by Daniel Batson. This refers to treatment of religion as an open-ended search.
More specifically, it has been seen by Batson as comprising a willingness to view religious doubts positively, acceptance that religious orientation can change, and existential complexity, the belief that one's religious beliefs should be shaped by the personal crises that one has experienced in one's life. Batson refers to extrinsic, intrinsic, and quest respectively as religion-as-means, religion-as-end, and religion-as-quest, and measures these constructs on the Religious Life Inventory.
Erik H. Erikson
Erik Erikson (1902–1994) is best known for his theory of psychological development, which has its roots in the psychoanalytic importance of identity in personality. His biographies of Gandhi and Martin Luther reveal Erikson's positive view of religion. He considered religions to be important influences in successful personality development because they are the primary way that cultures promote the virtues associated with each stage of life. Religious rituals facilitate this development. Erikson's theory has not benefited from systematic empirical study, but it remains an influential and well-regarded theory in the psychological study of religion.
Erich Fromm
The American scholar Erich Fromm (1900–1980) modified the Freudian theory and produced a more complex account of the functions of religion. In his book Psychoanalysis and Religion he responded to Freud's theories by explaining that part of the modification is viewing the Oedipus complex as based not so much on sexuality as on a "much more profound desire", namely, the childish desire to remain attached to protecting figures. The right religion, in Fromm's estimation, can, in principle, foster an individual's highest potentialities, but religion in practice tends to relapse into being neurotic.
According to Fromm, humans need a stable frame of reference. Religion fills this need. In effect, humans crave answers to questions that no other source of knowledge has an answer to, which only religion may seem to answer. However, a sense of free will must be given for religion to appear healthy. An authoritarian notion of religion appears detrimental.
Rudolf Otto
Rudolf Otto (1869–1937) was a German Protestant theologian and scholar of comparative religion. Otto's most famous work, The Idea of the Holy (published first in 1917 as Das Heilige), defines the concept of the holy as that which is numinous. Otto explained the numinous as a "non-rational, non-sensory experience or feeling whose primary and immediate object is outside the self." It is a mystery that is at once fascinating (fascinans) and terrifying (tremendum), a mystery that causes trembling and fascination, attempting to explain that inexpressible and perhaps supernatural emotional reaction of wonder drawing us to seemingly ordinary and/or religious experiences of grace. This sense of emotional wonder appears evident at the root of all religious experiences. Through this emotional wonder, we suspend our rational mind for non-rational possibilities. The Idea of the Holy also set out a paradigm for the study of religion that focuses on the need to realize the religious as a non-reducible, original category in its own right. This paradigm was under much attack between approximately 1950 and 1990 but has made a strong comeback since then.
Modern thinkers
Autobiographical accounts of 20th-century psychology of religion as a field have been supplied by numerous modern psychologists of religion, primarily based in Europe, but also by several US-based psychologists such as Ralph W. Hood and Donald Capps.
Allen Bergin
Allen Bergin is noted for his 1980 paper "Psychotherapy and Religious Values," which is known as a landmark in scholarly acceptance that religious values do, in practice, influence psychotherapy (Slife, B. D., & Whoolery, M., 2003, "Understanding disciplinary significance: The story of Allen Bergin's 1980 article on values," in R. Sternberg, Ed., The anatomy of impact: What has made the great works of psychology great?, Washington, D.C.: American Psychological Association). He received the Distinguished Professional Contributions to Knowledge award from the American Psychological Association in 1989 and was cited as challenging "psychological orthodoxy to emphasize the importance of values and religion in therapy."
Robert A. Emmons
Robert A. Emmons offered a theory of "spiritual strivings" in his 1999 book, The Psychology of Ultimate Concerns. With support from empirical studies, Emmons argued that spiritual strivings foster personality integration because they exist at a higher level of the personality.
Ralph W. Hood Jr.
Ralph W. Hood Jr. is a professor of psychology at the University of Tennessee at Chattanooga. He is a former editor of the Journal for the Scientific Study of Religion and a former co-editor of the Archive for the Psychology of Religion and The International Journal for the Psychology of Religion. He is Past President of Division 36 of the American Psychological Association and a recipient of its William James Award. He has published several hundred articles and book chapters on the psychology of religion and has authored, co-authored, or edited thirteen volumes, all dealing with the psychology of religion.
Kenneth Pargament
Kenneth Pargament is noted for his book Psychology of Religion and Coping (1997), as well as for a 2007 book on religion and psychotherapy, and a sustained research program on religious coping. He is professor of psychology at Bowling Green State University (Ohio, US), and has published more than 100 papers on the subject of religion and spirituality in psychology. Pargament led the design of a questionnaire called the "RCOPE" to measure religious coping strategies. Pargament has distinguished between three styles of coping with stress:
Collaborative, in which people co-operate with God to deal with stressful events;
Deferring, in which people leave everything to God; and
Self-directed, in which people do not rely on God and try exclusively to solve problems by their own efforts.
He also describes four major stances toward religion that have been adopted by psychotherapists in their work with clients, which he calls the religiously rejectionist, exclusivist, constructivist, and pluralist stances (Brian J. Zinnbauer & Kenneth I. Pargament, 2000, "Working with the sacred: Four approaches to religious and spiritual issues in counseling," Journal of Counseling & Development, v78 n2, pp. 162–171).
James Hillman
James Hillman, at the end of his book Re-Visioning Psychology, reverses James' position of viewing religion through psychology, urging instead that we view psychology as a variety of religious experience. He concludes: "Psychology as religion implies imagining all psychological events as effects of Gods in the soul."
Julian Jaynes
Julian Jaynes, primarily in his book The Origin of Consciousness in the Breakdown of the Bicameral Mind, proposed that religion (and some other psychological phenomena such as hypnosis and schizophrenia) is a remnant of a relatively recent time in human development, prior to the advent of consciousness. Jaynes hypothesized that hallucinated verbal commands helped non-conscious early man to perform tasks promoting human survival. Starting about 10,000 BCE, selective pressures favored the hallucinated verbal commands for social control, and they came to be perceived as an external, rather than internal, voice commanding the person to take some action. These were hence often explained as originating from invisible gods, spirits, and ancestors.
Hypotheses on the role of religion
There are three primary hypotheses on the role of religion in the modern world.
Secularization
The first hypothesis, secularization, holds that science and technology will take the place of religion. Secularization supports the separation of religion from politics, ethics, and psychology. Taking this position even further, Taylor explains that secularization denies transcendence, divinity, and rationality in religious beliefs.
Religious transformation
Challenges to the secularization hypothesis led to significant revisions, resulting in the religious transformation hypothesis. This perspective holds that general trends towards individualism and social disintegration will produce changes in religion, making religious practice more individualized and spiritually focused. This in turn is expected to produce more spiritual seeking, although not exclusively within religious institutions. Eclecticism, which draws from multiple religious and spiritual systems, and New Age movements are also predicted to result.
Cultural divide
In response to the religious transformation hypothesis, Ronald Inglehart piloted the renewal of the secularization hypothesis. His argument hinges on the premise that religion develops to fill the human need for security. Therefore, the development of social and economic security in Europe explains its corresponding secularization due to a lack of need for religion. However, religion continues in the third world where social and economic insecurity is rampant. The overall effect is expected to be a growing cultural disparity.
The idea that religiosity arises from the human need for security has also been furthered by studies examining religious beliefs as a compensatory mechanism of control. These studies are motivated by the idea that people are invested in maintaining beliefs in order and structure to prevent beliefs in chaos and randomness.
In the experimental setting, researchers have also tested compensatory control in regard to individuals' perceptions of external systems, such as religion or government. For example, Kay and colleagues found that in a laboratory setting, individuals are more likely to endorse broad external systems (e.g., religion or sociopolitical systems) that impose order and control on their lives when they are induced with lowered levels of personal control. In this study, researchers suggest that when a person's personal control is lessened, their motivation to believe in order is threatened, resulting in compensation of this threat through adherence to other external sources of control.
Psychometric approaches to religion
Since the 1960s psychologists of religion have used the methodology of psychometrics to assess ways in which a person may be religious. An example is the Religious Orientation Scale of Allport and Ross, which measures how respondents stand on intrinsic and extrinsic religion as described by Allport.
More recent questionnaires include the Age-Universal I-E Scale of Gorsuch and Venable, the Religious Life Inventory of Batson, Schoenrade and Ventis, and the Spiritual Experiences Index-Revised of Genia. The first provides an age-independent measure of Allport and Ross's two religious orientations. The second measures three forms of religious orientation: religion as means (intrinsic), religion as end (extrinsic), and religion as quest. The third assesses spiritual maturity using two factors: Spiritual Support and Spiritual Openness.
Religious orientations and religious dimensions
Some questionnaires, such as the Religious Orientation Scale, relate to different religious orientations, such as intrinsic and extrinsic religiousness, referring to different motivations for religious allegiance. A rather different approach, taken, for example, by Glock and Stark (1965), has been to list different dimensions of religion rather than different religious orientations, which relates to how an individual may manifest different forms of being religious. Glock and Stark's typology described five dimensions of religion – the doctrinal, the intellectual, the ethical-consequential, the ritual, and the experiential. In later works, these authors subdivided the ritual dimension into devotional and public ritual, and also clarified that their distinction of religion along multiple dimensions was not identical to distinguishing religious orientations. Although some psychologists of religion have found it helpful to take a multidimensional approach to religion for the purpose of psychometric scale design, there has been, as Wulff explains, considerable controversy about whether religion should really be seen as multidimensional.
Questionnaires to assess religious experience
What we call religious experiences can differ greatly. Some reports exist of supernatural happenings that it would be difficult to explain from a rational, scientific point of view. On the other hand, there also exist the sort of testimonies that simply seem to convey a feeling of peace or oneness – something which most of us, religious or not, may possibly relate to. In categorizing religious experiences it is perhaps helpful to look at them as explicable through one of two theories: the objectivist thesis or the subjectivist thesis.
An objectivist would argue that the religious experience is a proof of God's existence. However, others have criticised the reliability of religious experiences. The English philosopher Thomas Hobbes asked how it was possible to tell the difference between talking to God in a dream, and dreaming about talking to God.
The Subjectivist view argues that it is not necessary to think of religious experiences as evidence for the existence of an actual being whom we call God. From this point of view, the important thing is the experience itself and the effect that it has on the individual.
Developmental approaches to religion
Many have looked at stage models, like those of Jean Piaget and Lawrence Kohlberg, to explain how children develop ideas about God and religion in general.
James Fowler's model
The best-known stage model of spiritual or religious development is that of James W. Fowler, a developmental psychologist at the Candler School of Theology, in his Stages of Faith. He follows Piaget and Kohlberg and has proposed a holistic staged development of faith (or spiritual development) across the lifespan. These stages of faith development were along the lines of Piaget's theory of cognitive development and Kohlberg's stages of moral development.
The book-length study contains six stages of faith development proposed by James Fowler:
Stage 0 – "Primal or Undifferentiated" faith (birth to two years), is characterized by an early learning of the safety of their environment (i.e. warm, safe and secure vs. hurt, neglect and abuse). If consistent nurture is experienced, one will develop a sense of trust and safety about the universe and the divine. Conversely, negative experiences will cause one to develop distrust about the universe and the divine. Transition to the next stage begins with integration of thought and language which facilitates the use of symbols in speech and play.
Stage 1 – "Intuitive-Projective" faith (ages of three to seven), is characterized by the psyche's unprotected exposure to the Unconscious, and marked by a relative fluidity of thought patterns. Religion is learned mainly through experiences, stories, images, and the people that one comes in contact with.
Stage 2 – "Mythic-Literal" faith (mostly in school children), is characterized by persons having a strong belief in the justice and reciprocity of the universe, and their deities are almost always anthropomorphic. During this time metaphors and symbolic language are often misunderstood and taken literally.
Stage 3 – "Synthetic-Conventional" faith (arising in adolescence; aged 12 to adulthood), is characterized by conformity to authority and the religious development of a personal identity. Any conflicts with one's beliefs are ignored at this stage due to the fear of threat from inconsistencies.
Stage 4 – "Individuative-Reflective" faith (usually mid-twenties to late thirties), is a stage of angst and struggle. The individual takes personal responsibility for his or her beliefs and feelings. As one is able to reflect on one's own beliefs, there is an openness to a new complexity of faith, but this also increases the awareness of conflicts in one's belief.
Stage 5 – "Conjunctive" faith (mid-life crisis), acknowledges paradox and transcendence relating reality behind the symbols of inherited systems. The individual resolves conflicts from previous stages by a complex understanding of a multidimensional, interdependent "truth" that cannot be explained by any particular statement.
Stage 6 – "Universalizing" faith: The individual treats any person with compassion, viewing all people as part of a universal community who should be treated according to universal principles of love and justice.
Fowler's model has inspired a considerable body of empirical research into faith development, although little of such research was conducted by Fowler himself. Gary Leak's Faith Development Scale (FDS) has been subjected to factor analysis by Leak.
Other hypotheses
Other theorists in developmental psychology have suggested that religiosity comes naturally to young children. Specifically, children may have a natural-born conception of mind-body dualism, which lends itself to beliefs that the mind may live on after the body dies. In addition, children have a tendency to see agency and human design where there is none, and prefer a creationist explanation of the world even when raised by parents who do not hold one.
Researchers have also investigated attachment system dynamics as a predictor of the religious conversion experience throughout childhood and adolescence. One hypothesis is the correspondence hypothesis, which posits that individuals with secure parental attachment are more likely to experience a gradual conversion experience. Under the correspondence hypothesis, internal working models of a person's attachment figure are thought to perpetuate his or her perception of God as a secure base. Another hypothesis relating attachment style to the conversion experience is the compensation hypothesis, which states that individuals with insecure attachments are more likely to have a sudden conversion experience as they compensate for their insecure attachment relationship by seeking a relationship with God.
Researchers have tested these hypotheses using longitudinal studies and individuals' self narratives of their conversion experience. For example, one study investigating attachment styles and adolescent conversions at Young Life religious summer camps resulted in evidence supporting the correspondence hypothesis through analysis of personal narratives and a prospective longitudinal follow-up of Young Life campers, with mixed results for the compensation hypothesis.
James Alcock summarizes a number of components of what he calls the "God engine," a "number of automatic processes and cognitive biases [that] combine to make supernatural belief the automatic default." These include magical thinking, agency detection, theory of mind that leads to dualism, the notion that "objects and events [serve] an intentional purpose," etc.
Evolutionary and cognitive psychology of religion
Evolutionary psychology is based on the hypothesis that, just like the cardiac, pulmonary, urinary, and immune systems, cognition has a functional structure with a genetic basis, and therefore appeared through natural selection. Like other organs and tissues, this functional structure should be universally shared among humans and should solve important problems of survival and reproduction. Evolutionary psychologists seek to understand cognitive processes by understanding the survival and reproductive functions they might serve.
Pascal Boyer is one of the leading figures in the cognitive psychology of religion, a new field of inquiry that is less than fifteen years old, which accounts for the psychological processes that underlie religious thought and practice. In his book Religion Explained, Boyer shows that there is no simple explanation for religious consciousness. Boyer is mainly concerned with explaining the various psychological processes involved in the acquisition and transmission of ideas concerning the gods. Boyer builds on the ideas of cognitive anthropologists Dan Sperber and Scott Atran, who first argued that religious cognition represents a by-product of various evolutionary adaptations, including folk psychology, and purposeful violations of innate expectations about how the world is constructed (for example, bodiless beings with thoughts and emotions) that make religious cognitions striking and memorable.
Religious persons acquire religious ideas and practices through social exposure. The child of a Zen Buddhist will not become an evangelical Christian or a Zulu warrior without the relevant cultural experience. While mere exposure does not cause a particular religious outlook (a person may have been raised a Roman Catholic but leave the church), nevertheless some exposure seems required – this person will never invent Roman Catholicism out of thin air. Boyer says cognitive science can help us to understand the psychological mechanisms that account for these manifest correlations and in so doing enable us to better understand the nature of religious belief and practice.
Boyer moves outside the leading currents in mainstream cognitive psychology and suggests that we can use evolutionary biology to unravel the relevant mental architecture. Our brains are, after all, biological objects, and the best naturalistic account of their development in nature is Darwin's theory of evolution. To the extent that mental architecture exhibits intricate processes and structures, it is plausible to think that this is the result of evolutionary processes working over vast periods of time. Like all biological systems, the mind is optimised to promote survival and reproduction in the evolutionary environment. On this view all specialised cognitive functions broadly serve those reproductive ends.
For Steven Pinker the universal propensity toward religious belief is a genuine scientific puzzle. He thinks that adaptationist explanations for religion do not meet the criteria for adaptations. An alternative explanation is that religious psychology is a by-product of many parts of the mind that originally evolved for other purposes.
Religion and prayer
Religious practice often manifests itself in some form of prayer. Recent studies have focused specifically on the effects of prayer on health. Measures of prayer and the above measures of spirituality evaluate different characteristics and should not be considered synonymous.
Prayer is fairly prevalent in the United States. About 55% of Americans report praying daily. However, the practice of prayer is more prevalent and practiced more consistently among Americans who perform other religious practices. There are four primary types of prayer in the West. Poloma and Pendleton utilized factor analysis to delineate these four types of prayer: meditative (more spiritual, silent thinking), ritualistic (reciting), petitionary (making requests to God), and colloquial (general conversing with God). Further scientific study of prayer using factor analysis has revealed three dimensions of prayer. Ladd and Spilka's first factor was awareness of self, inward reaching. Their second and third factors were upward reaching (toward God) and outward reaching (toward others). This study appears to support the contemporary model of prayer as connection (whether to the self, higher being, or others).
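The factor-analytic approach used in such studies can be illustrated with a brief sketch. The following is a minimal, hypothetical example, assuming the Python factor_analyzer package and a placeholder file of Likert-type prayer items (prayer_items.csv, columns q1–q12); it is not Poloma and Pendleton's or Ladd and Spilka's actual data or analysis, only an indication of how dimensions of this kind are typically extracted.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical Likert-type responses to prayer items (placeholder file and columns).
responses = pd.read_csv("prayer_items.csv")

# Extract three factors, echoing the inward / upward / outward structure
# reported by Ladd and Spilka; varimax rotation aids interpretability.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(responses)

# Items are then grouped by their strongest loading to name the factors.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))
print(fa.get_factor_variance())  # variance explained per factor
```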
Dein and Littlewood (2008) suggest that an individual's prayer life can be viewed on a spectrum ranging from immature to mature. A progression on the scale is characterized by a change in the perspective of the purpose of prayer. Rather than using prayer as a means of changing the reality of a situation, a more mature individual will use prayer to request assistance in coping with immutable problems and draw closer to God or others. This change in perspective has been shown to be associated with an individual's passage through adolescence.
Prayer appears to have health implications. Empirical studies suggest that mindfully reading and reciting the Psalms (from scripture) can help a person calm down and focus. Prayer is also positively correlated with happiness and religious satisfaction. A study conducted by Francis, Robbins, Lewis, and Barnes investigated the relationship between self-reported prayer frequency and measures of psychoticism and neuroticism according to the abbreviated form of the Revised Eysenck Personality Questionnaire (EPQR-A). The study included a sample size of 2306 students attending Protestant and Catholic schools in the highly religious culture of Northern Ireland. The data shows a negative correlation between prayer frequency and psychoticism. The data also shows that, in Catholic students, frequent prayer has a positive correlation to neuroticism scores. Ladd and McIntosh suggest that prayer-related behaviors, such as bowing the head and clasping the hands together in an almost fetal position, are suggestive of "social touch" actions. Prayer in this manner may prepare an individual to carry out positive pro-social behavior after praying, due to factors such as increased blood flow to the head and nasal breathing. Overall, slight health benefits have been found fairly consistently across studies.
Three main pathways to explain this trend have been offered: placebo effect, focus and attitude adjustment, and activation of healing processes. These offerings have been expanded by Breslin and Lewis (2008) who have constructed a five pathway model between prayer and health with the following mediators: physiological, psychological, placebo, social support, and spiritual. The spiritual mediator is a departure from the rest in that its potential for empirical investigation is not currently feasible. Although the conceptualizations of chi, the universal mind, divine intervention, and the like breach the boundaries of scientific observation, they are included in this model as possible links between prayer and health so as to not unnecessarily exclude the supernatural from the broader conversation of psychology and religion.
Religion and ritual
Another significant form of religious practice is ritual. Religious rituals encompass a wide array of practices, but can be defined as the performance of similar actions and vocal expressions based on prescribed tradition and cultural norms.
Scheff suggests that ritual provides catharsis, emotional purging, through distancing. This emotional distancing enables an individual to experience feelings with an amount of separation, and thus with less intensity. However, the conception of religious ritual as an interactive process has since matured and become more scientifically established. From this view, ritual offers a means to catharsis through behaviors that foster connection with others, allowing for emotional expression. This focus on connection contrasts to the separation that seems to underlie Scheff's view.
Additional research suggests a social component of ritual. For instance, findings suggest that ritual performance indicates group commitment and prevents the uncommitted from gaining membership benefits. Ritual may aid in emphasizing moral values that serve as group norms and regulate societies. It may also strengthen commitment to moral convictions and the likelihood of upholding these social expectations. Thus, performance of rituals may foster social-group stability.
Robert Sapolsky sees a similarity between the rituals accompanying obsessive–compulsive disorder and religious rituals. According to him, religious ritual reduces the tension and anxiety associated with the disorder and provides relief resulting from practicing in a social community.
Religion and personal functioning
Religion and health
There is considerable literature on the relationship between religion and health. More than 3000 empirical studies have examined relationships between religion and health, including more than 1200 in the 20th century, and more than 2000 additional studies between 2000 and 2009.
Psychologists consider that religion may benefit both physical and mental health in various ways, including encouraging healthy lifestyles, providing social support networks and encouraging an optimistic outlook on life; prayer and meditation may also benefit physiological functioning. Nevertheless, religion is not a unique source of health and well-being, and there are benefits to nonreligiosity as well. Haber, Jacob and Spangler have considered how different dimensions of religiosity may relate to health benefits in different ways.
Religion and personality
Some studies have examined whether there is a "religious personality." Research on the five factor model of personality suggests that people who identify as religious are more likely to be agreeable and conscientious. Similarly, people who identify as spiritual are more likely to be extroverted and open, although this varies based on the type of spirituality endorsed. For example, people endorsing fundamentalist religious beliefs are more likely to measure low on the Openness factor.
Religion and prejudice
To investigate the salience of religious beliefs in establishing group identity, researchers have also conducted studies looking at religion and prejudice. Some studies have shown that greater religious attitudes may be significant predictors of negative attitudes towards racial or social outgroups. These effects are often conceptualized under the framework of intergroup bias, where religious individuals favor members of their ingroup (ingroup favoritism) and exhibit disfavor towards members of their outgroup (outgroup derogation). Evidence supporting religious intergroup bias has been supported in multiple religious groups, including non-Christian groups, and is thought to reflect the role of group dynamics in religious identification. Many studies regarding religion and prejudice implement religious priming both in the laboratory and in naturalistic settings with evidence supporting the perpetuation of ingroup favoritism and outgroup derogation in individuals who are high in religiosity.
Recently, reparative or conversion therapy, a religiously motivated process intended to change an individual's sexuality, has been the subject of scrutiny and has been condemned by some governments, LGBT charities, and therapy/counselling professional bodies.
Religion and drugs
The American psychologist James H. Leuba (1868–1946), in A Psychological Study of Religion, accounts for mystical experience psychologically and physiologically, pointing to analogies with certain drug-induced experiences. Leuba argued forcibly for a naturalistic treatment of religion, which he considered to be necessary if religious psychology were to be looked at scientifically. Shamans all over the world and in different cultures have traditionally used drugs, especially psychedelics, for their religious experiences. In these communities the absorption of drugs leads to dreams (visions) through sensory distortion. The psychedelic experience is often compared to non-ordinary forms of consciousness such as those experienced in meditation, and mystical experiences. Ego dissolution is often described as a key feature of the psychedelic experience.
William James was also interested in mystical experiences from a drug-induced perspective, leading him to make some experiments with nitrous oxide and even peyote. He concludes that while the revelations of the mystic hold true, they hold true only for the mystic; for others they are certainly ideas to be considered, but hold no claim to truth without personal experience of such.
Religion and mental illness
Although many researchers have brought evidence for a positive role that religion plays in health, others have shown that religious beliefs, practices, and experiences may be linked to mental illnesses of various kinds (mood disorders, personality disorders, and psychiatric disorders). In 2012 a team of psychiatrists, behavioral psychologists, neurologists, and neuropsychiatrists from the Harvard Medical School published research which suggested the development of a new diagnostic category of psychiatric disorders related to religious delusion and hyperreligiosity.
They compared the thoughts and behaviors of the most important figures in the Bible (Abraham, Moses, Jesus Christ, and Paul) with patients affected by mental disorders related to the psychotic spectrum using different clusters of disorders and diagnostic criteria (DSM-IV-TR), and concluded that these Biblical figures "may have had psychotic symptoms that contributed inspiration for their revelations", such as schizophrenia, schizoaffective disorder, manic depression, delusional disorder, delusions of grandeur, auditory-visual hallucinations, paranoia, Geschwind syndrome (Paul especially), and abnormal experiences associated with temporal lobe epilepsy (TLE). The authors suggest that Jesus sought to condemn himself to death ("suicide by proxy").
The research went further and also focused on social models of psychopathology, analyzing new religious movements and charismatic cult leaders such as David Koresh, leader of the Branch Davidians, and Marshall Applewhite, founder of the Heaven's Gate cult. The researchers concluded that "If David Koresh and Marshall Applewhite are appreciated as having psychotic-spectrum beliefs, then the premise becomes untenable that the diagnosis of psychosis must rigidly rely upon an inability to maintain a social group. A subset of individuals with psychotic symptoms appears able to form intense social bonds and communities despite having an extremely distorted view of reality. The existence of a better socially functioning subset of individuals with psychotic-type symptoms is corroborated by research indicating that psychotic-like experiences, including both bizarre and non-bizarre delusion-like beliefs, are frequently found in the general population. This supports the idea that psychotic symptoms likely lie on a continuum."
Religion and psychotherapy
Clients' religious beliefs are increasingly being considered in psychotherapy with the goal of improving service and effectiveness of treatment. A resulting development was theistic psychotherapy. Conceptually, it consists of theological principles, a theistic view of personality, and a theistic view of psychotherapy. Following an explicit minimizing strategy, therapists attempt to minimize conflict by acknowledging their religious views while being respectful of clients' religious views. This is argued to increase the potential for therapists to directly utilize religious practices and principles in therapy, such as prayer, forgiveness, and grace. In contrast to such an approach, psychoanalyst Robin S. Brown argues for the extent to which our spiritual commitments remain unconscious. Drawing from the work of Jung, Brown suggests that "our biases can only be suspended in the extent to which they are no longer our biases".
Pastoral psychology
One application of the psychology of religion is in pastoral psychology, the use of psychological findings to improve the pastoral care provided by pastors and other clergy, especially in how they support ordinary members of their congregations. Pastoral psychology is also concerned with improving the practice of chaplains in healthcare and in the military. One major concern of pastoral psychology is to improve the practice of pastoral counseling. Pastoral psychology is a topic of interest for professional journals such as the Journal of Psychology and Christianity and the Journal of Psychology and Theology. In 1984, Thomas Oden severely criticized mid-20th-century pastoral care and the pastoral psychology that guided it as having entirely abandoned its classical/traditional sources, and having become overwhelmingly dominated by modern psychological influences from Freud, Rogers, and others. More recently, others have described pastoral psychology as a field that experiences a tension between psychology and theology.
See also
References
Works cited
Further reading
External links
Division 36: Society for the Psychology of Religion and Spirituality on the American Psychological Association's official website
Society for the Psychology of Religion and Spirituality, official website
International Association for the Scientific Study of Religion
International Association for the Psychology of Religion
Centre for Psychology of Religion, Institute IPSY, Université catholique de Louvain (Belgium)
Psychology of religion, Department of Historical, Philosophical and Religious studies, Umeå University (Sweden)
Misplaced Faith?: A theory of supernatural belief as misattribution with Luke Galen
Religiosity and Emotion
Psychology of religion pages
Psychology of Religious Doubt
Psychology of religion in Germany
Religion and mental health
Quaternary sector of the economy
The quaternary sector of the economy is based upon the economic activity that is associated with either the intellectual or knowledge-based economy. This consists of information technology; media; research and development; information-based services such as information-generation and information-sharing; and knowledge-based services such as consultation, entertainment, broadcasting, mass media, telecommunication, education, financial planning, blogging, and designing.
Other definitions describe the quaternary sector as pure services. These may consist of the entertainment industry, covering media and culture, and government; such activities are sometimes classified instead into an additional quinary sector.
The term reflects the analysis of the three-sector model of the economy, in which the primary sector produces raw materials used by the secondary sector to produce goods, which are then distributed to consumers by the tertiary sector.
Contrary to this implied sequence, however, the quaternary sector does not process the output of the tertiary sector. It has only limited and indirect connections to the industrial economy characterized by the three-sector model.
In a modern economy, the generation, analysis and dissemination of information is important enough to warrant a separate sector instead of being a part of the tertiary sector. This sector evolves in well-developed countries where the primary and secondary sectors are a minority of the economy, and requires a highly educated workforce.
For example, the tertiary and quaternary sectors form the largest part of the UK economy, employing 76% of the workforce.
See also
Indigo Era
References
Empowerment
Empowerment is the degree of autonomy and self-determination in people and in communities. This enables them to represent their interests in a responsible and self-determined way, acting on their own authority. It is the process of becoming stronger and more confident, especially in controlling one's life and claiming one's rights. Empowerment as action refers both to the process of self-empowerment and to professional support of people, which enables them to overcome their sense of powerlessness and lack of influence, and to recognize and use their resources.
As a term, empowerment originates from American community psychology and is associated with the social scientist Julian Rappaport (1981).
In social work, empowerment forms a practical approach of resource-oriented intervention. In the field of citizenship education and democratic education, empowerment is seen as a tool to increase the responsibility of the citizen. Empowerment is a key concept in the discourse on promoting civic engagement. Empowerment as a concept, which is characterized by a move away from a deficit-oriented towards a more strength-oriented perception, can increasingly be found in management concepts, as well as in the areas of continuing education and self-help.
Definitions
Robert Adams points to the limitations of any single definition of 'empowerment', and the danger that academic or specialist definitions might take away the word and the connected practices from the very people they are supposed to belong to. Still, he offers a minimal definition of the term:
'Empowerment: the capacity of individuals, groups and/or communities to take control of their circumstances, exercise power and achieve their own goals, and the process by which, individually and collectively, they are able to help themselves and others to maximize the quality of their lives.'
One definition for the term is "an intentional, ongoing process centered in the local community, involving mutual respect, critical reflection, caring, and group participation, through which people lacking an equal share of resources gain greater access to and control over those resources".
Rappaport's (1984) definition includes: "Empowerment is viewed as a process: the mechanism by which people, organizations, and communities gain mastery over their lives."
Sociological empowerment often addresses members of groups that social discrimination processes have excluded from decision-making processes through – for example – discrimination based on disability, race, ethnicity, religion, or gender. Empowerment as a methodology is also associated with feminism.
Process
Empowerment is the process of obtaining basic opportunities for marginalized people, either directly by those people, or through the help of non-marginalized others who share their own access to these opportunities. It also includes actively thwarting attempts to deny those opportunities. Empowerment also includes encouraging, and developing the skills for, self-sufficiency, with a focus on eliminating the future need for charity or welfare in the individuals of the group. This process can be difficult to start and to implement effectively.
Strategy
One empowerment strategy is to assist marginalized people to create their own nonprofit organization, using the rationale that only the marginalized people themselves can know what their own people need most, and that control of the organization by outsiders can actually help to further entrench marginalization. Charitable organizations led from outside the community, for example, can disempower the community by entrenching dependence on charity or welfare. A nonprofit organization can target strategies that cause structural changes, reducing the need for ongoing dependence. The Red Cross, for example, can focus on improving the health of indigenous people, but does not have authority in its charter to install water-delivery and purification systems, even though the lack of such a system profoundly, directly and negatively impacts health. A nonprofit composed of the indigenous people, however, could ensure their own organization does have such authority and could set their own agendas, make their own plans, seek the needed resources, do as much of the work as they can, and take responsibility – and credit – for the success of their projects (or the consequences, should they fail).
Empowerment is also the process that enables individuals and groups to fully access personal or collective power, authority and influence, and to employ that strength when engaging with other people, institutions or society. In other words, "Empowerment is not giving people power, people already have plenty of power, in the wealth of their knowledge and motivation, to do their jobs magnificently. We define empowerment as letting this power out." It encourages people to gain the skills and knowledge that will allow them to overcome obstacles in life or the work environment and, ultimately, to develop within themselves or in society.
To empower a female "...sounds as though we are dismissing or ignoring males, but the truth is, both genders desperately need to be equally empowered." Empowerment occurs through improvement of conditions, standards, events, and a global perspective of life.
Criticism
Before there can be a finding that a particular group requires empowerment, and that therefore its self-esteem needs to be consolidated on the basis of awareness of its strengths, a deficit diagnosis is needed, usually carried out by experts assessing the problems of the group. The fundamental asymmetry of the relationship between experts and clients is usually not questioned by empowerment processes. It also needs to be considered critically how far the empowerment approach is really applicable to all patients or clients. It is particularly questionable whether mentally ill people in acute crisis situations are in a position to make their own decisions. According to Albert Lenz, people behave primarily regressively in acute crisis situations and tend to leave responsibility to professionals. It must be assumed, therefore, that the implementation of the empowerment concept requires a minimum level of communication and reflectivity of the persons involved.
Another criticism is that empowerment implies that the drive for change comes from an external person. For example, in healthcare, a patient being encouraged by their doctor to track their symptoms and adjust their medication accordingly would be empowerment, whereas a patient deciding on their own that they wanted to improve their medication regimen and thus started tracking would be an example of self-empowerment. A recently coined term, self-empowerment "describes patients’ and informal caregivers’ power to perform activities that are not mandated by health care and to take control over their own lives and self-management with increased self-efficacy and confidence".
In social work and community psychology
In social work, empowerment offers an approach that allows social workers to increase the capacity for self-help of their clients. For example, this allows clients to be seen not as passive, helpless 'victims' to be rescued but instead as self-empowered people fighting abuse or oppression; a fight in which the social worker takes the position of a facilitator, instead of the position of a 'rescuer'.
Marginalized people who lack self-sufficiency become, at a minimum, dependent on charity or welfare. They lose their self-confidence because they cannot be fully self-supporting. The opportunities denied to them also deprive them of the pride of accomplishment which others, who have those opportunities, can develop for themselves. This in turn can lead to psychological, social and even mental health problems. "Marginalized" here refers to the overt or covert trends within societies whereby those perceived as lacking desirable traits or deviating from the group norms tend to be excluded by wider society and ostracized as undesirables.
In health promotion practice and research
As a concept, and model of practice, empowerment is also used in health promotion research and practice. The key principle is for individuals to gain increased control over factors that influence their health status.
To empower individuals and to obtain more equity in health, it is also important to address health-related behaviors.
Studies suggest that health promotion interventions aiming at empowering adolescents should enable active learning activities, use visualizing tools to facilitate self-reflection, and allow the adolescents to influence intervention activities.
In economics
According to Robert Adams, there is a long tradition in the UK and the USA respectively to advance forms of self-help that have developed and contributed to more recent concepts of empowerment. For example, the free enterprise economic theories of Milton Friedman embraced self-help as a respectable contributor to the economy. Both the Republicans in the US and the Conservative government of Margaret Thatcher built on these theories. 'At the same time, the mutual aid aspects of the concept of self-help retained some currency with socialists and democrats.'
In economic development, the empowerment approach focuses on mobilizing the self-help efforts of the poor, rather than providing them with social welfare. Economic empowerment is also the empowering of previously disadvantaged sections of the population, for example, in many previously colonized African countries.
Consumer empowerment
A consumer empowerment strategy was put in place in the United Kingdom by the 2010-2015 coalition government. The strategy, produced by the Department for Business, Innovation and Skills and the Behavioural Insights Team at the UK Cabinet Office, sought to introduce voluntary measures and "nudges" which could help consumers "find and adopt the best choices for their circumstances and needs". Activities promoted by the strategy included the midata programme under the direction of Professor Nigel Shadbolt, annual credit card usage statements, collective purchasing schemes, and presentational work on Energy Performance Certificates, motor vehicle sales literature and food hygiene ratings, so that consumers can make better use of the information they contain.
Customer empowerment
Companies that empower their customers have the potential to create superior products at reduced costs and risks, provided that customers are willing and able to contribute valuable input in the new product development process. Businesses that involve and empower customers in the process of creating new products can sometimes have a competitive edge over traditional firms that do not give their customers such involvement. This advantage is evident in the fact that consumers generally prefer the former. When customers have the authority to choose which products are brought to market, they exhibit increased demand for the chosen products, even when they are objectively of the same quality. This apparently irrational phenomenon can be explained by the heightened sense of psychological ownership that consumers develop for the selected products. Two conditions limit this effect: (1) when the joint decision-making outcome does not align with consumers' preferences, and (2) when consumers lack confidence in their ability to make informed decisions.
Increasingly engaged corporate directors
The World Pensions Council (WPC) has argued that large institutional investors such as pension funds and endowments are exercising a greater influence on the process of adding and replacing corporate directors – as they are themselves steered to do so by their own board members (pension trustees).
This could eventually put more pressure on the CEOs of publicly listed companies, as “more than ever before, many [North American], UK and European Union pension trustees speak enthusiastically about flexing their fiduciary muscles for the UN’s Sustainable Development Goals”, and other ESG-centric investment practices.
Legal
Legal empowerment happens when marginalised people or groups use legal mobilisation, i.e. law, legal systems and justice mechanisms, to improve or transform their social, political or economic situations. Legal empowerment approaches are interested in understanding how the law can be used to advance the interests and priorities of the marginalised.
According to the Open Society Foundations (an NGO), "Legal empowerment is about strengthening the capacity of all people to exercise their rights, either as individuals or as members of a community. Legal empowerment is about grass roots justice, about ensuring that law is not confined to books or courtrooms, but rather is available and meaningful to ordinary people."
Lorenzo Cotula, in his book Legal Empowerment for Local Resource Control, outlines the fact that although legal tools for securing local resource rights are enshrined in the legal system, this does not necessarily mean that local resource users are in a position to use them and benefit from them. The state legal system is constrained by a range of different factors – from lack of resources to cultural issues. Among these factors, economic, geographic, linguistic and other constraints on access to courts, lack of legal awareness, and lack of legal assistance tend to be recurrent problems.
In many contexts, marginalised groups do not trust the legal system owing to the widespread manipulation it has historically been subjected to by the more powerful. The extent to which people know the law and can make it work for themselves with paralegal tools is a measure of legal empowerment. This is assisted by innovative approaches such as legal literacy and awareness training, broadcasting of legal information, participatory legal discourses, support for local resource users in negotiating with other agencies and stakeholders, and strategies that combine the use of legal processes with advocacy, media engagement, and socio-legal mobilisation.
Sometimes groups are marginalized by society at large, with governments participating in the process of marginalization. Equal opportunity laws which actively oppose such marginalization, are supposed to allow empowerment to occur. These laws made it illegal to restrict access to schools and public places based on race. They can also be seen as a symptom of minorities' and women's empowerment through lobbying.
Gender
Gender empowerment conventionally refers to the empowerment of women, which is a significant topic of discussion in regards to development and economics nowadays. It also points to approaches regarding other marginalized genders in a particular political or social context. This approach to empowerment is partly informed by feminism and employed legal empowerment by building on international human rights. Empowerment is one of the main procedural concerns when addressing human rights and development. The Human Development and Capabilities Approach, The Millennium Development Goals, and other credible approaches/goals point to empowerment and participation as a necessary step if a country is to overcome the obstacles associated with poverty and development. The UN Sustainable Development Goals (SDG 5) targets gender equality and women's empowerment for the global development agenda.
In workplace management
According to Thomas A. Potterfield, many organizational theorists and practitioners regard employee empowerment as one of the most important and popular management concepts of our time.
Ciulla discusses an inverse case: that of bogus empowerment.
In management
In the sphere of management and organizational theory, "empowerment" often refers loosely to processes for giving subordinates (or workers generally) greater discretion and resources: distributing control in order to better serve both customers and the interests of employing organizations. It also means giving employees the authority to take initiative, make their own decisions, and find and execute solutions.
In survey research using confirmatory factor analysis, empowerment can be captured through four dimensions, namely meaning, competence, self-determination, and impact, whereas some exploratory factor analyses identify only three dimensions, namely meaning, competence, and influence (a conflation of self-determination and impact).
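As a rough illustration of how such a measurement model might be specified, the sketch below assumes the Python semopy package, a placeholder survey file (empowerment_survey.csv), and hypothetical item names (m1–m3, c1–c3, s1–s3, i1–i3); it is not any published empowerment instrument, only an indication of how four first-order dimensions can be modelled under a single empowerment construct.

```python
import pandas as pd
import semopy

# Hypothetical survey responses: three placeholder items per dimension.
data = pd.read_csv("empowerment_survey.csv")  # columns m1..m3, c1..c3, s1..s3, i1..i3 (assumed)

# Four first-order factors loading on a second-order empowerment construct,
# mirroring the meaning / competence / self-determination / impact structure.
model_desc = """
meaning            =~ m1 + m2 + m3
competence         =~ c1 + c2 + c3
self_determination =~ s1 + s2 + s3
impact             =~ i1 + i2 + i3
empowerment        =~ meaning + competence + self_determination + impact
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # estimated loadings and other parameters
```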
One account of the history of workplace empowerment in the United States recalls the clash of management styles in railroad construction in the American West in the mid-19th century, where "traditional" hierarchical East-Coast models of control encountered individualistic pioneer workers, strongly supplemented by methods of efficiency-oriented "worker responsibility" brought to the scene by Chinese laborers. In this case, empowerment at the level of work teams or brigades achieved a notable (but short-lived) demonstrated superiority. See the views of Robert L. Webb.
Since the 1980s and 1990s, empowerment has become a point of interest in management concepts and business administration. In this context, empowerment involves approaches that promise greater participation and integration to the employee in order to enable them to cope with their tasks as independently and responsibly as possible. A strength-based approach known as the "empowerment circle" has become an instrument of organizational development. Multidisciplinary empowerment teams aim for the development of quality circles to improve the organizational culture, strengthening the motivation and the skills of employees. The target of subjective job satisfaction of employees is pursued through flat hierarchies, participation in decisions, opening of creative effort, a positive, appreciative team culture, self-evaluation, taking responsibility (for results), more self-determination and constant further learning. The optimal use of existing potential and abilities can supposedly be better reached by satisfied and active workers. Here, knowledge management contributes significantly to implementing employee participation as a guiding principle, for example through the creation of communities of practice.
However, it is important to ensure that the individual employee has the skills to meet their allocated responsibilities and that the company's structure sets up the right incentives to reward employees for taking on responsibility. Otherwise there is a danger of employees being overwhelmed or even becoming lethargic.
Implications for company culture
Empowerment of employees requires a culture of trust in the organization and an appropriate information and communication system. The aim of these activities is to save control costs, that become redundant when employees act independently and in a self-motivated fashion.
In the book Empowerment Takes More Than a Minute, the authors illustrate three keys that organizations can use to open the knowledge, experience, and motivation power that people already have. The three keys that managers must use to empower their employees are:
Share information with everyone
Create autonomy through boundaries
Replace the old hierarchy with self-directed work teams
According to Stewart, in order to guarantee a successful work environment, managers need to exercise the "right kind of authority" (p. 6). To summarize, "empowerment is simply the effective use of a manager’s authority", and subsequently, it is a productive way to maximize all-around work efficiency.
These keys are hard to put into place, and achieving empowerment in the workplace is a journey. It is important to train employees and make sure they have trust in what empowerment will bring to a company.
The implementation of the concept of empowerment in management has also been criticized for failing to live up to its claims.
In artificial intelligence
Empowerment in the study of artificial intelligence is an information-theoretic quantity that measures the perceived capacity of an agent to influence its environment. Empowerment is an approach to modelling intrinsic motivation in which advantageous actions are chosen by an agent using only knowledge of the structure of its environment, rather than to satisfy an externally imposed need, as in homeostasis.
Experiments have shown that artificial agents acting to maximise their empowerment, in the absence of a defined goal, exhibit advantageous exploratory behaviour that, in a range of simulated environments, resembles intelligent behaviour in living things.
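As a minimal sketch of the underlying quantity, the following example assumes a small, hypothetical deterministic gridworld; in deterministic environments, n-step empowerment (the channel capacity between action sequences and resulting states) reduces to the logarithm of the number of distinct states reachable within n steps, which keeps the computation simple.

```python
import itertools
import math

# Hypothetical gridworld layout; '#' cells are walls.
GRID = [
    "....",
    ".##.",
    "....",
]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1), "stay": (0, 0)}

def step(state, action):
    """Apply one action; bumping into a wall or the border leaves the state unchanged."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
        return (nr, nc)
    return state

def empowerment(state, n):
    """n-step empowerment in bits: log2 of the number of distinct end states
    over all action sequences of length n (valid for deterministic dynamics)."""
    end_states = set()
    for seq in itertools.product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        end_states.add(s)
    return math.log2(len(end_states))

if __name__ == "__main__":
    print(empowerment((0, 0), 2))  # corner cell
    print(empowerment((2, 2), 2))  # more open cell
```

In this toy setting, a corner cell yields lower empowerment than an open cell, mirroring the intuition that empowerment-maximising agents gravitate toward states from which many futures remain reachable.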
"Age of Popular Empowerment"
Marshall McLuhan insisted that the development of electronic media would eventually weaken the hierarchical structures that underpin central governments, large corporations, academia and, more generally, rigid, “linear-Cartesian” forms of social organization.
From that perspective, new, “electronic forms of awareness” driven by information technology would empower citizens, employees and students by disseminating in near-real-time vast amounts of information once reserved to a small number of experts and specialists. Citizens would be bound to ask for substantially more say in the management of government affairs, production, consumption, and education.
World Pensions Council (WPC) economist Nicolas Firzli has argued that rapidly rising cultural tides, notably new forms of online engagement and increased demands for ESG-driven public policies and managerial decisions, are transforming the way governments and corporations interact with citizen-consumers in the “Age of Empowerment”.
See also
References
Further reading
Adams, Robert. Empowerment, participation and social work. New York: Palgrave Macmillan, 2008.
Christens, Brian. Community Power and Empowerment. Oxford: Oxford University Press, 2019.
Humphries, Beth. Critical Perspectives on Empowerment. Birmingham: Venture, 1996.
Rappaport, Julian, Carolyn F. Swift, and Robert Hess. Studies in Empowerment: Steps toward Understanding and Action. New York: Haworth, 1984.
Schutz, Aaron. Empowerment: A Primer. New York: Routledge, 2019.
Thomas, K. W. and Velthouse, B. A. (1990) "Cognitive Elements of Empowerment: An 'Interpretive' Model of Intrinsic Task Motivation". Academy of Management Review, Vol 15, No. 4, 666–681.
Wilkinson, A. 1998. Empowerment: theory and practice. Personnel Review. [online]. Vol. 27, No. 1, 40–56. Accessed February 16, 2004.
Humanities
Humanities are academic disciplines that study aspects of human society and culture, including certain fundamental questions asked by humans. During the Renaissance, the term 'humanities' referred to the study of classical literature and language, as opposed to the study of religion or 'divinity.' The study of the humanities was a key part of the secular curriculum in universities at the time. Today, the humanities are more frequently defined as any fields of study outside of natural sciences, social sciences, formal sciences (like mathematics), and applied sciences (or professional training). They use methods that are primarily critical, speculative, or interpretative and have a significant historical element—as distinguished from the mainly empirical approaches of science.
The humanities include the studies of philosophy, religion, history, language arts (literature, writing, oratory, rhetoric, poetry, etc.), performing arts (theater, music, dance, etc.), and visual arts (painting, sculpture, photography, filmmaking, etc.).
Some definitions of the humanities encompass law and religion due to their shared characteristics, such as the study of language and culture. However, these definitions are not universally accepted, as law and religion are often considered professional subjects rather than humanities subjects. Professional subjects, like some social sciences, are sometimes classified as being part of both the liberal arts and professional development education, whereas humanities subjects are generally confined to the traditional liberal arts education. Although sociology, anthropology, archaeology, linguistics and psychology share some similarities with the humanities, these are often considered social sciences. Similarly, disciplines such as finance, business administration, political science, economics, and global studies have closer ties to the social sciences rather than the humanities.
Scholars in the humanities are called humanities scholars or sometimes humanists. The term humanist also describes the philosophical position of humanism, which antihumanist scholars in the humanities reject. Renaissance scholars and artists are also known as humanists. Some secondary schools offer humanities classes usually consisting of literature, history, foreign language, and art.
Human disciplines like history and language mainly use the comparative method and comparative research. Other methods used in the humanities include hermeneutics, source criticism, esthetic interpretation, and speculative reason.
Etymology
The word humanities comes from the Renaissance Latin phrase studia humanitatis, which translates to study of humanity. This phrase was used to refer to the study of classical literature and language, which was seen as an important aspect of a refined education in the Renaissance. In its usage in the early 15th century, the studia humanitatis was a course of studies that consisted of grammar, poetry, rhetoric, history, and moral philosophy, primarily derived from the study of Latin and Greek classics. The word humanitas also gave rise to the Renaissance Italian neologism umanisti, whence "humanist", "Renaissance humanism".
Fields
Classics
Classics, in the Western academic tradition, refers to the studies of the cultures of classical antiquity, namely Ancient Greek and Latin and the Ancient Greek and Roman cultures. Classical studies is considered one of the cornerstones of the humanities; however, its popularity declined during the 20th century. Nevertheless, the influence of classical ideas on many humanities disciplines, such as philosophy and literature, remains strong.
History
History is systematically collected information about the past. When used as the name of a field of study, history refers to the study and interpretation of the record of humans, societies, institutions, and any topic that has changed over time.
Traditionally, the study of history has been considered a part of the humanities. In modern academia, history can occasionally be classified as a social science, though this definition is contested.
Language
While the scientific study of language is known as linguistics and is generally considered a social science, a natural science or a cognitive science, the study of languages is also central to the humanities. A good deal of twentieth- and twenty-first-century philosophy has been devoted to the analysis of language and to the question of whether, as Wittgenstein claimed, many of our philosophical confusions derive from the vocabulary we use; literary theory has explored the rhetorical, associative, and ordering features of language; and historical linguists have studied the development of languages across time. Literature, covering a variety of uses of language including prose forms (such as the novel), poetry and drama, also lies at the heart of the modern humanities curriculum. College-level programs in a foreign language usually include study of important works of the literature in that language, as well as the language itself.
Law
In everyday language, law refers to a rule that is enforced by a governing institution, as opposed to a moral or ethical rule that is not subject to formal enforcement. The study of law can be seen as either a social science or a humanities discipline, depending on one's perspective. Some see it as a social science because of its objective and measurable nature, while others view it as a humanities discipline because of its focus on values and interpretation. Law is not always enforceable, especially in the international relations context. Law has been defined in various ways, such as "a system of rules", "an interpretive concept" for achieving justice, "an authority" to mediate between people's interests, or "the command of a sovereign" backed by the threat of punishment.
However one likes to think of law, it is a central social institution. Legal policy is shaped by the practical application of ideas from many social science and humanities disciplines, including philosophy, history, political science, economics, anthropology, and sociology. Law is politics, because laws are created by politicians. Law is philosophy, because moral and ethical persuasions shape legal ideas. Law tells many of history's stories, because statutes, case law and codifications build up over time. Law is also economics, because any rule about contract, tort, property law, labour law, company law and many more can have long-lasting effects on how productivity is organised and the distribution of wealth. The noun law derives from the Old English word lagu, meaning something laid down or fixed, and the adjective legal comes from the Latin word lex.
Literature
Literature is a term that does not have a universally accepted definition, but which has variably included all written work; writing that possesses literary merit; and language that emphasizes its own literary features, as opposed to ordinary language. Etymologically the term derives from the Latin word literatura/litteratura which means "writing formed with letters", although some definitions include spoken or sung texts. Literature can be classified as fiction or non-fiction; poetry or prose. It can be further distinguished according to major forms such as the novel, short story or drama; and works are often categorised according to historical periods, or according to their adherence to certain aesthetic features or expectations (genre).
Philosophy
Philosophy—etymologically, the "love of wisdom"—is generally the study of problems concerning matters such as existence, knowledge, justification, truth, justice, right and wrong, beauty, validity, mind, and language. Philosophy is distinguished from other ways of addressing these issues by its critical, generally systematic approach and its reliance on reasoned argument, rather than experiments (experimental philosophy being an exception).
Philosophy used to be a very comprehensive term, including what have subsequently become separate disciplines, such as physics. (As Immanuel Kant noted, "Ancient Greek philosophy was divided into three sciences: physics, ethics, and logic.") Today, the main fields of philosophy are logic, ethics, metaphysics, and epistemology. Still, it continues to overlap with other disciplines. The field of semantics, for example, brings philosophy into contact with linguistics.
Since the early twentieth century, philosophy in English-speaking universities has moved away from the humanities and closer to the formal sciences, becoming much more analytic. Analytic philosophy is marked by emphasis on the use of logic and formal methods of reasoning, conceptual analysis, and the use of symbolic and/or mathematical logic, as contrasted with the Continental style of philosophy. This method of inquiry is largely indebted to the work of philosophers such as Gottlob Frege, Bertrand Russell, G.E. Moore and Ludwig Wittgenstein.
Religion
Religious studies is commonly regarded as a social science. Based on current knowledge, all known cultures, past and present, have had some form of belief system or religious practice. While there may be isolated individuals or groups who do not practice any form of religion, it is not known whether there has ever been a society entirely devoid of religious belief. The definition of religion is not universal, and different cultures may have different ideas about what constitutes religion. Religion is often characterized by community, since humans are social animals, and rituals are used to bind the community together. Social animals require rules. Ethics is a requirement of society, but not a requirement of religion: Shinto, Daoism, and other folk or natural religions do not have ethical codes. While some religions include the concept of deities, others do not; the supernatural therefore does not necessarily require the existence of deities and can be broadly defined as any phenomenon that cannot be explained by science or reason. Magical thinking creates explanations that are not available for empirical verification. Stories or myths are narratives that are both didactic and entertaining, and they are necessary for understanding the human predicament. Other possible characteristics of religion include pollution and purification, the sacred and the profane, sacred texts, religious institutions and organizations, and sacrifice and prayer. Some of the major problems that religions confront, and attempt to answer, are chaos, suffering, evil, and death.
The non-founder religions are Hinduism, Shinto, and native or folk religions. Founder religions are Judaism, Christianity, Islam, Confucianism, Daoism, Mormonism, Jainism, Zoroastrianism, Buddhism, Sikhism, and the Baháʼí Faith. Religions must adapt and change through the generations in order to remain relevant to their adherents; when traditional religions fail to address new concerns, new religions emerge.
Performing arts
The performing arts differ from the visual arts in that the former uses the artist's own body, face, and presence as a medium, and the latter uses materials such as clay, metal, or paint, which can be molded or transformed to create some art object. Performing arts include acrobatics, busking, comedy, dance, film, magic, music, opera, juggling, marching arts, such as brass bands, and theatre.
Artists who participate in these arts in front of an audience are called performers, including actors, comedians, dancers, musicians, and singers. Performing arts are also supported by workers in related fields, such as songwriting and stagecraft. Performers often adapt their appearance with costumes, stage makeup, and the like. There is also a specialized form of fine art in which the artists perform their work live to an audience; this is called performance art. Most performance art also involves some form of plastic art, perhaps in the creation of props. Dance was often referred to as a plastic art during the Modern dance era.
Musicology
Musicology as an academic discipline can take a number of different paths, including historical musicology, music literature, ethnomusicology and music theory. Undergraduate music majors generally take courses in all of these areas, while graduate students focus on a particular path. In the liberal arts tradition, musicology is also used to broaden skills of non-musicians by teaching skills, including concentration and listening.
Theatre
Theatre (or theater) (Greek "theatron", θέατρον) is the branch of the performing arts concerned with acting out stories in front of an audience using combinations of speech, gesture, music, dance, sound and spectacle — indeed any one or more elements of the other performing arts. In addition to the standard narrative dialogue style, theatre takes such forms as opera, ballet, mime, kabuki, classical Indian dance, Chinese opera, mummers' plays, and pantomime.
Dance
Dance (from Old French dancier, perhaps from Frankish) generally refers to human movement either used as a form of expression or presented in a social, spiritual or performance setting. Dance is also used to describe methods of non-verbal communication (see body language) between humans or animals (bee dance, mating dance), and motion in inanimate objects (the leaves danced in the wind). Choreography is the process of creating dances, and the people who create choreography are known as choreographers. Choreographers use movement, music, and other elements to create expressive and artistic dances. They may work alone or with other artists to create new works, and their work can be presented in a variety of settings, from small dance studios to large theaters.
Definitions of what constitutes dance are dependent on social, cultural, aesthetic, artistic, and moral constraints and range from functional movement (such as Folk dance) to codified, virtuoso techniques such as ballet.
Visual art
History of visual arts
The great traditions in art have a foundation in the art of one of the ancient civilizations, such as Ancient Japan, Greece and Rome, China, India, Greater Nepal, Mesopotamia and Mesoamerica.
Ancient Greek art saw a veneration of the human physical form and the development of equivalent skills to show musculature, poise, beauty and anatomically correct proportions. Ancient Roman art depicted gods as idealized humans, shown with characteristic distinguishing features (e.g., Zeus' thunderbolt).
The emphasis on spiritual and religious themes in Byzantine and Gothic art of the Middle Ages reflected the dominance of the church. However, in the Renaissance, a renewed focus on the physical world was reflected in art forms that depicted the human body and landscape in a more naturalistic and three-dimensional way.
Eastern art has generally worked in a style akin to Western medieval art, namely a concentration on surface patterning and local colour (meaning the plain colour of an object, such as basic red for a red robe, rather than the modulations of that colour brought about by light, shade and reflection). A characteristic of this style is that the local colour is often defined by an outline (a contemporary equivalent is the cartoon). This is evident in, for example, the art of India, Tibet and Japan.
Religious Islamic art forbids iconography, and expresses religious ideas through geometry instead.
The physical and rational certainties depicted by the 19th-century Enlightenment were shattered not only by new discoveries of relativity by Einstein and of unseen psychology by Freud, but also by unprecedented technological development. Increasing global interaction during this time saw an equivalent influence of other cultures into Western art.
Media types
Drawing
Drawing is a means of making a picture, using a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface. Common tools are graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools that simulate the effects of these are also used. The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, scribbling, stippling, and blending. A computer aided designer who excels in technical drawing is referred to as a draftsman or draughtsman.
Painting
Literally, painting is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense, it means the use of this activity in combination with drawing, composition and other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting has been used throughout history to express spiritual and religious ideas, from mythological scenes on pottery to the frescoes of the Sistine Chapel, to body art.
Colour is highly subjective, but has observable psychological effects, although these can differ from one culture to the next. Black is associated with mourning in the West, but elsewhere white may be. Some painters, theoreticians, writers and scientists, including Goethe, Kandinsky, and Isaac Newton, have written their own colour theories. Moreover, language provides only a generalization for a colour equivalent: the word "red", for example, can cover a wide range of variations on the pure red of the spectrum. Unlike music, where notes such as C or C# are universally accepted, there is no formalized register of colours. However, the Pantone system is widely used in the printing and design industry to standardize colour reproduction.
Modern artists have extended the practice of painting considerably to include, for example, collage, which began with Cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as sand, cement, straw or wood for their texture; examples include the works of Jean Dubuffet and Anselm Kiefer. Modern and contemporary art has moved away from the historic value of craft in favour of concept (conceptual art); this has led some, such as Joseph Kosuth, to say that painting, as a serious art form, is dead, although this has not deterred the majority of artists from continuing to practise it, either as the whole or as part of their work.
Sculpture
Sculpture involves creating three-dimensional forms out of various materials. These typically include malleable substances like clay and metal but may also extend to material that is cut or shaved down to the desired form, like stone and wood.
History
In the West, the history of the humanities can be traced to ancient Greece, as the basis of a broad education for citizens. During Roman times, the concept of the seven liberal arts evolved, involving grammar, rhetoric and logic (the trivium), along with arithmetic, geometry, astronomy and music (the quadrivium). These subjects formed the bulk of medieval education, with the emphasis being on the humanities as skills or "ways of doing".
A major shift occurred with the Renaissance humanism of the fifteenth century, when the humanities began to be regarded as subjects to study rather than practice, with a corresponding shift away from traditional fields into areas such as literature and history (studia humaniora). In the 20th century, this view was in turn challenged by the postmodernist movement, which sought to redefine the humanities in more egalitarian terms suitable for a democratic society since the Greek and Roman societies in which the humanities originated were elitist and aristocratic.
A distinction is usually drawn between the social sciences and the humanities. Classicist Allan Bloom discusses this distinction in The Closing of the American Mind (1987).
Today
Education and employment
For many decades, there has been a growing public perception that a humanities education inadequately prepares graduates for employment. The common belief is that graduates from such programs face underemployment and incomes too low for a humanities education to be worth the investment.
Humanities graduates find employment in a wide variety of management and professional occupations. In Britain, for example, over 11,000 humanities majors found employment in the following occupations:
Education (25.8%)
Management (19.8%)
Media/Literature/Arts (11.4%)
Law (11.3%)
Finance (10.4%)
Civil service (5.8%)
Not-for-profit (5.2%)
Marketing (2.3%)
Medicine (1.7%)
Other (6.4%)
Many humanities graduates may find themselves with no specific career goals upon graduation, which can lead to lower incomes in the early stages of their career. On the other hand, graduates from more career-oriented programs often find jobs more quickly. However, the long-term career prospects of humanities graduates may be similar to those of other graduates, as research shows that by five years after graduation, they generally find a career path that appeals to them.
There is empirical evidence that graduates from humanities programs earn less than graduates from other university programs. However, the empirical evidence also shows that humanities graduates still earn notably higher incomes than workers with no postsecondary education, and have job satisfaction levels comparable to their peers from other fields. Humanities graduates also earn more as their careers progress; ten years after graduation, the income difference between humanities graduates and graduates from other university programs is no longer statistically significant. Humanities graduates can boost their incomes if they obtain advanced or professional degrees.
Humanities majors are sought after in many areas of business, specifically for their critical thinking and problem-solving skills. While often considered "soft skills", the skills humanities majors gain "include persuasive written and oral communication, creative problem-solving, teamwork, decision-making, self-management, and critical analysis".
In the United States
The Humanities Indicators
The Humanities Indicators, unveiled in 2009 by the American Academy of Arts and Sciences, are the first comprehensive compilation of data about the humanities in the United States, providing scholars, policymakers and the public with detailed information on humanities education from primary to higher education, the humanities workforce, humanities funding and research, and public humanities activities. Modeled after the National Science Board's Science and Engineering Indicators, Humanities Indicators are a source of reliable benchmarks to guide analysis of the state of the humanities in the United States.
The Humanities in American Life
The 1980 United States Rockefeller Commission on the Humanities described the humanities in its report, The Humanities in American Life:
Through the humanities we reflect on the fundamental question: What does it mean to be human? The humanities offer clues but never a complete answer. They reveal how people have tried to make moral, spiritual, and intellectual sense of a world where irrationality, despair, loneliness, and death are as conspicuous as birth, friendship, hope, and reason.
In liberal arts education
The Commission on the Humanities and Social Sciences 2013 report, The Heart of the Matter, supports the notion of a broad "liberal arts education", which includes study in disciplines from the natural sciences to the arts as well as the humanities.
Many colleges provide such an education; some require it. The University of Chicago and Columbia University were among the first schools to require an extensive core curriculum in philosophy, literature, and the arts for all students. Other colleges with nationally recognized, mandatory programs in the liberal arts are Fordham University, St. John's College, Saint Anselm College and Providence College. Prominent proponents of liberal arts in the United States have included Mortimer J. Adler and E. D. Hirsch, Jr.
As a major
In 1950, 1.2% of Americans aged 22 had earned a degree in the humanities. By 2010, this figure had risen to 2.6%, more than doubling over a 60-year period. The increase in the number of Americans with humanities degrees is in part due to the overall rise in college enrollment in the United States. In 1940, 4.6% of Americans had a four-year degree, but by 2016, this figure had risen to 33.4%. This means that the total number of Americans with college degrees has increased significantly, resulting in a greater number of people with degrees in the humanities as well. The proportion of degrees awarded in the humanities has nevertheless declined in recent decades, even as the overall number of people with humanities degrees has increased. In 1954, 36 percent of Harvard undergraduates majored in the humanities, but in 2012, only 20 percent took that course of study. As recently as 1993, the humanities accounted for 15% of the bachelor's degrees awarded by colleges and universities in the United States. As of 2022, they accounted for less than 9%.
In the digital age
Researchers in the humanities have developed numerous large- and small-scale digital corpora, such as digitized collections of historical texts, along with the digital tools and methods to analyze them. Their aim is both to uncover new knowledge about corpora and to visualize research data in new and revealing ways. Much of this activity occurs in a field called the digital humanities.
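As a minimal illustration of the kind of computational analysis used in the digital humanities, the following sketch (in Python) counts word frequencies across a small collection of digitized plain-text files; the directory name "corpus" is a hypothetical example, and real digital humanities projects typically rely on far richer methods such as topic modeling, named-entity recognition, and network analysis.

# Minimal sketch of a digital-humanities-style corpus analysis:
# count word frequencies across a set of digitized plain-text files.
# The directory "corpus/" and its contents are hypothetical examples.
import re
from collections import Counter
from pathlib import Path

def word_frequencies(corpus_dir: str) -> Counter:
    """Return a Counter of lowercase word frequencies for all .txt files."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z']+", text))
    return counts

if __name__ == "__main__":
    freqs = word_frequencies("corpus")
    for word, count in freqs.most_common(20):
        print(f"{word}\t{count}")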
STEM
Politicians in the United States currently espouse a need for increased funding of the STEM fields: science, technology, engineering, and mathematics. Federal funding represents a much smaller fraction of funding for the humanities than for other fields such as STEM or medicine. The result has been a decline in the quality of both college and pre-college education in the humanities.
Three-term Louisiana Governor Edwin Edwards acknowledged the importance of the humanities in a 2014 video address to the academic conference Revolutions in Eighteenth-Century Sociability. Edwards said:
Without the humanities to teach us how history has succeeded or failed in directing the fruits of technology and science to the betterment of our tribe of homo sapiens, without the humanities to teach us how to frame the discussion and to properly debate the uses, and the costs, of technology, without the humanities to teach us how to safely debate how to create a more just society with our fellow man and woman, technology and science would eventually default to the ownership of, and misuse by, the most influential, the most powerful, the most feared among us.
In Europe
The value of the humanities debate
The contemporary debate in the field of critical university studies centers on the declining value of the humanities. As in America, there is a perceived decline in interest within higher education policy in research that is qualitative and does not produce marketable products. This threat can be seen in a variety of forms across Europe, but much critical attention has been given to the field of research assessment in particular. For example, the UK Research Excellence Framework has been criticized from across the humanities, and indeed the social sciences, for its assessment criteria. In particular, the notion of "impact" has generated significant debate.
Philosophical history
Citizenship and self-reflection
Since the late 19th century, a central justification for the humanities has been that they aid and encourage self-reflection, a self-reflection that, in turn, helps develop personal consciousness or an active sense of civic duty.
Wilhelm Dilthey and Hans-Georg Gadamer centered the humanities' attempt to distinguish itself from the natural sciences in humankind's urge to understand its own experiences. This understanding, they claimed, ties like-minded people from similar cultural backgrounds together and provides a sense of cultural continuity with the philosophical past.
Scholars in the late 20th and early 21st centuries extended that "narrative imagination" to the ability to understand the records of lived experiences outside of one's own individual social and cultural context. Through that narrative imagination, it is claimed, humanities scholars and students develop a conscience more suited to the multicultural world we live in. That conscience might take the form of a passive one that allows more effective self-reflection or extend into active empathy that facilitates the dispensation of civic duties a responsible world citizen must engage in. There is disagreement, however, on the level of influence humanities study can have on an individual and whether or not the understanding produced in humanistic enterprise can guarantee an "identifiable positive effect on people".
Humanistic theories and practices
There are three major branches of knowledge: natural sciences, social sciences, and the humanities. Technology is the practical extension of the natural sciences, as politics is the extension of the social sciences. Similarly, the humanities have their own practical extension, sometimes called "transformative humanities" (transhumanities) or "culturonics" (Mikhail Epstein's term):
Nature – natural sciences – technology – transformation of nature
Society – social sciences – politics – transformation of society
Culture – human sciences – culturonics – transformation of culture
Technology, politics and culturonics are designed to transform what their respective disciplines study: nature, society, and culture. The field of transformative humanities includes various practices and technologies, for example language planning, the construction of new languages such as Esperanto, and the invention of new artistic and literary genres and movements through manifestos, as with Romanticism, Symbolism, or Surrealism.
Truth and meaning
The divide between humanistic study and natural sciences informs arguments of meaning in humanities as well. What distinguishes the humanities from the natural sciences is not a certain subject matter, but rather the mode of approach to any question. Humanities focuses on understanding meaning, purpose, and goals and furthers the appreciation of singular historical and social phenomena—an interpretive method of finding "truth"—rather than explaining the causality of events or uncovering the truth of the natural world. Apart from its societal application, narrative imagination is an important tool in the (re)production of understood meaning in history, culture and literature.
Imagination, as part of the tool kit of artists or scholars, helps create meaning that invokes a response from an audience. Since a humanities scholar is always within the nexus of lived experiences, no "absolute" knowledge is theoretically possible; knowledge is instead a ceaseless procedure of inventing and reinventing the context a text is read in. Poststructuralism has problematized an approach to the humanistic study based on questions of meaning, intentionality, and authorship. In the wake of the death of the author proclaimed by Roland Barthes, various theoretical currents such as deconstruction and discourse analysis seek to expose the ideologies and rhetoric operative in producing both the purportedly meaningful objects and the hermeneutic subjects of humanistic study. This exposure has opened up the interpretive structures of the humanities to criticism that humanities scholarship is "unscientific" and therefore unfit for inclusion in modern university curricula because of the very nature of its changing contextual meaning.
Pleasure, the pursuit of knowledge and scholarship
Some, like Stanley Fish, have claimed that the humanities can defend themselves best by refusing to make any claims of utility. (Fish may well be thinking primarily of literary study, rather than history and philosophy.) Any attempt to justify the humanities in terms of outside benefits such as social usefulness (say, increased productivity) or in terms of ennobling effects on the individual (such as greater wisdom or diminished prejudice) is ungrounded, according to Fish, and simply places impossible demands on the relevant academic departments. Furthermore, critical thinking, while arguably a result of humanistic training, can be acquired in other contexts. And the humanities no longer provide the kind of social cachet (what sociologists sometimes call "cultural capital") that was helpful for success in Western society before the age of mass education following World War II.
Instead, scholars like Fish suggest that the humanities offer a unique kind of pleasure, a pleasure based on the common pursuit of knowledge (even if it is only disciplinary knowledge). Such pleasure contrasts with the increasing privatization of leisure and instant gratification characteristic of Western culture; it thus meets Jürgen Habermas' requirements for the disregard of social status and rational problematization of previously unquestioned areas necessary for an endeavor which takes place in the bourgeois public sphere. In this argument, then, only the academic pursuit of pleasure can provide a link between the private and the public realm in modern Western consumer society and strengthen that public sphere that, according to many theorists, is the foundation for modern democracy.
Others, like Mark Bauerlein, argue that professors in the humanities have increasingly abandoned proven methods of epistemology ("I care only about the quality of your arguments, not your conclusions") in favor of indoctrination ("I care only about your conclusions, not the quality of your arguments"). The result is that professors and their students adhere rigidly to a limited set of viewpoints and have little interest in, or understanding of, opposing viewpoints. Once they obtain this intellectual self-satisfaction, persistent lapses in learning, research, and evaluation are common.
Romanticization and rejection
Implicit in many of these arguments supporting the humanities are the makings of arguments against public support of the humanities. Joseph Carroll asserts that we live in a changing world, a world where "cultural capital" is replaced with scientific literacy and in which the romantic notion of a Renaissance humanities scholar is obsolete. Such arguments appeal to judgments and anxieties about the essential uselessness of the humanities, especially in an age when it seems vitally important for scholars of literature, history and the arts to engage in "collaborative work with experimental scientists" or even simply to make "intelligent use of the findings from empirical science".
Despite many humanities-based arguments against the humanities, some within the exact sciences have called for their return. In 2017, science popularizer Bill Nye retracted previous claims about the supposed 'uselessness' of philosophy. "People allude to Socrates and Plato and Aristotle all the time, and I think many of us who make those references don't have a solid grounding," he said. "It's good to know the history of philosophy." Scholars such as biologist Scott F. Gilbert argue that it is in fact the increasing predominance, leading to exclusivity, of scientific ways of thinking that needs to be tempered by historical and social context. Gilbert worries that the commercialization that may be inherent in some ways of conceiving science (the pursuit of funding, academic prestige, and so on) needs to be examined externally.
See also
Art school
Discourse analysis
Outline of the humanities (humanities topics)
Great Books
Great Books programs in Canada
Liberal arts
Social sciences
Humanities, arts, and social sciences
Human science
The Two Cultures
List of academic disciplines
Public humanities
STEAM fields
Tinbergen's four questions
Environmental humanities
References
External links
Society for the History of the Humanities
Institute for Comparative Research in Human and Social Sciences (ICR) – Japan (archived 15 April 2016)
The American Academy of Arts and Sciences – US
Humanities Indicators – US
National Humanities Center – US (archived 7 July 2007)
The Humanities Association – UK
National Humanities Alliance
National Endowment for the Humanities – US
Australian Academy of the Humanities
American Academy Commission on the Humanities and Social Sciences
"Games and Historical Narratives" by Jeremy Antley – Journal of Digital Humanities
Film about the Value of the Humanities
Parenting
Parenting or child rearing promotes and supports the physical, cognitive, social, emotional, and educational development of a child from infancy to adulthood. Parenting refers to the intricacies of raising a child, and not exclusively to a biological relationship.
The most common caretakers in parenting are the biological parents of the child in question. However, a caretaker may be an older sibling, step-parent, grandparent, legal guardian, aunt, uncle, other family members, or a family friend. Governments and society may also have a role in child-rearing or upbringing. In many cases, orphaned or abandoned children receive parental care from non-parent or non-blood relations. Others may be adopted, raised in foster care, or placed in an orphanage. Parenting skills vary, and a parent or surrogate with good parenting skills may be referred to as a good parent.
Parenting styles vary by historical period, race/ethnicity, social class, preference, and a few other social features. Additionally, research supports that parental history, both in terms of attachments of varying quality and parental psychopathology, particularly in the wake of adverse experiences, can strongly influence parental sensitivity and child outcomes. Parenting may have long-term impacts on adoptive children as well, as recent research has shown that warm adoptive parenting reduces internalizing and externalizing problems of the adoptive children over time.
Factors that affect decisions
Social class, wealth, culture and income have a very strong impact on what methods of child rearing parents use. Cultural values play a major role in how a parent raises their child. However, parenting is always evolving, as times, cultural practices, social norms, and traditions change. Studies of these factors confirm their influence on child-rearing decisions.
In psychology, the parental investment theory suggests that basic differences between males and females in parental investment have great adaptive significance and lead to gender differences in mating propensities and preferences.
Styles
A parenting style is indicative of the overall emotional climate in the home. Developmental psychologist Diana Baumrind proposed three main parenting styles in early child development: authoritative, authoritarian, and permissive. These parenting styles were later expanded to four to include an uninvolved style. These four styles involve combinations of acceptance and responsiveness, and also involve demand and control. Research has found that parenting style is significantly related to a child's subsequent mental health and well-being. In particular, authoritative parenting is positively related to mental health and satisfaction with life, and authoritarian parenting is negatively related to these variables. With authoritarian and permissive parenting on opposite sides of the spectrum, most conventional modern models of parenting fall somewhere in between. Although it is influential, Baumrind's typology has received significant criticism for containing overly broad categorizations and an imprecise and overly idealized description of authoritative parenting.
Authoritative parenting
Described by Baumrind as the "just right" style, it combines medium level demands on the child and a medium level responsiveness from the parents. Authoritative parents rely on positive reinforcement and infrequent use of punishment. Parents are more aware of a child's feelings and capabilities and support the development of a child's autonomy within reasonable limits. There is a give-and-take atmosphere involved in parent-child communication, and both control and support are balanced. Some research has shown that this style of parenting is more beneficial than the too-hard authoritarian style or the too-soft permissive style. These children score higher in terms of competence, mental health, and social development than those raised in permissive, authoritarian, or neglectful homes. However, Dr. Wendy Grolnick has critiqued Baumrind's use of the term "firm control" in her description of authoritative parenting and argued that there should be clear differentiation between coercive power assertion (which is associated with negative effects on children) and the more positive practices of structure and high expectations.
Authoritarian parenting
Authoritarian parents are very rigid and strict. High demands are placed on the child, but there is little responsiveness to them. Parents who practice authoritarian-style parenting have a non-negotiable set of rules and expectations strictly enforced and require rigid obedience. When the rules are not followed, punishment is often used to promote and ensure future compliance. There is usually no explanation of punishment except that the child is in trouble for breaking a rule. This parenting style is strongly associated with corporal punishment, such as spanking. This type of parenting seems to be seen more often in working-class families than in the middle class. In 1983, Diana Baumrind found that children raised in an authoritarian-style home were less cheerful, moodier, and more vulnerable to stress. In many cases, these children also demonstrated passive hostility. This parenting style can negatively affect educational success and career path, while a firm and reassuring parenting style has a positive impact.
Permissive parenting
Permissive parenting has become a more popular parenting method for middle-class families than working-class families roughly since the end of WWII. In these settings, a child's freedom and autonomy are highly valued, and parents rely primarily on reasoning and explanation. Parents are undemanding, and thus there tends to be little if any punishment or explicit rules in this parenting style. These parents say that their children are free from external constraints and tend to be highly responsive to whatever it is that the child wants. Children of permissive parents are generally happy but sometimes show low levels of self-control and self-reliance because they lack structure at home. Author Alfie Kohn criticized the study and categorization of permissive parenting, arguing that it serves to "blur the differences between 'permissive' parents who were really just confused and those who were deliberately democratic."
Uninvolved parenting
An uninvolved or neglectful parenting style is when parents are often emotionally or physically absent. They have little to no expectations of the child and regularly have no communication. They are not responsive to a child's needs and have little to no behavioral expectations. They may consider their children to be "emotionally priceless" yet may not engage with them, believing that they are giving the child personal space. If present, they may provide what the child needs for survival with little to no engagement. There is often a large gap between parents and children with this parenting style. Children with little or no communication with their own parents tend to be victimized by other children and may exhibit deviant behavior themselves. Children of uninvolved parents suffer in social competence, academic performance, and psychosocial development, and show more problematic behavior.
Intrusive parenting
Intrusive parenting is when parents use "parental control and inhibition of adolescents' thoughts, feelings, and emotional expression through the use of love withdrawal, guilt induction, and manipulative tactics" to protect them from possible pitfalls, without realizing that this can disturb the adolescent's development and growth. Intrusive parents may set unrealistic expectations for their children by overestimating their intellectual capability and underestimating their physical or developmental capability, for example by enrolling them in more extracurricular activities or in certain classes without understanding the child's passions; this may eventually lead to children not taking ownership of activities or developing behavioral problems. Children, especially adolescents, might become victims and be "unassertive, avoid confrontation, being eager to please others, and suffer from low self-esteem." Intrusive parents may compare their children to others, such as friends and family, and force their child to be codependent, to the point where the children feel unprepared when they go out into the world. Research has shown that this parenting style can lead to "greater under-eating behaviors, risky cyber behaviors, substance use, and depressive symptoms among adolescents."
Unconditional parenting
Unconditional parenting refers to a parenting approach that is focused on the whole child, emphasizes working with a child to solve problems, and views parental love as a gift. It contrasts with conditional parenting, which focuses on the child's behavior, emphasizes controlling children using rewards and punishments, and views parental love as a privilege to be earned. The concept of unconditional parenting was popularized by author Alfie Kohn in his 2005 book Unconditional Parenting: Moving from Rewards and Punishments to Love and Reason. Kohn differentiates unconditional parenting from what he sees as the caricature of permissive parenting by arguing that parents can be anti-authoritarian and opposed to exerting control while also recognizing the value of respectful adult guidance and a child's need for non-coercive structure in their lives.
Trustful parenting
Trustful parenting is a child-centered parenting style in which parents trust their children to make decisions, play and explore on their own, and learn from their own mistakes. Research professor Peter Gray argues that trustful parenting was the dominant parenting style in prehistoric hunter-gatherer societies. Gray contrasts trustful parenting with "directive-domineering" parenting, which emphasizes controlling children to train them in obedience (historically involving using child labor to teach subservience to lords and masters), and "directive-protective" parenting, which involves controlling children to protect them from harm. Gray argues that the directive-domineering approach became the predominant parenting style with the spread of agriculture and industry, while the directive-protective approach took over as the dominant approach in the late 20th century.
Practices
A parenting practice is a specific behavior that a parent uses in raising a child. These practices are used to socialize children. Kuppens et al. found that "researchers have identified overarching parenting dimensions that reflect similar parenting practices, mostly by modeling the relationships among these parenting practices using factor analytic techniques." For example, many parents read aloud to their offspring in the hopes of supporting their linguistic and intellectual development. In cultures with strong oral traditions, such as Indigenous American communities and New Zealand Maori communities, storytelling is a critical parenting practice for children.
Parenting practices reflect the cultural understanding of children. Parents in individualistic countries like Germany spend more time engaged in face-to-face interaction with babies and more time talking to the baby about the baby. Parents in more communal cultures, such as West African cultures, spend more time talking to the baby about other people and more time with the baby facing outwards so that the baby sees what the mother sees.
Skills and behaviors
Parenting skills and behaviors assist parents in leading children into healthy adulthood and development of the child's social skills. The cognitive potential, social skills, and behavioral functioning a child acquires during the early years are positively correlated with the quality of their interactions with their parents.
According to the Canadian Council on Learning, children benefit (or avoid poor developmental outcomes) when their parents:
Communicate truthfully about events: Authenticity from parents who explain can help their children understand what happened and how they are involved;
Maintain consistency: Parents who regularly institute routines can see benefits in their children's behavioral patterns;
Utilize resources available to them, reaching out into the community and building a supportive social network;
Take an interest in their child's educational and early developmental needs (e.g., play that enhances socialization, autonomy, cohesion, calmness, and trust); and
Keep open communication lines about what their child is seeing, learning, and doing, and how those things are affecting them.
Parenting skills are widely thought to be naturally present in parents; however, there is substantial evidence to the contrary. Those who come from a negative or vulnerable childhood environment frequently (and often unintentionally) mimic their parents' behavior during interactions with their own children. Parents with an inadequate understanding of developmental milestones may also demonstrate problematic parenting. Parenting practices are of particular importance during marital transitions like separation, divorce, and remarriage; if children fail to adequately adjust to these changes, they are at risk of negative outcomes (e.g. increased rule-breaking behavior, problems with peer relationships, and increased emotional difficulties).
Research classifies competence and skills required in parenting as follows:
Parent-child relationship skills: quality time spent, positive communications, and delighted show of affection.
Encouraging desirable behavior: praise and encouragement, nonverbal attention, facilitating engaging activities.
Teaching skills and behaviors: being a good example, incidental teaching, human communication of the skill with role-playing and other methods, communicating logical incentives and consequences.
Managing misbehavior: establishing firm ground rules and limits, directing discussion, providing clear and calm instructions, communicating and enforcing appropriate consequences, using restrictive tactics like quiet time and time out with an authoritative stance rather than an authoritarian one.
Anticipating and planning: advanced planning and preparation for readying the child for challenges, finding out engaging and age-appropriate developmental activities, preparing the token economy for self-management practice with guidance, holding follow-up discussions, identifying possible negative developmental trajectories.
Self-regulation skills: monitoring behaviors (own and children's), setting developmentally appropriate goals, evaluating strengths and weaknesses and setting practice tasks, monitoring and preventing internalizing and externalizing behaviors.
Mood and coping skills: reframing and discouraging unhelpful thoughts (diversions, goal orientation, and mindfulness), stress and tension management (own and children's), developing personal coping statements and plans for high-risk situations, building mutual respect and consideration between members of the family through collaborative activities and rituals.
Partner support skills: improving personal communication, giving and receiving constructive feedback and support, avoiding negative family interaction styles, supporting and finding hope in problems for adaptation, leading collaborative problem solving, promoting relationship happiness and cordiality.
Consistency is considered the "backbone" of positive parenting skills and "overprotection" the weakness.
The Arbinger Institute adds to these skills and methods of parenting with what the authors of The Parenting Pyramid claim are methods to "parent for things to go right": in other words, steps that should be taken to ensure that good, positive relationships exist in the home, which can help children be more willing to listen. Their methods are described as The Parenting Pyramid, which, starting at the foundational level and working up to the top, consists of:
Ways of being
Relationship with spouse
Relationship with child
Teaching
and finally, Corrections
The idea is that when parents focus on establishing their homes and parenting styles in this order, any correction of a child's behavior will come from a better place, and the children may therefore be more receptive to such feedback than if a parent attempts to correct behaviors before focusing on the previous steps.
Parent training
Parent psychosocial health can have a significant impact on the parent-child relationship. Group-based parent training and education programs have proven to be effective at improving short-term psychosocial well-being for parents. There are many different types of training parents can take to support their parenting skills, including Parent-Child Interaction Therapy (PCIT), Parent Management Training (PMT), the Positive Parenting Program (Triple P), The Incredible Years, and Behavioral and Emotional Skills Training (BEST). PCIT works with both parents and children, teaching skills to interact more positively and productively. PMT focuses on children aged 3–13, with the parents as the main trainees; they are taught skills to help deal with challenging behaviors from their children. Triple P focuses on equipping parents with the information they need to increase confidence and self-sufficiency in managing their children's behavior. The Incredible Years covers children from infancy to age 12 and is broken into small-group-based training in different areas. BEST introduces effective behavior management techniques in one day rather than over the course of a few weeks. Courses are offered to families to provide guidance on additional needs, behavioral guidelines, communication, and many other aspects of learning how to be a parent.
Cultural values
Parents around the world want what they believe is best for their children. However, parents in different cultures have different ideas of what is best. For example, parents in hunter–gatherer societies or those who survive through subsistence agriculture are likely to promote practical survival skills from a young age. Many such cultures begin teaching children to use sharp tools, including knives, before their first birthdays. In some Indigenous American communities, child work provides children the opportunity to absorb cultural values of collaborative participation and prosocial behavior through observation and activity alongside adults. These communities value respect, participation, and non-interference, the Cherokee principle of respecting autonomy by withholding unsolicited advice. Indigenous American parents also try to encourage curiosity in their children via a permissive parenting style that enables them to explore and learn through observation of the world.
Differences in cultural values cause parents to interpret the same behaviors in different ways. For instance, European Americans prize intellectual understanding, especially in a narrow "book learning" sense, and believe that asking questions is a sign of intelligence. Italian parents value social and emotional competence and believe that curiosity demonstrates good interpersonal skills. Dutch parents, however, value independence, long attention spans, and predictability; in their eyes, asking questions is a negative behavior, signifying a lack of independence.
Even so, parents around the world share specific prosocial behavioral goals for their children. Hispanic parents value respect and emphasize putting family above the individual. Parents in East Asia prize order in the household above all else. In some cases, this gives rise to high levels of psychological control and even manipulation on the part of the head of the household. The Kipsigis people of Kenya value children who are innovative and wield that intelligence responsibly and helpfully—a behavior they call ng/om. Other cultures, such as in Sweden and Spain, value sociality and happiness as well.
Indigenous American cultures
It is common for parents in many Indigenous American communities to use different parenting tools such as storytelling (including myths), Consejos (Spanish for "advice"), educational teasing, nonverbal communication, and observational learning to teach their children important values and life lessons.
Storytelling is a way for Indigenous American children to learn about their identity, community, and cultural history. Indigenous myths and folklore often personify animals and objects, reaffirming the belief that everything possesses a soul and deserves respect. These stories also help preserve the language and are used to reflect certain values or cultural histories.
The Consejo is a narrative form of advice-giving. Rather than directly telling the child what to do in a particular situation, the parent might instead tell a story about a similar situation. The main character in the story is intended to help the child see their decision's implications without directly deciding for them; this teaches the child to be decisive and independent while still providing some guidance.
The playful form of teasing is a parenting method used in some Indigenous American communities to keep children out of danger and guide their behavior. This parenting strategy uses stories, fabrications, or empty threats to guide children in making safe, intelligent decisions. For example, a parent may tell a child that there is a monster that jumps on children's backs if they walk alone at night. This explanation can help keep the child safe because instilling that fear creates greater awareness and lessens the likelihood that they will wander alone into trouble.
In Navajo families, a child's development is partly focused on the importance of "respect" for all things. "Respect" consists of recognizing the significance of one's relationship with other things and people in the world. Children largely learn about this concept via nonverbal communication between parents and other family members. For example, children are initiated at an early age into the practice of an early morning run under any weather conditions. On this run, the community uses humor and laughter with each other, without directly including the child—who may not wish to get up early and run—to encourage the child to participate and become an active member of the community. Parents also promote participation in the morning runs by placing their child in the snow and having them stay longer if they protest.
Indigenous American parents often incorporate children into everyday life, including adult activities, allowing the child to learn through observation. This practice is known as LOPI, Learning by Observing and Pitching In, where children are integrated into all types of mature daily activities and encouraged to observe and contribute to the community. This inclusion as a parenting tool promotes both community participation and learning.
One notable example appears in some Mayan communities: young girls are not permitted around the hearth for an extended period of time, since corn is sacred. Although this is an exception to their cultural preference for incorporating children into activities, including cooking, it is a strong example of observational learning. Mayan girls can only watch their mothers making tortillas for a few minutes at a time, but the sacredness of the activity captures their interest. They will then go and practice their mother's movements on other objects, such as kneading thin pieces of plastic like a tortilla. From this practice, when a girl comes of age, she is able to sit down and make tortillas without having ever received any explicit verbal instruction.
However, in many cases oppressive circumstances such as forced conversion, land loss, and displacement led to diminishment of traditional Native American parenting techniques.
Immigrants in the United States: Ethnic-racial socialization
Due to the increasing racial and ethnic diversity in the United States, ethnic-racial socialization research has gained some attention. Parental ethnic-racial socialization is a way of passing down cultural resources to support the psychosocial wellness of children of color. The goals of ethnic-racial socialization are to pass on a positive view of one's ethnic group and to help children cope with racism. A meta-analysis of published research on ethnic-racial socialization found that it positively affects psychosocial well-being. This meta-analytic review focuses on research relevant to four indicators of psychosocial skills and how they are influenced by developmental stage, race and ethnicity, research designs, and the differences between parent and child self-reports. The dimensions of ethnic-racial socialization that are considered when looking for correlations with psychosocial skills are cultural socialization, preparation for bias, promotion of mistrust, and egalitarianism.
Ethnic-racial socialization dimensions are defined as follows: cultural socialization is the process of passing down cultural customs; preparation for bias ranges from positive to negative reactions to racism and discrimination; promotion of mistrust encourages wariness when dealing with other races; and egalitarianism puts similarities between races first. Psychosocial competencies are defined as follows: self-perceptions involve perceived beliefs about academic and social capabilities; interpersonal relationships deal with the quality of relationships; externalizing behaviors deal with observable troublesome behavior; and internalizing behaviors deal with the regulation of emotions. The multiple ways these domains and competencies interact show small correlations between ethnic-racial socialization and psychosocial wellness, but this parenting practice needs further research.
This meta-analysis showed that developmental stage affects how children perceive ethnic-racial socialization. Cultural socialization practices appear to affect children similarly across developmental stages, except for preparation for bias and promotion of mistrust, which are encouraged for older children. Existing research shows that ethnic-racial socialization helps protect African Americans against discrimination. Cross-sectional studies were predicted to have greater effect sizes because correlations are inflated in such studies. Parental reports of ethnic-racial socialization are colored by parents' "intentions", so child reports tend to be more accurate.
Among other conclusions derived from this meta-analysis, cultural socialization and self-perceptions had a small positive correlation, cultural socialization and promotion of mistrust had a small negative correlation, and both cultural socialization and preparation for bias were positively associated with interpersonal relationships. In regard to developmental stages, ethnic-racial socialization had a small but positive correlation with self-perceptions during childhood and early adolescence. Based on study design, there were no significant differences: both cross-sectional and longitudinal studies showed small positive correlations between ethnic-racial socialization and self-perceptions. With regard to reporter differences, ethnic-racial socialization was positively correlated with internalizing behaviors and interpersonal relationships, and these two correlations showed greater effect sizes in child reports than in parent reports.
The meta-analysis of previous research shows only correlations, so experimental studies are needed to establish causation among the different domains and dimensions. Children's behavior, and the adaptation of parenting to that behavior, may indicate a bidirectional effect that could also be addressed by experimental study. There is evidence that ethnic-racial socialization can help children of color develop social-emotional skills for navigating racism and discrimination, but further research is needed to increase the generalizability of existing findings.
Across the lifespan
Pre-pregnancy
Family planning is the decision-making process surrounding whether to become parents or not, and when the right time would be, including planning, preparing, and gathering resources. Prospective parents may assess (among other matters) whether they have access to sufficient financial resources, whether their family situation is stable, and whether they want to undertake the responsibility of raising a child. Worldwide, about 40% of all pregnancies are not planned, and more than 30 million babies are born each year as a result of unplanned pregnancies.
Reproductive health and preconception care affect pregnancy, reproductive success, and the physical and mental health of both mother and child. A woman who is underweight, whether due to poverty, eating disorders, or illness, is less likely to have a healthy pregnancy and give birth to a healthy baby than a woman who is healthy. Similarly, a woman who is obese has a higher risk of difficulties, including gestational diabetes. Other health problems, such as infections and iron-deficiency anemia, can be detected and corrected before conception.
Pregnancy and prenatal parenting
During pregnancy, the unborn child is affected by many decisions made by the parents, particularly choices linked to their lifestyle. The health, activity level, and nutrition available to the mother can affect the child's development before birth. Some mothers, especially in relatively wealthy countries, overeat and spend too much time resting. Other mothers, especially if they are poor or abused, may be overworked and may not be able to eat enough, or may not be able to afford healthful foods with sufficient iron, vitamins, and protein, for the unborn child to develop properly.
Newborns and infants
Newborn parenting is where the responsibilities of parenthood begin. A newborn's basic needs are food, sleep, comfort, and cleaning, which the parent provides. An infant's only form of communication is crying; although it has been argued that infants have distinct cries for hunger and pain, this claim has largely been refuted. Newborns and young infants require feedings every few hours, which is disruptive to adult sleep cycles. They respond enthusiastically to soft stroking, cuddling, and caressing. Gentle rocking back and forth often calms a crying infant, as do massages and warm baths. Newborns may comfort themselves by sucking their thumb or by using a pacifier. The need to suckle is instinctive and allows newborns to feed. Breastfeeding is recommended by all major infant health organizations. If breastfeeding is not possible or desired, bottle feeding is a common alternative. Other alternatives include feeding breastmilk or formula with a cup, spoon, feeding syringe, or nursing supplement.
The forming of attachments is considered the foundation of the infant's capacity to form and conduct relationships throughout life. Attachment is not the same as love or affection, although they often go together. Attachments develop immediately, and a lack of attachment or a seriously disrupted attachment has the potential to cause severe damage to a child's health and well-being. Physically, one may not see symptoms or indications of a disorder, but the child may be affected emotionally. Studies show that children with secure attachments have the ability to form successful relationships, express themselves on an interpersonal basis, and have higher self-esteem. Conversely, children who have neglectful or emotionally unavailable caregivers can exhibit behavioral problems such as post-traumatic stress disorder or oppositional defiant disorder. Oppositional defiant disorder is a pattern of disobedient and rebellious behavior toward authority figures.
Toddlers
Toddlers are small children between 12 and 36 months old who are much more active than infants and become challenged with learning how to do simple tasks by themselves. At this stage, parents are heavily involved in showing the small child how to do things rather than just doing things for them; it is normal for the toddler to mimic the parents. Toddlers need help to build their vocabulary, increase their communication skills, and manage their emotions. Toddlers will also begin to understand social etiquette, such as being polite and taking turns.
Toddlers are very curious about the world around them and are eager to explore it. They seek greater independence and responsibility and may become frustrated when things do not go the way that they want or expect. Tantrums begin at this stage, which is sometimes referred to as the 'Terrible Twos'. Tantrums are often caused by the child's frustration with a particular situation and sometimes occur simply because the child is not able to communicate properly. Parents of toddlers are expected to help guide and teach the child, establish basic routines (such as washing hands before meals or brushing teeth before bed), and increase the child's responsibilities. It is also normal for toddlers to be frequently frustrated; frustration is an essential step in their development. They learn through experience, trial, and error, which means that they need to experience being frustrated when something does not work for them in order to move on to the next stage. When frustrated, toddlers often misbehave with actions like screaming, hitting, or biting. Parents need to be careful when reacting to such behaviors; giving threats or punishments is usually not helpful and might only make the situation worse. Research groups led by Daniel Schechter, Alytia Levendosky, and others have shown that parents with histories of maltreatment and violence exposure often have difficulty helping their toddlers and preschool-age children with the very same emotionally dysregulated behaviors, which can remind traumatized parents of their adverse experiences and associated mental states.
Regarding gender differences in parenting, 2014 data from the US indicate that, on an average day, among adults living in households with children under age 6, women spent 1.0 hour providing physical care (such as bathing or feeding a child) to household children, whereas men spent 23 minutes providing such care.
Child
Younger children start to become more independent and begin to build friendships. They are able to reason and can make their own decisions in many hypothetical situations. Young children demand constant attention but gradually learn how to deal with boredom and begin to be able to play independently. They enjoy helping and also feeling useful and capable. Parents can assist their children by encouraging social interactions and modeling proper social behaviors. A large part of learning in the early years comes from being involved in activities and household duties. Parents who observe their children in play or join with them in child-driven play have the opportunity to glimpse into their children's world, learn to communicate more effectively with their children, and are given another setting to offer gentle, nurturing guidance. Parents also teach their children health, hygiene, and eating habits through instruction and by example.
Parents are expected to make decisions about their child's education. Parenting styles in this area diverge greatly at this stage, with some parents choosing to become heavily involved in arranging organized activities and early learning programs, while others let the child develop with few organized activities.
Children begin to learn responsibility and consequences for their actions with parental assistance. Some parents provide a small allowance that increases with age to help teach children the value of money and how to be responsible.
Parents who are consistent and fair with their discipline, who openly communicate and offer explanations to their children, and who do not neglect the needs of their children in any way often find they have fewer problems with their children as they mature.
When child conduct problems are encountered, behavioral and cognitive-behavioral group-based parenting interventions have been found to be effective at improving child conduct, parenting skills, and parental mental health.
Adolescents
Parents often feel isolated and alone when parenting adolescents. Adolescence can be a time of high risk for children, where newfound freedoms can result in decisions that drastically open up or close off life opportunities. There are also large changes that occur in the brain during adolescence; the emotional center of the brain is now fully developed, but the rational frontal cortex has not matured fully and still is not able to keep all of those emotions in check. Adolescents tend to increase the amount of time spent with peers of the opposite gender; however, they still maintain the amount of time spent with those of the same gender—and do this by decreasing the amount of time spent with their parents.
Although adolescents look to peers and adults outside the family for guidance and models for how to behave, parents can remain influential in their development. Studies have shown that parents can have a significant impact, for instance, on how much teens drink. Other studies show that parents' continued presence provides stability and nurture to their developing adolescents.
During adolescence children begin to form their identity and start to test and develop the interpersonal and occupational roles that they will assume as adults. Therefore, it is important that parents treat them as young adults. Parental issues at this stage of parenting include dealing with rebelliousness related to a greater desire to partake in risky behaviors. In order to prevent risky behaviors, it is important for the parents to build a trusting relationship with their children. This can be achieved through behavioral control, parental monitoring, consistent discipline, parental warmth and support, inductive reasoning, and strong parent-child communication.
When a trusting relationship is built up, adolescents are more likely to approach their parents for help when faced with negative peer pressure. Helping children build a strong foundation will ultimately help them resist negative peer pressure. Not only will a positive relationship between adolescent and parent benefit when faced with peer pressure, it will help with identity-processing in early adolescents.
Research by Berzonsky et al. found that adolescents who were open with and trusting of their parents were given more freedom, and their parents were less likely to track them or control their behavior.
Adults
Parenting does not usually end when a child turns 18. Support may be needed in a child's life well beyond the adolescent years and can continue into middle and later adulthood. Parenting can be a lifelong process. Parents may provide financial support to their adult children, which can also include providing an inheritance after death. The life perspective and wisdom given by a parent can benefit their adult children in their own lives. Becoming a grandparent is another milestone and has many similarities with parenting. Roles can be reversed in some ways when adult children become caregivers to their elderly parents.
Assistance
Parents may receive assistance with caring for their children through child care programs.
Article 25.2 of the Universal Declaration of Human Rights declares that: "Motherhood and childhood are entitled to special care and assistance. All children, whether born in or out of wedlock, shall enjoy the same social protection."
Childbearing and happiness
Data from the British Household Panel Survey and the German Socio-Economic Panel suggest that having up to two children increases happiness in the years around the birth, and mostly only for those who have postponed childbearing. However, having a third child has not been shown to increase happiness. Data from a private American opinion survey, the Success Index, suggest that parenting is deemed important, especially by those aged 65 and older compared to those aged 18 to 35. According to the survey, being a parent is now an integral part of the new American Dream.
See also
Childhood