Ekistics
Ekistics is the science of human settlements including regional, city, community planning and dwelling design. Its major incentive was the emergence of increasingly large and complex conurbations, tending even to a worldwide city. The study involves every kind of human settlement, with particular attention to geography, ecology, human psychology, anthropology, culture, politics, and occasionally aesthetics.
As a scientific mode of study, ekistics currently relies on statistics and description, organized in five ekistic elements or principles: nature, anthropos, society, shells, and networks. It is generally a more scientific field than urban planning, and has considerable overlap with some of the less restrained fields of architectural theory.
In application, conclusions are drawn aimed at achieving harmony between the inhabitants of a settlement and their physical and socio-cultural environments.
Etymology
The term ekistics was coined by Constantinos Apostolos Doxiadis in 1942. The word derives from the ancient Greek adjective οἰκιστικός, more particularly from its neuter plural οἰκιστικά, which referred to the founding and settling of a dwelling, city or colony. That adjective was formed from οἰκιστής, the noun for the founder of a settlement or colony, and may be regarded as deriving indirectly from οἴκισις, a noun meaning settlement or colonization (a sense used by Plato). All these words grew from the verb οἰκίζω, "to settle", and were ultimately derived from the noun οἶκος, "house" or "dwelling".
The Shorter Oxford English Dictionary contains a reference to an ecist, oekist or oikist, defining him as: "the founder of an ancient Greek ... colony". The English equivalent of oikistikē is ekistics (a noun). In addition, the adjectives ekistic and ekistical, the adverb ekistically, and the noun ekistician are now also in current use.
Scope
In terms of outdoor recreation, the term ekistic relationship is used to describe one's relationship with the natural world and how they view the resources within it.
The notion of ekistics implies that understanding the interaction between and within human groups (infrastructure, agriculture, shelter, work) in conjunction with their environment directly affects their well-being, both individual and collective. The subject begins to elucidate the ways in which collective settlements form and how they interrelate, and in doing so how humans "fit" into their species, Homo sapiens, and how Homo sapiens "should" live in order to realize its potential. Ekistics argues in some cases that, for human settlements to expand efficiently and economically, the way in which villages, towns, cities and metropolises are formed must be reorganized.
As Doxiadis put it, "... This field (ekistics) is a science, even if in our times it is usually considered a technology and an art, without the foundations of a science - a mistake for which we pay very heavily." Having very successfully recorded the destruction of ekistic wealth in Greece during World War II, Doxiadis became convinced that human settlements are amenable to systematic investigation. Aware of the unifying power of systems thinking, and particularly of the biological and evolutionary reference models used by many famous biologist-philosophers of his generation, especially Sir Julian Huxley (1887–1975), Theodosius Dobzhansky (1900–75), Dennis Gabor (1900–79), René Dubos (1901–82), George G. Simpson (1902–84), and Conrad Waddington (1905–75), Doxiadis used the biological model to describe the "ekistic behavior" of anthropos (the five principles) and the evolutionary model to explain the morphogenesis of human settlements (the eleven forces, the hierarchical structure of human settlements, dynapolis, ecumenopolis). Finally, he formulated a general theory which considers human settlements as living organisms capable of evolution, an evolution that might be guided by Man using "ekistic knowledge".
Units
Doxiadis believed that the conclusion from biological and social experience was clear: to avoid chaos we must organize our system of life from anthropos (individual) to ecumenopolis (global city) in hierarchical levels, represented by human settlements. So he articulated a general hierarchical scale with fifteen levels of ekistic units:
anthropos – 1
room – 2
house – 5
housegroup (hamlet) – 40
small neighborhood (village) – 250
neighborhood – 1,500
small polis (town) – 10,000
polis (city) – 75,000
small metropolis – 500,000
metropolis – 4 million
small megalopolis – 25 million
megalopolis – 150 million
small eperopolis – 750 million
eperopolis – 7.5 billion
ecumenopolis – 50 billion
The population figures above are for Doxiadis' ideal future ekistic units for the year 2100, at which time he estimated (in 1968) that Earth would achieve zero population growth at a population of 50,000,000,000 with human civilization being powered by fusion energy.
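The scale can also be written down as a small data table. The following Python sketch simply restates the unit names and target populations listed above and prints the multiplier between successive levels; the code itself is illustrative and not part of Doxiadis' work.

```python
# Doxiadis' fifteen ekistic units with their ideal target populations
# (figures taken from the list above, projected for the year 2100).
EKISTIC_UNITS = [
    ("anthropos", 1), ("room", 2), ("house", 5), ("housegroup", 40),
    ("small neighborhood", 250), ("neighborhood", 1_500),
    ("small polis", 10_000), ("polis", 75_000),
    ("small metropolis", 500_000), ("metropolis", 4_000_000),
    ("small megalopolis", 25_000_000), ("megalopolis", 150_000_000),
    ("small eperopolis", 750_000_000), ("eperopolis", 7_500_000_000),
    ("ecumenopolis", 50_000_000_000),
]

# Print each unit with the multiplier relative to the level below it.
for (name, pop), (_, prev) in zip(EKISTIC_UNITS[1:], EKISTIC_UNITS):
    print(f"{name:20s} {pop:>14,d}  (x{pop / prev:.1f})")
```

Above the house level, the printed multipliers cluster roughly between five and eight, which reflects the approximately logarithmic character of the hierarchy.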
Publications
Ekistics and the New Habitat is a journal that was printed from 1957 to 2006; in 2019 it began calling for new papers to be published online.
Ekistics: An Introduction to the Science of Human Settlements is a 1968 book by Konstantinos Doxiadis.
See also
Arcology
Conurbation
Consolidated city-county
Global city
Human ecosystem
Megacity
Megalopolis (term)
Metropolitan area
Permaculture
Principles of intelligent urbanism
Further reading
Doxiadis, Konstantinos. Ekistics: An Introduction to the Science of Human Settlements. 1968.
References
External links
The Institute of Ekistics
World Society for Ekistics
Ekistic Units
City of the Future
Urban studies and planning terminology
Architectural terminology
Fundamental rights
Fundamental rights are a group of rights that have been recognized as requiring a high degree of protection from encroachment. These rights are specifically identified in a constitution, or have been found under due process of law. The United Nations' Sustainable Development Goal 17, established in 2015, underscores the link between promoting human rights and sustaining peace.
List of important rights
Some universally recognised rights that are seen as fundamental, i.e., contained in the United Nations Universal Declaration of Human Rights, the U.N. International Covenant on Civil and Political Rights, or the U.N. International Covenant on Economic, Social and Cultural Rights, include the following:
Self-determination
Liberty
Due process of law
Freedom of movement
Right to privacy
Freedom of thought
Freedom of conscience
Freedom of religion
Freedom of expression
Freedom of assembly
Freedom of association
Specific jurisdictions
Canada
In Canada, the Charter of Rights and Freedoms outlines four Fundamental Freedoms. These are freedom of:
Conscience and religion
Thought, belief, opinion, and expression, including freedom of the press and other media of communication
Peaceful assembly
Association.
Europe
On a European level, fundamental rights are protected in three laws:
The Charter of Fundamental Rights of the European Union
The Fundamental Freedoms of the European Union
The European Convention on Human Rights
Japan
In Japan, fundamental rights protected by the Constitution of Japan include:
Civil liberties, including the right to liberty and the right to freedom of expression, thought, conscience and religion
Social rights, including the right to receive education and the right to maintain the minimum standards of wholesome and cultured living
India
There are six fundamental rights recognized in the Constitution of India:
the right to equality (Articles 14-18):
Article 14: Equality before law
Article 15: Prohibition of discrimination on grounds of religion, race, caste, sex, or place of birth
Article 16: Equality of opportunity in matters of public employment
Article 17: Abolition of untouchability
Article 18: Abolition of titles
the right to freedom (Articles 19-22):
Article 19: Protection of certain rights regarding freedom of speech, expression, assembly, association, movement, and residence
Article 20: Protection in respect of conviction for offenses
Article 21: Protection of life and personal liberty
Article 21A: Right to education
the right against exploitation (Articles 23-24):
Article 23: Prohibition of trafficking in human beings and forced labor
Article 24: Prohibition of child labor
the right to freedom of religion (Articles 25-28):
Article 25: Freedom of conscience and free profession, practice, and propagation of religion
Article 26: Freedom to manage religious affairs
Article 27: Freedom from payment of taxes for promotion of any particular religion
Article 28: Freedom from attending religious instruction or worship in certain educational institutions
cultural and educational rights (Articles 29-30):
Article 29: Protection of interests of minorities
Article 30: Right of minorities to establish and administer educational institutions
the right to constitutional remedies (Article 32 and 226):
Article 32: Right to move the Supreme Court for the enforcement of Fundamental Rights
Article 226: Power of High Courts to issue certain writs for the enforcement of Fundamental Rights
United States
Though many fundamental rights are also widely considered human rights, the classification of a right as "fundamental" invokes specific legal tests courts use to determine the constrained conditions under which the United States government and various state governments may limit these rights. In such legal contexts, courts determine whether rights are fundamental by examining the historical foundations of those rights and by determining whether their protection is part of a longstanding tradition. In particular, courts look to whether the right is "so rooted in the traditions and conscience of our people as to be ranked as fundamental." Individual states may guarantee other rights as fundamental; that is, states may add to fundamental rights through legislative processes, but may never diminish them and may only rarely infringe upon them. Any such attempt, if challenged, may involve a "strict scrutiny" review in court.
In American constitutional law, fundamental rights have special significance under the U.S. Constitution. Those rights enumerated in the U.S. Constitution are recognized as "fundamental" by the U.S. Supreme Court. According to the Supreme Court, enumerated rights that are incorporated are so fundamental that any law restricting such a right must both serve a compelling state purpose and be narrowly tailored to that compelling purpose.
The original interpretation of the United States Bill of Rights was that only the Federal Government was bound by it. In 1833, the U.S. Supreme Court in Barron v. Baltimore unanimously ruled that the Bill of Rights did not apply to the states. During post-Civil War Reconstruction, the Fourteenth Amendment was adopted in 1868 to rectify this condition, and to specifically apply the whole of the Constitution to all U.S. states. In 1873, the Supreme Court essentially nullified the key language of the Fourteenth Amendment that guaranteed all "privileges or immunities" to all U.S. citizens in the Slaughter-House Cases. This decision and others allowed post-emancipation racial discrimination to continue largely unabated.
Later Supreme Court justices found a way around these limitations without overturning the Slaughter-House precedent: they created a concept called Selective Incorporation. Under this legal theory, the court used the remaining Fourteenth Amendment protections for equal protection and due process to "incorporate" individual elements of the Bill of Rights against the states. "The test usually articulated for determining fundamentality under the Due Process Clause is that the putative right must be 'implicit in the concept of ordered liberty', or 'deeply rooted in this Nation's history and tradition.'" See Lutz v. City of York, Pa., 899 F.2d 255, 267 (3d Cir. 1990).
This set in motion a continuous process under which each individual right under the Bill of Rights was incorporated, one by one. That process has extended over more than a century, beginning with the incorporation of the First Amendment's free speech clause in Gitlow v. New York (1925). The most recently incorporated rights are the Second Amendment right to keep and bear arms for personal self-defense, in McDonald v. Chicago (2010), and the Eighth Amendment's restriction on excessive fines, in Timbs v. Indiana (2019).
Not all clauses of all amendments have been incorporated. For example, states are not required to obey the Fifth Amendment's requirement of indictment by grand jury. Many states choose to use preliminary hearings instead of grand juries. It is possible that future cases may incorporate additional clauses of the Bill of Rights against the states.
The Bill of Rights lists specifically enumerated rights. The Supreme Court has extended fundamental rights by recognizing several fundamental rights not specifically enumerated in the Constitution, including but not limited to:
The right to interstate travel
The right to parent one's children
The right to privacy
The right to marriage
Any restrictions a government statute or policy places on these rights are evaluated with strict scrutiny. If a right is denied to everyone, it is an issue of substantive due process. If a right is denied to some individuals but not others, it is also an issue of equal protection. However, any action that abridges a right deemed fundamental, when also violating equal protection, is still held to the more exacting standard of strict scrutiny, instead of the less demanding rational basis test.
During the Lochner era, the right to freedom of contract was considered fundamental, and thus restrictions on that right were subject to strict scrutiny. Following the 1937 Supreme Court decision in West Coast Hotel Co. v. Parrish, though, the right to contract became considerably less important in the context of substantive due process and restrictions on it were evaluated under the rational basis standard.
See also
Fundamental Rights Agency of the European Union
Inalienable rights
Universal human rights
References
Fundamental rights
Constitutional law
Rights
Civil rights and liberties
Green anarchism
Green anarchism, also known as ecological anarchism or eco-anarchism, is an anarchist school of thought that focuses on ecology and environmental issues. It is an anti-capitalist and anti-authoritarian form of radical environmentalism, which emphasises social organization, freedom and self-fulfillment.
Ecological approaches to anarchism were first formulated during the 19th century, as the rise of capitalism and colonialism caused environmental degradation. Drawing from the ecology of Charles Darwin, the anarchist Mikhail Bakunin elaborated a naturalist philosophy that rejected the dualistic separation of humanity from nature. This was developed into an ecological philosophy by Peter Kropotkin and Élisée Reclus, who advocated for the decentralisation and degrowth of industry as a means to advance both social justice and environmental protection.
Green anarchism was first developed into a distinct political theory by sections of the New Left, as a revival in anarchism coincided with the emergence of an environmental movement. From the 1970s onwards, three main tendencies of green anarchism were established: Murray Bookchin elaborated the theory of social ecology, which argues that environmental issues stem directly from social issues; Arne Næss defined the theory of deep ecology, which advocates for biocentrism; and John Zerzan developed the theory of anarcho-primitivism, which calls for the abolition of technology and civilization. In the 21st century, these tendencies were joined by total liberation, which centres animal rights, and green syndicalism, which calls for the workers themselves to manage deindustrialisation.
At its core, green anarchism concerns itself with the identification and abolition of social hierarchies that cause environmental degradation. Opposed to the extractivism and productivism of industrial capitalism, it advocates for the degrowth and deindustrialisation of the economy. It also pushes for greater localisation and decentralisation, proposing forms of municipalism, bioregionalism or a "return to nature" as possible alternatives to the state.
History
Background
Before the Industrial Revolution, the only occurrences of ecological crisis were small-scale, localised to areas affected by natural disasters, overproduction or war. But as the enclosure of common land increasingly forced dispossessed workers into factories, more wide-reaching ecological damage began to be noticed by radicals of the period.
During the late 19th century, as capitalism and colonialism were reaching their height, political philosophers first began to develop critiques of industrialised society, which had caused a rise in pollution and environmental degradation. In response, these early environmentalists developed a concern for nature and wildlife conservation, soil erosion, deforestation, and natural resource management. Early political approaches to environmentalism were supplemented by the literary naturalism of writers such as Henry David Thoreau, John Muir and Ernest Thompson Seton, whose best-selling works helped to alter the popular perception of nature by rejecting the dualistic "man against nature" conflict. In particular, Thoreau's advocacy of anti-consumerism and vegetarianism, as well as his love for the wilderness, has been a direct inspiration for many eco-anarchists.
Ecology in its modern form was developed by Charles Darwin, whose work on evolutionary biology provided a scientific rejection of Christian and Cartesian anthropocentrism, instead emphasising the role of probability and individual agency in the process of evolution. Around the same time, anarchism emerged as a political philosophy that rejected all forms of hierarchy, authority and oppression, and instead advocated for decentralisation and voluntary association. The framework for an ecological anarchism was thus set in place, as a means to reject anthropocentric hierarchies that positioned humans in a dominating position over nature.
Roots
The ecological roots of anarchism go back to the classical anarchists, such as Pierre-Joseph Proudhon and Mikhail Bakunin, who both conceived of human nature as the basis for anarchism. Drawing from Charles Darwin's work, Bakunin considered people to be an intrinsic part of their environment. Bakunin rejected Cartesian dualism, denying its anthropocentric and mechanistic separation of humanity from nature. However, he also saw humans as uniquely capable of self-determination and called for humanity to achieve a mastery of its own natural environment as a means to achieve freedom. Bakunin's naturalism was developed into an ecological philosophy by the geographers Peter Kropotkin and Élisée Reclus, who conceived of the relationship between human society and nature as a dialectic. Their environmental ethics, which combined social justice with environmental protection, anticipated the green anarchist philosophies of social ecology and bioregionalism.
Like Bakunin before him, Kropotkin extolled the domestication of nature by humans, but also framed humanity as an intrinsic part of its natural environment and placed great value in the natural world. Kropotkin was among the first environmentalist thinkers to note the connections between industrialisation, environmental degradation and workers' alienation. In contrast to Marxists, who called for an increase in industrialisation, Kropotkin argued for the localisation of the economy, which he felt would increase people's connection with the land and halt environmental damage. In Fields, Factories and Workshops, Kropotkin advocated for the satisfaction of human needs through horticulture, and the decentralisation and degrowth of industry. He also criticised the division of labour, both between mental and manual labourers, and between the rural peasantry and urban proletariat. In Mutual Aid: A Factor of Evolution, he elaborated on the natural basis for communism, depicting the formation of social organisation among animals through the practice of mutual aid.
Reclus himself argued that environmental degradation caused by industrialisation, exemplified to him by mass deforestation in the Pacific Northwest, was characteristic of the "barbarity" of modern civilisation, which he felt subordinated both workers and the environment to the goal of capital accumulation. Reclus was also one of the earliest figures to develop the idea of "total liberation", directly comparing the exploitation of labour with cruelty to animals and thus advocating for both human and animal rights.
Kropotkin and Reclus' synthesis of environmental and social justice formed the foundation for eco-socialism, chiefly associated with libertarian socialists who advocated for a "return to nature", such as Robert Blatchford, William Morris and Henry Salt. Ecological aspects of anarchism were also emphasised by Emma Goldman and Alexander Berkman, who, drawing from the work of Henry David Thoreau, conceived of anarchism as a means to promote unity between humans and the natural world. These early ecological developments in anarchism lay the foundations for the elaboration of green anarchism in the 1960s, when it was first taken up by figures within the New Left.
Development
Green anarchism first emerged after the dawn of the Atomic Age, as increasingly centralized governments brought with them a new host of environmental and social issues. During the 1960s, the rise of the environmental movement coincided with a concurrent revival of interest in anarchism, leading to anarchists having a considerable influence on the development of radical environmentalist thought. Principles and practices that already formed the core of anarchist philosophy, from direct action to community organizing, thus became foundational to radical environmentalism. As the threats presented by environmental degradation, industrial agriculture and pollution became more urgent, the first green anarchists turned to decentralisation and diversity as solutions for socio-ecological systems.
Green anarchism as a tendency was first developed by the American social anarchist Murray Bookchin. Bookchin had already begun addressing the problem of environmental degradation as far back as the 1950s. In 1962, he published the first major modern work of environmentalism, Our Synthetic Environment, which warned of the ecological dangers of pesticide application. Over the subsequent decades, Bookchin developed the first theory of green anarchism, social ecology, which presented social hierarchy as the root of ecological problems.
In 1973, the Norwegian philosopher Arne Næss developed another green anarchist tendency, known as deep ecology, which rejected anthropocentrism in favour of biocentrism. In 1985, this philosophy was developed into a political programme by the American academics Bill Devall and George Sessions, while the Australian philosopher Warwick Fox proposed the formation of bioregions as a green anarchist alternative to the nation state.
Following on from deep ecology, the next major development in green anarchist philosophy was the articulation of anarcho-primitivism, which was critical of agriculture, technology and civilisation. First developed in the pages of the American anarchist magazine Fifth Estate during the mid-1980s, anarcho-primitivist theory was developed by Fredy Perlman, David Watson, and particularly John Zerzan. It was later taken up by the American periodical Green Anarchy and British periodical Green Anarchist, and partly inspired groups such as the Animal Liberation Front (ALF), Earth Liberation Front (ELF) and Individualists Tending to the Wild (ITS).
From theory to practice
By the 1970s, radical environmentalist groups had begun to carry out direct action against nuclear power infrastructure, with mobilisations of the anti-nuclear movement in France, Germany and the United States providing a direct continuity between contemporary environmentalism and the New Left of the 1960s. In the 1980s, green anarchist groups such as Earth First! started taking direct action against deforestation, roadworks and industrial agriculture. They called their sabotage actions "monkey-wrenching", after Edward Abbey's 1984 novel The Monkey Wrench Gang. During the 1990s, the road protest movements in the United Kingdom and Israel were also driven by eco-anarchists, while eco-anarchist action networks such as the Animal Liberation Front (ALF) and Earth Liberation Front (ELF) first rose to prominence. Eco-anarchist actions have included violent attacks, such as those carried out by cells of the Informal Anarchist Federation (IAF) and Individualists Tending to the Wild (ITS) against nuclear scientists and nanotechnology researchers respectively.
As environmental degradation was accelerated by the rise of economic globalisation and neoliberalism, green anarchists broadened their scope of action from a specific environmentalist focus into one that agitated for global justice. Green anarchists were instrumental in the establishment of the anti-globalisation movement (AGM), as well as its transformation into the subsequent global justice movement (GJM). The AGM gained support in both the Global North and Global South, with the Zapatista Army of National Liberation (EZLN) becoming a key organisation within the movement. It also gained a wide range of support from different sectors of society, not only including activists from left-wing politics or the environmental and peace movements, but also people from trade unions, church groups and the agricultural sector. Trade unionists were the most prominent presence at the 1999 Seattle WTO protests, even outnumbering the environmentalists and anarchists. Drawing from its anarchist roots, the AGM adopted a decentralised and non-hierarchical model of horizontal organisation, embracing new "anarchical" technologies such as the internet as a means to network and communicate. Through the environmental and anti-globalisation movements, contemporary anarchism was ultimately able to achieve a "quasi-renaissance" in anarchist ideas, tendencies and modes of organisation.
Contemporary theoretical developments
Writers such as Murray Bookchin and Alan Carter have claimed contemporary anarchism to be the only political movement capable of addressing climate change. In his 1996 book Ecology and Anarchism, British anthropologist Brian Morris argued that anarchism is intrinsically environmentalist, as it shared the ecologist principles of decentralisation, non-hierarchical social organisation and interdependence.
By the 21st century, green anarchists had begun to move beyond the previous century's divisions into social ecologist and anarcho-primitivist camps, establishing a new body of theory that rejected the dualisms of humanity against nature and civilisation against wilderness. Drawing on the biocentric philosophy of deep ecology, in 2006, Mark Somma called for a "revolutionary environmentalism" capable of overthrowing capitalism, reducing consumption and organising the conservation of biodiversity. Somma championed a form of solidarity between humanity and the non-human natural world, in a call that was taken up in 2009 by Steven Best, who called for eco-anarchists to commit themselves to "total liberation" and extend solidarity to animals. To Best, morality ought to be extended to animals due to their sentience and capacity to feel pain; he has called for the abolition of the hierarchy between humans and animals, although he implicitly excludes non-sentient plants from this moral consideration.
Drawing from eco-feminism, Patrice Jones called for human solidarity with both plants and animals, neither of which she considered to be lesser than humans, even describing them as "natural anarchists" that do not recognise or obey any government's laws.
In 2012, Jeff Shantz developed a theory of "green syndicalism", which seeks to use syndicalist models of workplace organisation to link the labour movement with the environmental movement.
Branches
Social ecology
The green anarchist theory of social ecology is based on an analysis of the relationship between society and nature. Social ecology considers human society to be both the cause of and solution to environmental degradation, envisioning the creation of a rational and ecological society through a process of sociocultural evolution. Social ecologist Murray Bookchin saw society itself as a natural product of evolution, which intrinsically tended toward ever-increasing complexity and diversity. While he saw human society as having the potential to become "nature rendered self-conscious", in The Ecology of Freedom, Bookchin elaborated that the emergence of hierarchy had given way to a disfigured form of society that was both ecologically and socially destructive.
According to social ecology, the oppression of humans by humans directly preceded the exploitation of the environment by hierarchical society, which itself caused a vicious circle of increasing socio-ecological devastation. Considering social hierarchy to go against the natural evolutionary tendencies towards complexity and diversity, social ecology concludes that oppressive hierarchies have to be abolished in order to resolve the ecological crisis. Bookchin thus proposed a decentralised system of direct democracy, centred locally in the municipality, where people themselves could participate in decision making. He envisioned a self-organized system of popular assemblies to replace the state and re-educate individuals into socially and ecologically-minded citizens.
Deep ecology
The theory of deep ecology rejects anthropocentrism in favour of biocentrism, which recognizes the intrinsic value of all life, regardless of its utility to humankind. Unlike social ecologists, theorists of deep ecology considered human society to be incapable of reversing environmental degradation and, as a result, proposed a drastic reduction in world population. The solutions to human overpopulation proposed by deep ecologists included bioregionalism, which advocated the replacement of the nation state with bioregions, as well as a widespread return to a hunter-gatherer lifestyle. Some deep ecologists, including members of Earth First!, have even welcomed the mass death caused by disease and famine as a form of population control.
Anarcho-primitivism
The theory of anarcho-primitivism aims its critique at the emergence of technology, agriculture and civilisation, which it considers to have been the source of all social problems. According to the American primitivist theorist John Zerzan, it was the division of labour in agricultural societies that first gave way to the social inequality and alienation which became characteristic of modernity. As such, Zerzan proposed the abolition of technology and science, in order for society to be broken down and humans to return to a hunter-gatherer lifestyle. Libertarian socialists such as Noam Chomsky and Michael Albert have been critical of anarcho-primitivism, with the former arguing that it would inevitably result in genocide.
Green syndicalism
Green syndicalism, as developed by Graham Purchase and Judi Bari, advocates for the unification of the labour movement with environmental movement and for trade unions such as the Industrial Workers of the World (IWW) to adopt ecological concerns into their platforms. Seeing workers' self-management as a means to address environmental degradation, green syndicalism pushes for workers to agitate their colleagues, sabotage environmentally destructive practices in their workplaces, and form workers' councils. Green syndicalist Jeff Shantz proposed that a free association of producers would be best positioned to dismantle the industrial economy, through the decentralisation and localisation of production. In contrast to Marxism and anarcho-syndicalism, green syndicalism opposes mass production and rejects the idea that the industrial economy has a "liberatory potential"; but it also rejects the radical environmentalist calls for a "complete, immediate break with industrialism".
Theory
Although a diverse body of thought, eco-anarchist theory has a fundamental basis unified by certain shared principles. Eco-anarchism considers there to be a direct connection between the problems of environmental degradation and hierarchy, and maintains an anti-capitalist critique of productivism and industrialism. Emphasising decentralisation and community ownership, it also advocates for the degrowth of the economy and the re-centring of social relations around local communities and bioregions.
Critique of civilisation
Green anarchism traces the roots of all forms of oppression to the widespread transition from hunting and gathering to sedentary lifestyles. According to green anarchism, the foundation of civilisation was defined by the extraction and importation of natural resources, which led to the formation of hierarchy through capital accumulation and the division of labour. Green anarchists are therefore critical of civilisation and its manifestations in globalized capitalism, which they consider to be causing a societal and ecological collapse that necessitates a "return to nature". Green anarchists uphold direct action as a form of resistance against civilisation, which they wish to replace with a way of simple living in harmony with nature. This may involve cultivating self-sustainability, practising survivalism or rewilding.
Decentralisation
Eco-anarchism considers the rise of states to be the primary cause of environmental degradation, as states promote greater industrial extraction and production as means to remain competitive with other state powers, even at the expense of the environment. Drawing from the ecological principle of "unity in diversity", eco-anarchism also recognises humans as an intrinsic part of the ecosystem that they live in and how their culture, history and language is shaped by their local environments. Eco-anarchists therefore argue for the abolition of states and their replacement with stateless societies, upholding various forms of localism and bioregionalism.
Deindustrialisation
Ecological anarchism considers the exploitation of labour under capitalism within a broader ecological context, holding that environmental degradation is intrinsically linked with societal oppression. As such, green anarchism is opposed to industrialism, due to both its social and ecological effects.
See also
Animal rights and punk subculture
Chellis Glendinning
Earth Liberation Front
Earth First!
Green Scare
Eco-socialism
Intentional community
Left-libertarianism
Operation Backfire (FBI)
Permaculture
References
Bibliography
Further reading
External links
The Institute for Social Ecology.
Articles tagged with "green" and "ecology" at The Anarchist Library.
Anarchist schools of thought
Animal Liberation Front
Animal rights and politics
Animal rights movement
Animal welfare
Earth Liberation Front
Green politics
Political theories
Simple living
Robotics
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.
Within mechanical engineering, robotics is the design and construction of the physical structures of robots, while in computer science, robotics focuses on robotic automation algorithms. Other disciplines contributing to robotics include electrical, control, software, information, electronic, telecommunication, computer, mechatronic, and materials engineering.
The goal of most robotics is to design machines that can help and assist humans. Many robots are built to do jobs that are hazardous to people, such as finding survivors in unstable ruins, and exploring space, mines and shipwrecks. Others replace people in jobs that are boring, repetitive, or unpleasant, such as cleaning, monitoring, transporting, and assembling. Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes.
Robotics aspects
Robotics usually combines three aspects of design work to create robot systems:
Mechanical construction: a frame, form or shape designed to achieve a particular task. For example, a robot designed to travel across heavy dirt or mud might use caterpillar tracks. Origami-inspired robots can sense and analyze their surroundings in extreme environments. The mechanical aspect of the robot is mostly the creator's solution to completing the assigned task and dealing with the physics of the environment around it. Form follows function.
Electrical components that power and control the machinery. For example, the robot with caterpillar tracks would need some kind of power to move its track treads. That power comes in the form of electricity, which has to travel through a wire and originate from a battery, a basic electrical circuit. Even petrol-powered machines that get their power mainly from petrol still require an electric current to start the combustion process, which is why most petrol-powered machines, like cars, have batteries. The electrical aspect of robots is used for movement (through motors), sensing (where electrical signals are used to measure things like heat, sound, position, and energy status), and operation (robots need some level of electrical energy supplied to their motors and sensors in order to activate and perform basic operations).
Software. A program is how a robot decides when or how to do something. In the caterpillar track example, a robot that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but it would not be able to go anywhere without a program telling it to move. Programs are the core essence of a robot: it could have excellent mechanical and electrical construction, but if its program is poorly structured, its performance will be very poor (or it may not perform at all). There are three different types of robotic programs: remote control, artificial intelligence, and hybrid. A robot with remote control programming has a preexisting set of commands that it will only perform if and when it receives a signal from a control source, typically a human being with a remote control. It is perhaps more appropriate to view devices controlled primarily by human commands as falling within the discipline of automation rather than robotics. Robots that use artificial intelligence interact with their environment on their own without a control source, and can determine reactions to objects and problems they encounter using their preexisting programming. A hybrid is a form of programming that incorporates both AI and RC functions; a minimal example of such a loop is sketched below.
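The following Python sketch illustrates one step of a hybrid loop in which a remote command, when present, overrides an autonomous decision. The function names (read_remote_command, read_sensors, plan_action, drive_motors) are hypothetical placeholders, not any real robot API.

```python
# Minimal sketch of a hybrid (remote control + autonomous) robot loop.
# The four callables are hypothetical placeholders standing in for real
# hardware and planning interfaces.

def control_step(read_remote_command, read_sensors, plan_action, drive_motors):
    command = read_remote_command()        # remote-control input, may be None
    observation = read_sensors()           # e.g. distances, wheel encoders
    if command is not None:
        action = command                   # operator override takes priority
    else:
        action = plan_action(observation)  # autonomous decision from sensing
    drive_motors(action)
```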
Applied robotics
As more and more robots are designed for specific tasks, this method of classification becomes more relevant. For example, many robots are designed for assembly work, which may not be readily adaptable for other applications. They are termed "assembly robots". For seam welding, some suppliers provide complete welding systems with the robot i.e. the welding equipment along with other material handling facilities like turntables, etc. as an integrated unit. Such an integrated robotic system is called a "welding robot" even though its discrete manipulator unit could be adapted to a variety of tasks. Some robots are specifically designed for heavy load manipulation, and are labeled as "heavy-duty robots".
Current and potential applications include:
Manufacturing. Robots have been increasingly used in manufacturing since the 1960s. According to Robotic Industries Association US data, in 2016 the automotive industry was the main customer of industrial robots, with 52% of total sales. In the auto industry, they can account for more than half of the "labor". There are even "lights-out" factories, such as an IBM keyboard manufacturing factory in Texas that was fully automated as early as 2003.
Autonomous transport including airplane autopilot and self-driving cars
Domestic robots including robotic vacuum cleaners, robotic lawn mowers, dishwasher loading and flatbread baking.
Construction robots. Construction robots can be separated into three types: traditional robots, robotic arm, and robotic exoskeleton.
Automated mining.
Space exploration, including Mars rovers.
Energy applications including cleanup of nuclear contaminated areas; and cleaning solar panel arrays.
Medical robots and Robot-assisted surgery designed and used in clinics.
Agricultural robots. The use of robots in agriculture is closely linked to the concept of AI-assisted precision agriculture and drone usage.
Food processing. Commercial examples of kitchen automation are Flippy (burgers), Zume Pizza (pizza), Cafe X (coffee), Makr Shakr (cocktails), Frobot (frozen yogurts), Sally (salads), salad or food bowl robots manufactured by Dexai (a Draper Laboratory spinoff, operating on military bases), and integrated food bowl assembly systems manufactured by Spyce Kitchen (acquired by Sweetgreen) and Silicon Valley startup Hyphen. Other examples may include manufacturing technologies based on 3D Food Printing.
Military robots.
Robot sports for entertainment and education, including Robot combat, Autonomous racing, drone racing, and FIRST Robotics.
Mechanical robotics areas
Power source
At present, batteries (mostly lead–acid) are used as a power source. Many different types of batteries can be used as a power source for robots, ranging from lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy, to silver–cadmium batteries, which are much smaller in volume but currently much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime, and weight. Generators, often some type of internal combustion engine, can also be used. However, such designs are often mechanically complex, need fuel, require heat dissipation, and are relatively heavy. A tether connecting the robot to a power supply would remove the power supply from the robot entirely. This has the advantage of saving weight and space by moving all power generation and storage components elsewhere. However, this design does come with the drawback of constantly having a cable connected to the robot, which can be difficult to manage.
Potential power sources could be:
pneumatic (compressed gases)
solar power (using the sun's energy and converting it into electrical power)
hydraulics (liquids)
flywheel energy storage
organic garbage (through anaerobic digestion)
nuclear
Actuation
Actuators are the "muscles" of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.
Electric motors
The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational.
Linear actuators
Various types of linear actuators move in and out instead of spinning, and often allow quicker direction changes, particularly when very large forces are needed, such as in industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators). Linear actuators can also be powered by electricity, usually by means of a motor and a leadscrew. Another common type is a mechanical linear actuator such as a rack and pinion on a car.
Series elastic actuators
Series elastic actuation (SEA) relies on the idea of introducing intentional elasticity between the motor actuator and the load for robust force control. Due to the resultant lower reflected inertia, series elastic actuation improves safety when a robot interacts with the environment (e.g., humans or workpieces) or during collisions. Furthermore, it also provides energy efficiency and shock absorption (mechanical filtering) while reducing excessive wear on the transmission and other mechanical components. This approach has successfully been employed in various robots, particularly advanced manufacturing robots and walking humanoid robots.
The controller design of a series elastic actuator is most often performed within the passivity framework, as it ensures the safety of interaction with unstructured environments. Despite its remarkable stability and robustness, this framework suffers from the stringent limitations imposed on the controller, which may trade off performance. The reader is referred to the following survey, which summarizes the common controller architectures for SEA along with the corresponding sufficient passivity conditions. One recent study has derived the necessary and sufficient passivity conditions for one of the most common impedance control architectures, namely velocity-sourced SEA. This work is of particular importance as it derives the non-conservative passivity bounds in an SEA scheme for the first time, which allows a larger selection of control gains.
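As a rough sketch of the basic idea (not the specific architectures analysed in the cited survey), the spring deflection between motor and load serves as a torque sensor, and an outer loop regulates that torque by commanding the motor velocity. The stiffness, gains and hardware interface below are illustrative assumptions.

```python
# Minimal sketch of force control for a series elastic actuator (SEA).
# The spring between motor and load turns torque sensing into a position
# measurement: torque = k * (theta_motor - theta_load). An outer PI loop
# regulates that torque by commanding motor velocity ("velocity-sourced" SEA).

class SeaForceController:
    def __init__(self, k_spring=300.0, kp=5.0, ki=20.0, dt=0.001):
        self.k_spring = k_spring   # spring stiffness [N*m/rad], assumed
        self.kp, self.ki = kp, ki  # PI gains on torque error, assumed
        self.dt = dt               # control period [s]
        self.integral = 0.0

    def step(self, theta_motor, theta_load, desired_torque):
        """Return a motor velocity command [rad/s] that drives the spring
        torque toward desired_torque."""
        measured = self.k_spring * (theta_motor - theta_load)
        error = desired_torque - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```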
Air muscles
Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 42%) when air is forced inside them. They are used in some robot applications.
Wire muscles
Muscle wire, also known as shape memory alloy, is a material that contracts (by under 5%) when electricity is applied. It has been used in some small robot applications.
Electroactive polymers
Electroactive polymers (EAPs or EPAMs) are plastic materials that can contract substantially (up to 380% activation strain) when stimulated electrically. They have been used in the facial muscles and arms of humanoid robots, and to enable new robots to float, fly, swim or walk.
Piezo motors
Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation; one type uses the vibration of the piezo elements to step the motor in a circle or a straight line. Another type uses the piezo elements to cause a nut to vibrate or to drive a screw. The advantages of these motors are nanometer resolution, speed, and available force for their size. These motors are already available commercially and being used on some robots.
Elastic nanotubes
Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans.
Sensing
Sensors allow robots to receive information about a certain measurement of the environment, or internal components. This is essential for robots to perform their tasks, and act upon any changes in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real-time information about the task it is performing.
Touch
Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips. The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting the robotic grip on held objects.
Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one, allowing patients to write with it, type on a keyboard, play the piano, and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feelings in its fingertips.
Other
Other common forms of sensing in robotics use lidar, radar, and sonar. Lidar measures the distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. Radar uses radio waves to determine the range, angle, or velocity of objects. Sonar uses sound propagation to navigate, communicate with or detect objects on or under the surface of the water.
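All three modalities are, at their core, time-of-flight measurements: range follows from the propagation speed and half the round-trip time of the reflected signal. A minimal sketch of that relation is shown below; the speeds are standard physical values, and the example echo time is made up for illustration.

```python
# Time-of-flight ranging, as used by lidar, radar and sonar:
# the signal travels to the target and back, so range = speed * time / 2.

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for lidar and radar
SPEED_OF_SOUND_WATER = 1_500.0   # m/s, approximate, for sonar in water

def range_from_echo(round_trip_time_s, propagation_speed_m_s):
    return propagation_speed_m_s * round_trip_time_s / 2.0

# Example: a lidar echo returning after 200 nanoseconds corresponds to
# a target roughly 30 metres away.
print(range_from_echo(200e-9, SPEED_OF_LIGHT))   # ~29.98 (metres)
```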
Mechanical grippers
One of the most common types of end-effector is the "gripper". In its simplest manifestation, it consists of just two fingers that can open and close to pick up and let go of a range of small objects. Fingers can, for example, be made of a chain with a metal wire running through it. Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand. Hands that are of a mid-level complexity include the Delft hand. Mechanical grippers can come in various types, including friction jaws and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction.
Suction end-effectors
Suction end-effectors, powered by vacuum generators, are very simple astrictive devices that can hold very large loads provided the prehension surface is smooth enough to ensure suction.
Pick and place robots for electronic components and for large objects like car windscreens, often use very simple vacuum end-effectors.
Suction is a highly used type of end-effector in industry, in part because the natural compliance of soft suction end-effectors can enable a robot to be more robust in the presence of imperfect robotic perception. As an example: consider the case of a robot vision system that estimates the position of a water bottle but has 1 centimeter of error. While this may cause a rigid mechanical gripper to puncture the water bottle, the soft suction end-effector may just bend slightly and conform to the shape of the water bottle surface.
General purpose effectors
Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS, and the Schunk hand. They have powerful robot dexterity intelligence (RDI), with as many as 20 degrees of freedom and hundreds of tactile sensors.
Control robotics areas
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required co-ordinated motion or force actions.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands (e.g. firing motor power electronic gates based directly upon encoder feedback signals to achieve the required torque/velocity of the shaft). Sensor fusion and internal models may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction until an object is detected with a proximity sensor) is sometimes inferred from these estimates. Techniques from control theory are generally used to convert the higher-level tasks into individual commands that drive the actuators, most often using kinematic and dynamic models of the mechanical structure.
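As a small example of the kinematic-model step mentioned above, the sketch below computes the gripper position of a planar two-link arm from its joint angles (forward kinematics). The link lengths are arbitrary illustrative values, not those of any particular robot.

```python
# Sketch of a kinematic model: forward kinematics of a planar two-link arm,
# mapping joint angles to the end-effector (gripper) position.
from math import cos, sin

L1, L2 = 0.30, 0.25   # link lengths [m], assumed for illustration

def forward_kinematics(q1, q2):
    """Joint angles (rad) -> end-effector (x, y) in the arm's base frame."""
    x = L1 * cos(q1) + L2 * cos(q1 + q2)
    y = L1 * sin(q1) + L2 * sin(q1 + q2)
    return x, y
```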
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how the two interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
Modern commercial robotic control systems are highly complex, integrate multiple sensors and effectors, have many interacting degrees of freedom (DOF), and require operator interfaces, programming tools and real-time capabilities. They are oftentimes interconnected to wider communication networks and in many cases are now both IoT-enabled and mobile. Progress towards open-architecture, layered, user-friendly and 'intelligent' sensor-based interconnected robots has emerged from earlier concepts related to Flexible Manufacturing Systems (FMS), and several 'open' or 'hybrid' reference architectures have been proposed which assist developers of robot control software and hardware to move beyond traditional, earlier notions of 'closed' robot control systems. Open-architecture controllers are said to be better able to meet the growing requirements of a wide range of robot users, including system developers, end users and research scientists, and are better positioned to deliver the advanced robotic concepts related to Industry 4.0. In addition to utilizing many established features of robot controllers, such as position, velocity and force control of end effectors, they also enable IoT interconnection and the implementation of more advanced sensor fusion and control techniques, including adaptive control, fuzzy control and artificial neural network (ANN)-based control. When implemented in real time, such techniques can potentially improve the stability and performance of robots operating in unknown or uncertain environments by enabling the control systems to learn and adapt to environmental changes. There are several examples of reference architectures for robot controllers, and also examples of successful implementations of actual robot controllers developed from them. One example of a generic reference architecture and associated interconnected, open-architecture robot and controller implementation was used in a number of research and development studies, including prototype implementation of novel advanced and intelligent control and environment mapping methods in real time.
Manipulation
A definition of robotic manipulation has been provided by Matt Mason as: "manipulation refers to an agent's control of its environment through selective contact".
Robots need to manipulate objects: pick up, modify, destroy, move or otherwise have an effect. Thus the functional end of a robot arm intended to make the effect (whether a hand or a tool) is often referred to as the end effector, while the "arm" is referred to as a manipulator. Most robot arms have replaceable end-effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator that cannot be replaced, while a few have one very general-purpose manipulator, for example, a humanoid hand.
Locomotion
Rolling robots
For simplicity, most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages such as greater efficiency and fewer parts, as well as allowing a robot to navigate confined places that a four-wheeled robot would not be able to reach.
Two-wheeled balancing robots
Balancing robots generally use a gyroscope to detect how much the robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall hundreds of times per second, based on the dynamics of an inverted pendulum. Many different balancing robots have been designed. While the Segway is not commonly thought of as a robot, it can be used as a component of one; when used this way, Segway refers to the platform as an RMP (Robotic Mobility Platform). An example of this use is NASA's Robonaut, which has been mounted on a Segway.
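A minimal version of such a balancing law, run at a few hundred hertz, is a proportional-derivative controller on the measured tilt. The sketch below is an illustrative toy controller, not Segway's or any product's implementation; the gains and hardware functions are assumptions.

```python
# Sketch of a two-wheeled balancing loop: drive the wheels toward the fall,
# proportionally to tilt angle and tilt rate (PD control of an inverted
# pendulum). Gains and the drive interface are illustrative assumptions.

KP_TILT = 18.0   # wheel command per radian of tilt
KD_TILT = 1.2    # wheel command per rad/s of tilt rate

def balance_step(tilt_angle, tilt_rate, drive_wheels):
    """Called a few hundred times per second with fused gyro/accelerometer data."""
    command = KP_TILT * tilt_angle + KD_TILT * tilt_rate
    drive_wheels(command)   # positive command moves the wheels toward the fall
```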
One-wheeled balancing robots
A one-wheeled balancing robot is an extension of a two-wheeled balancing robot so that it can move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University's "Ballbot" which is the approximate height and width of a person, and Tohoku Gakuin University's "BallIP". Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.
Spherical orb robots
Several attempts have been made to build robots that are completely enclosed in a spherical ball, either by spinning a weight inside the ball or by rotating the outer shells of the sphere. These have also been referred to as an orb bot or a ball bot.
Six-wheeled robots
Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.
Tracked robots
Tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor and off-road robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".
Walking robots
Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study on human-inspired walking, such as at the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University. Many other robots have been built that walk on more than two legs, because such robots are significantly easier to construct. Walking robots can be used on uneven terrain, where they would provide better mobility and energy efficiency than other locomotion methods. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:
ZMP technique
The zero moment point (ZMP) technique is used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial forces (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over). However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory. ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
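For a robot simplified to a set of point masses, and neglecting angular-momentum terms, the ZMP along the walking direction can be computed as in the Python sketch below; the numbers in the example are invented for illustration only.

    G = 9.81  # gravitational acceleration, m/s^2

    def zmp_x(masses, xs, zs, ax, az):
        """Simplified ZMP x-coordinate for a set of point masses.
        masses in kg; xs, zs positions in m; ax, az accelerations in m/s^2."""
        num = sum(m * (azi + G) * xi - m * axi * zi
                  for m, xi, zi, axi, azi in zip(masses, xs, zs, ax, az))
        den = sum(m * (azi + G) for m, azi in zip(masses, az))
        return num / den

    # A single 50 kg mass 0.8 m above the ground, decelerating horizontally:
    # the ZMP moves ahead of the mass's ground projection (toward the toes).
    print(zmp_x([50.0], [0.02], [0.8], [-1.0], [0.0]))

If the computed point stays inside the support polygon of the feet, the floor reaction can oppose the inertial forces and the robot does not tip over.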
Hopping
Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg and a very small foot could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot fell to one side, it would jump slightly in that direction in order to catch itself. Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults. A quadruped was also demonstrated which could trot, run, pace, and bound. For a full list of these robots, see the MIT Leg Lab Robots page.
Dynamic balancing (controlled falling)
A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the zero moment point technique, as it constantly monitors the robot's motion and places the feet in order to maintain stability. This technique was demonstrated by Anybots' Dexter robot, which is so stable that it can even jump. Another example is the TU Delft Flame.
Passive dynamics
Perhaps the most promising approach uses passive dynamics where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.
Flying
A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing. Other flying robots are uninhabited and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot on board, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, are propelled by paddles, and are guided by sonar.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they're capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequencies of insect inspired BFRs are much higher than those of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments.
Biologically-inspired flying robots
A class of robots that are biologically inspired, but which do not attempt to mimic biology, are creations such as the Entomopter. Funded by DARPA, NASA, the United States Air Force, and the Georgia Tech Research Institute and patented by Prof. Robert C. Michelson for covert terrestrial missions as well as flight in the lower Mars atmosphere, the Entomopter flight propulsion system uses low Reynolds number wings similar to those of the hawk moth (Manduca sexta), but flaps them in a non-traditional "opposed x-wing fashion" while "blowing" the surface to enhance lift based on the Coandă effect as well as to control vehicle attitude and direction. Waste gas from the propulsion system not only facilitates the blown wing aerodynamics, but also serves to create ultrasonic emissions like that of a Bat for obstacle avoidance. The Entomopter and other biologically-inspired robots leverage features of biological systems, but do not attempt to create mechanical analogs.
Snaking
Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water.
Skating
A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll. Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop.
Climbing
Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions; adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin, built by Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pad method of wall-climbing geckoes, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot and Stickybot.
China's Technology Daily reported on 15 November 2008 that Li Hiu Yeung and his research group of New Concept Aircraft (Zhuhai) Co., Ltd. had successfully developed a bionic gecko robot named "Speedy Freelander". According to Yeung, the gecko robot could rapidly climb up and down a variety of building walls, navigate through ground and wall fissures, and walk upside-down on the ceiling. It was also able to adapt to the surfaces of smooth glass, rough, sticky or dusty walls as well as various types of metallic materials. It could also identify and circumvent obstacles automatically. Its flexibility and speed were comparable to those of a natural gecko. A third approach is to mimic the motion of a snake climbing a pole.
Swimming (Piscine)
It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%. Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion. Notable examples are the Robotic Fish G9 and Robot Tuna, built to analyze and mathematically model thunniform motion. The Aqua Penguin copies the streamlined shape of penguins and their propulsion by front "flippers". The Aqua Ray and Aqua Jelly emulate the locomotion of manta rays and jellyfish, respectively.
In 2014, iSplash-II was developed as the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths per second) and endurance, the duration that top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s). The first build, iSplash-I (2014), was the first robotic platform to apply a full-body-length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior-confined waveform.
Sailing
Sailboat robots have also been developed in order to make measurements at the surface of the ocean. A typical sailboat robot is Vaimos. Since the propulsion of sailboat robots uses the wind, the energy of the batteries is only used for the computer, for the communication and for the actuators (to tune the rudder and the sail). If the robot is equipped with solar panels, the robot could theoretically navigate forever. The two main competitions of sailboat robots are WRSC, which takes place every year in Europe, and Sailbot.
Computational robotics areas
Control systems may also have varying levels of autonomy.
Direct interaction is used for haptic or teleoperated devices, and the human has nearly complete control over the robot's motion.
Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
An autonomous robot may go without human interaction for extended periods of time. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions.
Teleoperation. A human controls each movement; each change of a machine actuator is specified by the operator.
Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
Full autonomy. The machine will create and complete all its tasks without human interaction.
Vision
Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras.
In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.
Computer vision systems rely on image sensors that detect electromagnetic radiation, typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors to better compute depth in the environment. Like human eyes, robots' "eyes" must also be able to focus on a particular area of interest, and also adjust to variations in light intensities.
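One common way to obtain depth from multiple sensors is a calibrated stereo pair of cameras, using the pinhole relation depth = focal length × baseline / disparity. The short Python sketch below illustrates this; the focal length, baseline and disparity values are arbitrary examples, not tied to any particular sensor.

    def stereo_depth(focal_length_px, baseline_m, disparity_px):
        """Depth (m) of a feature from its disparity between two rectified cameras."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_length_px * baseline_m / disparity_px

    # A feature shifted 24 pixels between cameras 0.10 m apart, imaged with a
    # focal length of 700 pixels, is roughly 2.9 m away.
    print(round(stereo_depth(700.0, 0.10, 24.0), 2))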
There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have a background in biology.
Environmental interaction and navigation
Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is an increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots such as ASIMO and the Meinü robot have particularly good robot navigation hardware and software. Also, self-controlled cars such as Ernst Dickmanns' driverless car and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information; this can also be done by a swarm of autonomous robots. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems, for better navigation between waypoints.
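At its simplest, waypoint navigation steers the robot toward the bearing of the next waypoint until it is close enough, then moves on to the following one. The Python sketch below shows this pattern; robot.position(), robot.heading() and robot.command() are placeholder interfaces, and a real system would fuse the GPS estimate with the other sensors mentioned above.

    import math

    def bearing(p, q):
        """Bearing (radians) from point p to point q in the ground plane."""
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def follow_waypoints(robot, waypoints, reach_radius=1.0, speed=0.5, k_turn=1.5):
        for wp in waypoints:
            while math.dist(robot.position(), wp) > reach_radius:
                desired = bearing(robot.position(), wp)
                error = desired - robot.heading()
                error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
                robot.command(speed, k_turn * error)  # forward speed, turn rate
        robot.command(0.0, 0.0)  # stop at the final waypoint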
Human-robot interaction
The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation. Even though the current state of robotics cannot meet the standards of these robots from science fiction, robotic media characters (e.g., Wall-E, R2-D2) can elicit audience sympathies that increase people's willingness to accept actual robots in the future. Acceptance of social robots is also likely to increase if people can meet a social robot under appropriate conditions. Studies have shown that interacting with a robot by looking at, touching, or even imagining interacting with the robot can reduce negative feelings that some people have about robots before interacting with them. However, if pre-existing negative sentiments are especially strong, interacting with a robot can increase those negative feelings towards robots.
Speech recognition
Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech. The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, etc. It becomes even harder when the speaker has a different accent. Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" which recognized "ten digits spoken by a single user with 100% accuracy" in 1952. Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%. With the help of artificial intelligence, machines nowadays can use people's voices to identify their emotions, such as satisfied or angry.
Robotic voice
Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium, making it necessary to develop the emotional component of robotic voice through various techniques. An advantage of diphonic branching is that the emotion the robot is programmed to project can be carried on the voice tape, or phoneme, already pre-programmed onto the voice media. One of the earliest examples is a teaching robot named Leachim developed in 1974 by Michael J. Freeman. Leachim was able to convert digital memory to rudimentary verbal speech on pre-recorded computer discs. It was programmed to teach students in The Bronx, New York.
Facial expression
Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos). The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi, can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.
Gestures
One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots. A great many systems have been developed to recognize human hand gestures.
Proxemics
Proxemics is the study of personal space, and HRI systems may try to model and work with its concepts for human interactions.
Artificial emotions
Artificial emotions can also be generated, composed of a sequence of facial expressions or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots. An example of a robot with artificial emotions is Robin the Robot, developed by the Armenian IT company Expper Technologies, which uses AI-based peer-to-peer interaction. Its main task is achieving emotional well-being, i.e. overcoming stress and anxiety. Robin was trained to analyze facial expressions and use its face to display its emotions given the context. The robot has been tested by children in US clinics, and observations show that Robin increased the appetite and cheerfulness of children after meeting and talking with it.
Personality
Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future. Nevertheless, researchers are trying to create robots which appear to have a personality: i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.
Research robotics
Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT's cyberflora project, are almost wholly academic.
To describe the level of advancement of a robot, the term "Generation Robots" can be used. The term was coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute, to describe the near-future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable to perhaps that of a lizard and should become available by 2010. Because the first-generation robot would be incapable of learning, however, Moravec predicted that the second-generation robot would be an improvement over the first and become available by 2020, with intelligence perhaps comparable to that of a mouse. The third-generation robot should have intelligence comparable to that of a monkey. Moravec predicted that fourth-generation robots, robots with human intelligence, would become possible, but not before around 2040 or 2050.
Dynamics and kinematics
The study of motion can be divided into kinematics and dynamics. Direct kinematics or forward kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance. Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end-effector acceleration. This information can be used to improve the control algorithms of a robot.
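The planar two-link arm is the textbook illustration of these calculations. The Python sketch below computes its forward kinematics and one closed-form inverse kinematics solution; the link lengths and angles are arbitrary example values, not drawn from any particular robot.

    import math

    def forward_kinematics(l1, l2, theta1, theta2):
        """End-effector (x, y) of a planar two-link arm, angles in radians."""
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    def inverse_kinematics(l1, l2, x, y):
        """One of the two joint solutions reaching (x, y), if it is reachable."""
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        theta2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp for numerical safety
        theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                               l1 + l2 * math.cos(theta2))
        return theta1, theta2

    x, y = forward_kinematics(1.0, 0.8, math.radians(30), math.radians(45))
    print(inverse_kinematics(1.0, 0.8, x, y))  # recovers the two joint angles

The redundancy mentioned above shows up here as the second ("elbow-up") solution that the inverse function does not return; real manipulators with more joints have infinitely many such solutions to choose among.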
In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure, and control of robots must be developed and implemented.
Open source robotics
Open source robotics research seeks standards for defining, and methods for designing and building, robots so that they can easily be reproduced by anyone. Research includes legal and technical definitions; seeking out alternative tools and materials to reduce costs and simplify builds; and creating interfaces and standards for designs to work together. Human usability research also investigates how to best document builds through visual, text or video instructions.
Evolutionary robotics
Evolutionary robotics is a methodology that uses evolutionary computation to help design robots, especially the body form, or motion and behavior controllers. In a similar way to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, which have new behaviors based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots and to explore the nature of evolution. Because the process often requires many generations of robots to be simulated, this technique may be run entirely or mostly in simulation, using a robot simulator software package, then tested on real robots once the evolved algorithms are good enough. Currently, there are about 10 million industrial robots toiling around the world, and Japan is the country with the highest density of robots in its manufacturing industry.
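The bare structure of such an evolutionary loop is sketched below in Python: controller parameters are evaluated by a fitness function (in practice a simulation run), the worst performers are discarded, and mutated copies of the winners take their place. The fitness function here is only a stand-in, and all parameter values are illustrative.

    import random

    def evaluate(genome):
        # Placeholder fitness: a real study would run the controller in a
        # robot simulator and score e.g. distance walked before falling.
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=20, genome_len=8, generations=100, keep=10, sigma=0.1):
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=evaluate, reverse=True)   # best first
            survivors = population[:keep]                 # selection
            children = [[g + random.gauss(0, sigma)       # mutation of a winner
                         for g in random.choice(survivors)]
                        for _ in range(pop_size - keep)]
            population = survivors + children
        return max(population, key=evaluate)

    best_controller = evolve()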
Bionics and biomimetics
Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.
Swarm robotics
Swarm robotics is an approach to the coordination of multiple robots as a system which consists of large numbers of mostly simple physical robots. "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act."
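A single local rule of the kind such collective behaviors are composed of is cohesion: each robot moves a little toward the average position of the neighbours it can sense. The Python sketch below is purely illustrative.

    def cohesion_step(position, neighbour_positions, gain=0.05):
        """Move a robot slightly toward the centroid of its sensed neighbours."""
        if not neighbour_positions:
            return position                    # no neighbours sensed: stay put
        cx = sum(p[0] for p in neighbour_positions) / len(neighbour_positions)
        cy = sum(p[1] for p in neighbour_positions) / len(neighbour_positions)
        return (position[0] + gain * (cx - position[0]),
                position[1] + gain * (cy - position[1]))

    # A robot at the origin with neighbours at (1, 0) and (0, 1) drifts toward
    # their centroid at (0.5, 0.5).
    print(cohesion_step((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0)]))

Combining several such rules (for example cohesion, separation and alignment) across many robots produces the emergent swarm behavior described above.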
Quantum computing
There has been some research into whether robotics algorithms can be run more quickly on quantum computers than they can be run on digital computers. This area has been referred to as quantum robotics.
Other research areas
Nanorobots.
Cobots (collaborative robots).
Autonomous drones.
High temperature crucibles allow robotic systems to automate sample analysis.
The main venues for robotics research are the international conferences ICRA and IROS.
Human factors
Education and training
Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics. Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA, as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students.
Employment
Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs has been observed to be steadily rising. The employment of robots in industries has increased productivity and efficiency savings and is typically seen as a long-term investment for its benefactors. A study found that 47 percent of US jobs are at risk of automation "over some unspecified number of years". These claims have been criticized on the ground that social policy, not AI, causes unemployment. In a 2016 article in The Guardian, Stephen Hawking stated, "The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining". The rise of robotics is thus often used as an argument for universal basic income.
According to a GlobalData September 2021 report, the robotics industry was worth $45bn in 2020, and by 2030, it will have grown at a compound annual growth rate (CAGR) of 29% to $568bn, driving jobs in robotics and related industries.
Occupational safety and health implications
A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).
The greatest OSH benefits stemming from the wider use of robotics should be substitution for people working in unhealthy or dangerous environments. In space, defense, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers' exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks. For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services.
Moreover, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills. The advantages of robotics include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility, and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the "man-robot merger". Some European countries are including robotics in their national programs and trying to promote a safe and flexible cooperation between robots and operators to achieve better productivity. For example, the German Federal Institute for Occupational Safety and Health (BAuA) organises annual workshops on the topic "human-robot collaboration".
In the future, cooperation between robots and humans will be diversified, with robots increasing their autonomy and human-robot collaboration reaching completely new forms. Current approaches and technical standards aiming to protect employees from the risk of working with collaborative robots will have to be revised.
User experience
Good user experience design anticipates the needs, experiences, behaviors, language and cognitive abilities, and other factors of each user group. It then uses these insights to produce a product or solution that is ultimately useful and usable. For robots, user experience begins with an understanding of the robot's intended task and environment, while considering any possible social impact the robot may have on human operations and interactions with it.
Communication can be defined as the transmission of information through signals, which are elements perceived through touch, sound, smell and sight. A signal connects the sender to the receiver and consists of three parts: the signal itself, what it refers to, and the interpreter. Body postures and gestures, facial expressions, and hand and head movements are all part of nonverbal behavior and communication. Robots are no exception when it comes to human-robot interaction. Therefore, humans use their verbal and nonverbal behaviors to communicate their defining characteristics. Similarly, social robots need this coordination to perform human-like behaviors.
Careers
Robotics is an interdisciplinary field, combining primarily mechanical engineering and computer science but also drawing on electronic engineering and other subjects. The usual way to build a career in robotics is to complete an undergraduate degree in one of these established subjects, followed by a graduate (master's) degree in robotics. Graduate degrees are typically joined by students coming from all of the contributing disciplines, and include familiarization with relevant undergraduate-level subject matter from each of them, followed by specialist study in pure robotics topics which build upon them. As an interdisciplinary subject, robotics graduate programmes tend to be especially reliant on students working and learning together and sharing their knowledge and skills from their home-discipline first degrees.
Robotics industry careers then follow the same pattern, with most roboticists working as part of interdisciplinary teams of specialists from these home disciplines followed by the robotics graduate degrees which enable them to work together. Workers typically continue to identify as members of their home disciplines who work in robotics, rather than as 'roboticists'. This structure is reinforced by the nature of some engineering professions, which grant chartered engineer status to members of home disciplines rather than to robotics as a whole.
Robotics careers are widely predicted to grow in the 21st century, as robots replace more manual and intellectual human work. Some workers who lose their jobs to robotics may be well-placed to retrain to build and maintain these robots, using their domain-specific knowledge and skills.
History
See also
Artificial intelligence
Autonomous robot
Cloud robotics
Cognitive robotics
Evolutionary robotics
Fog robotics
Glossary of robotics
Index of robotics articles
Mechatronics
Multi-agent system
Outline of robotics
Quantum robotics
Roboethics
Robot rights
Robotic art
Robotic governance
Self-reconfiguring modular robot
Soft robotics
Telerobotics
Notes
References
Further reading
External links
IEEE Robotics and Automation Society
Investigation of social robots – Robots that mimic human behaviors and gestures.
Wired's guide to the '50 best robots ever', a mix of robots in fiction (Hal, R2D2, K9) to real robots (Roomba, Mobot, Aibo). | 0.774235 | 0.998758 | 0.773273 |
Context analysis | Context analysis is a method to analyze the environment in which a business operates. Environmental scanning mainly focuses on the macro environment of a business. But context analysis considers the entire environment of a business, its internal and external environment. This is an important aspect of business planning. One kind of context analysis, called SWOT analysis, allows the business to gain an insight into their strengths and weaknesses and also the opportunities and threats posed by the market within which they operate. The main goal of a context analysis, SWOT or otherwise, is to analyze the environment in order to develop a strategic plan of action for the business.
Context analysis also refers to a method of sociological analysis associated with Scheflen (1963) which holds that 'a given act, be it a glance at [another] person, a shift in posture, or a remark about the weather, has no intrinsic meaning. Such acts can only be understood when taken in relation to one another.' (Kendon, 1990: 16). That method is not discussed here; only context analysis in the business sense is.
Define market or subject
The first step of the method is to define a particular market (or subject) one wishes to analyze and focus all analysis techniques on what was defined. A subject, for example, can be a newly proposed product idea.
Trend Analysis
The next step of the method is to conduct a trend analysis. Trend analysis is an analysis of macro environmental factors in the external environment of a business, also called PEST analysis. It consists of analyzing political, economic, social, technological and demographic trends. This can be done by first determining which factors, on each level, are relevant for the chosen subject and then scoring each item to specify its importance. This allows the business to identify those factors that can influence it. The business cannot control these factors, but it can try to cope with them by adapting itself. The trends (factors) addressed in PEST analysis are political, economic, social and technological; for context analysis, demographic trends are also of importance. Demographic trends are those factors that have to do with the population, such as average age, religion and education. Demographic information is of importance if, for example during market research, a business wants to determine a particular market segment to target. The other trends are described in environmental scanning and PEST analysis. Trend analysis covers only part of the external environment. Another important aspect of the external environment that a business should consider is its competition. This is the next step of the method, competitor analysis.
Competitor Analysis
As one can imagine, it is important for a business to know who its competitors are, how they do business and how powerful they are, so that it can act both defensively and offensively. Competitor analysis describes several techniques for conducting such an analysis. Another technique, used here, involves conducting four sub-analyses, namely: determination of competition levels, competitive forces, competitor behavior and competitor strategy.
Competition levels
Businesses compete on several levels and it is important for them to analyze these levels so that they can understand the demand. Competition is identified on four levels:
Consumer needs: level of competition that refers to the needs and desires of consumers. A business should ask: What are the desires of the consumers?
General competition: The kind of consumer demand. For example: do consumers prefer shaving with an electric razor or a razor blade?
Brand: This level refers to brand competition. Which brands are preferable to a consumer?
Product: This level refers to the type of demand. Thus what types of products do consumers prefer?
Another important aspect of a competition analysis is to increase consumer insight. For example, Ducati, by interviewing many of its customers, concluded that its main competitor is not another motorcycle but sports cars from manufacturers like Porsche or GM. This will of course influence the competition level within this business.
Competitive forces
These are forces that determine the level of competition within a particular market. Six forces have to be taken into consideration: the power of the competition, the threat of new entrants, the bargaining power of buyers, the bargaining power of suppliers, the threat of substitute products and the importance of complementary products. This analysis is described in Porter's five forces analysis and the related six forces model.
Competitor behavior
Competitor behaviors are the defensive and offensive actions of the competition.
Competitor strategy
These strategies refer to how an organization competes with other organizations. Examples are a low-price strategy and a product-differentiation strategy.
Opportunities and Threats
The next step, after the trend analysis and competitor analysis are conducted, is to determine threats and opportunities posed by the market. The trends analysis revealed a set of trends that can influence the business in either a positive or a negative manner. These can thus be classified as either opportunities or threats. Likewise, the competitor analysis revealed positive and negative competition issues that can be classified as opportunities or threats.
Organization Analysis
The last phase of the method is an analysis of the internal environment of the organization, thus the organization itself. The aim is to determine which skills, knowledge and technological fortes the business possesses. This entails conducting an internal analysis and a competence analysis.
Internal analysis
The internal analysis, also called SWOT analysis, involves identifying the organization's strengths and weaknesses. The strengths refer to factors that can result in a market advantage, and the weaknesses to factors that put the business at a disadvantage because it is unable to comply with the market's needs.
Competence analysis
Competences are the combination of a business's knowledge, skills and technology that can give it an edge over the competition. Conducting such an analysis involves identifying market-related competences, integrity-related competences and functional-related competences.
SWOT-i matrix
The previous sections described the major steps involved in context analysis. All these steps resulted in data that can be used for developing a strategy. These are summarized in a SWOT-i matrix. The trend and competitor analysis revealed the opportunities and threats posed by the market. The organization analysis revealed the competences of the organization and also its strengths and weaknesses. These strengths, weaknesses, opportunities and threats summarize the entire context analysis. A SWOT-i matrix is used to depict them and to help visualize the strategies that are to be devised. SWOT-i stands for Strengths, Weaknesses, Opportunities, Threats and Issues. The issues refer to strategic issues that will be used to devise a strategic plan.
This matrix combines the strengths with the opportunities and threats, and the weaknesses with the opportunities and threats that were identified during the analysis. The matrix thus reveals four clusters, illustrated in the sketch after this list:
Cluster strengths and opportunities: use strengths to take advantage of opportunities.
Cluster strengths and threats: use strengths to overcome the threats.
Cluster weaknesses and opportunities: certain weaknesses hamper the organization from taking advantage of opportunities therefore they have to look for a way to turn those weaknesses around.
Cluster weaknesses and threats: there is no way that the organization can overcome the threats without having to make major changes.
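As a purely illustrative sketch (in Python), the four clusters can be assembled by pairing each strength and weakness with each opportunity and threat; the entries below are taken from the Arden Systems example later in this article, and each resulting pairing is a candidate strategic issue.

    strengths = ["Product differentiation"]
    weaknesses = ["Lacks innovative people"]
    opportunities = ["Market segments ignored by the leader", "New IT graduates"]
    threats = ["IT graduates founding competing start-ups"]

    swot_i = {
        "strengths x opportunities":  [(s, o) for s in strengths for o in opportunities],
        "strengths x threats":        [(s, t) for s in strengths for t in threats],
        "weaknesses x opportunities": [(w, o) for w in weaknesses for o in opportunities],
        "weaknesses x threats":       [(w, t) for w in weaknesses for t in threats],
    }

    for cluster, issues in swot_i.items():
        print(cluster, issues)   # each pairing is a candidate strategic issue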
Strategic Plan
The ultimate goal of context analysis is to develop a strategic plan. The previous sections described all the steps that form the stepping stones to developing a strategic plan of action for the organization. The trend and competitor analysis gives insight to the opportunities and threats in the market and the internal analysis gives insight to the competences of the organization. And these were combined in the SWOT-i matrix. The SWOT-i matrix helps identify issues that need to be dealt with. These issues need to be resolved by formulating an objective and a plan to reach that objective, a strategy.
Example
Joe Arden is in the process of writing a business plan for his business idea, Arden Systems. Arden Systems will be a software business that focuses on the development of software for small businesses. Joe realizes that this is a tough market because there are many software companies that develop business software. Therefore, he conducts context analysis to gain insight into the environment of the business in order to develop a strategic plan of action to achieve competitive advantage within the market.
Define market
First step is to define a market for analysis. Joe decides that he wants to focus on small businesses consisting of at most 20 employees.
Trend Analysis
Next step is to conduct trend analysis. The macro environmental factors that Joe should take into consideration are as follows:
Political trend: Intellectual property rights
Economic trend: Economic growth
Social trend: Reduce operational costs; Ease for conducting business administration
Technological trend: Software suites; Web applications
Demographic trend: Increase in the graduates of IT related studies
Competitor Analysis
Following trend analysis is competitor analysis. Joe analyzes the competition on four levels to gain insight into how they operate and where advantages lie.
Competition level:
Consumer need: Arden Systems will compete on the fact that consumers want to conduct their business efficiently and effectively.
Brand: There are software businesses that have been making business software for a while and thus have become very popular in the market. Competing based on brand will be difficult.
Product: They will be packaged software like the major competition.
Competitive forces: Forces that can affect Arden Systems are in particular:
The bargaining power of buyers: the extent to which they can switch from one product to the other.
Threat of new entrants: it is very easy for someone to develop a new software product that can be better than Arden's.
Power of competition: the market leaders have most of the cash and customers; they have the power to mold the market.
Competitor behavior: The focus of the competition is to take over the position of the market leader.
Competitor strategy: Joe intends to compete based on product differentiation.
Opportunities and Threats
Now that Joe has analyzed the competition and the trends in the market he can define opportunities and threats.
Opportunities:
Because the competitors focus on taking over the leadership position, Arden can focus on those segments of the market that the market leader ignores. This allows them to take over where the market leader shows weakness.
Because there are new IT graduates, Arden can employ or partner with someone who may have a brilliant idea.
Threats:
IT graduates with fresh ideas can start their own software businesses and become major competitors of Arden Systems.
Organization analysis
After Joe has identified the opportunities and threats of the market he can try to figure out what Arden System's strengths and weaknesses are by doing an organization analysis.
Internal analysis:
Strength: Product differentiation
Weakness: Lacks innovative people within the organization
Competence analysis:
Functional related competence: Arden Systems provides system functionalities that fit small businesses.
Market-related competence: Arden Systems has the opportunity to focus on a part of the market which is ignored.
SWOT-i matrix
After the previous analyses, Joe can create a SWOT-i matrix to perform SWOT analysis.
Strategic Plan
After creating the SWOT-i matrix, Joe is now able to devise a strategic plan.
Focus all software development efforts to that part of the market which is ignored by market leaders, small businesses.
Employ innovative recent IT graduates to stimulate innovation within Arden Systems.
See also
Organization design
Segmenting and positioning
Environmental scanning
Market research
SWOT analysis
Six Forces Model
PESTLE analysis
Gap analysis
References
Van der Meer, P.O. (2005). Omgevings analyse [Environmental analysis]. In Ondernemerschap in hoofdlijnen (pp. 74–85). Houten: Wolters-Noordhoff.
Ward, J. & Peppard, J. (2002). The Strategic Framework. In Strategic Planning for Information Systems (pp. 70–81). England: John Wiley & Sons.
Ward, J. & Peppard, J. (2002). Situation Analysis. In Strategic Planning for Information Systems (pp. 82–83). England: John Wiley & Sons.
Porter, M. (1980). Competitive strategy: techniques for analyzing industries and competitors. New York: Free Press
Kendon, A. (1990). Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press.
Climate resilience | Climate resilience is a concept to describe how well people or ecosystems are prepared to bounce back from certain climate hazard events. The formal definition of the term is the "capacity of social, economic and ecosystems to cope with a hazardous event or trend or disturbance". For example, climate resilience can be the ability to recover from climate-related shocks such as floods and droughts. Different actions can increase climate resilience of communities and ecosystems to help them cope. They can help to keep systems working in the face of external forces. For example, building a seawall to protect a coastal community from flooding might help maintain existing ways of life there.
To increase climate resilience means one has to reduce the climate vulnerability of people, communities and countries. This can be done in many different ways. They can be technological and infrastructural changes (including buildings and roads) or policy (e.g. laws and regulation). There are also social and community approaches, as well as nature-based ones, for example by restoring ecosystems like forests to act as natural barriers against climate impacts. These types of approaches are also known as climate change adaptation. Climate resilience is a broader concept that includes adaptation but also emphasizes a system-wide approach to managing risks. The changes have to be implemented at all scales of society, from local community action all the way to global treaties. It also emphasizes the need to transform systems and societies and to better cope with a changed climate.
To make societies more resilient, climate policies and plans should be shaped by choices that support sustainability. This kind of development has come to be known as climate resilient development. It has become a new paradigm for sustainable development. It influences theory and practice across all sectors globally. Two approaches that fall under this kind of development are climate resilient infrastructure and climate-smart agriculture. Another example are climate-resilient water services. These are services that provide access to high quality drinking water during all seasons and even during extreme weather events. On every continent, governments are now adopting policies for climate resilient economies. International frameworks such as the Paris Agreement and the Sustainable Development Goals are drivers for such initiatives.
Tools exist to measure climate resilience. They allow for comparisons of different groups of people through standardized metrics. Objective tools use fixed and transparent definitions of resilience. Two examples for objective tools are the Resilience Index Measurement and Analysis (RIMA) and the Livelihoods Change Over Time (LCOT). Subjective approaches on the other hand use people's feelings of what constitutes resilience. People then make their own assessment of their resilience.
Definition
Climate resilience is generally considered to be the ability to recover from, or to mitigate vulnerability to, climate-related shocks such as floods and droughts. It is a political process that strengthens the ability of all to mitigate vulnerability to risks from, and adapt to changing patterns in, climate hazards and variability.
The IPCC Sixth Assessment Report considers climate resilience to be "the capacity of social, economic and ecosystems to cope with a hazardous event or trend or disturbance". It includes the abilities to reorganize and learn.
Resilience is a useful concept because it speaks across sectors and disciplines but this also makes it open to interpretation resulting in differing, and at times competing, definitions. The definition of climate resilience is heavily debated, in both conceptual and practical terms.
According to one framework, the three basic capacities of resilience are adaptive, anticipatory and absorptive capacity. Each of these capacities is more readily recognizable, which also means that any changes can more easily be tracked. The focus is on resilience as an outcome of an action or program, and how to measure an improvement.
Climate resilience is strongly related to climate change adaptation because both have to do with strengthening the capacity of a system to withstand climate events. Adaptation and resilience are often used interchangeably, however, there are key differences.
Resilience involves a more systematic approach to absorbing change. It involves using those changes to become more efficient. The idea is that people can intervene to reorganize the system when disturbance creates an opportunity to do so. Climate resilience is an important part of building system-level resilience to multiple shocks.
Adaptation is any action or process that helps people or nature adjust to negative impacts of climate change. More rarely, it is about taking advantage of those changes.
Climate resilient development is a closely related area of work and research topic that has recently emerged. It describes situations in which adaptation, mitigation and development solutions are pursued together. It is able to benefit from synergies from among the actions and reduce trade-offs.
Implementation
Currently, the majority of work regarding climate resilience has focused on actions taken to maintain existing systems and structures. Such adaptations are also considered to be incremental actions rather than transformational ones. They can help to keep the system working in the face of external forces. For example, building a seawall to protect a coastal community from flooding might help maintain existing ways of life there. In this way, implemented adaptation builds upon resilience as a way of bouncing back to recover after a disturbance.
On the other hand, climate resilience projects can also be activities to promote and support transformational adaptation. This is because transformational adaptation is connected with implementation at scale and ideally at the system-level. Transformations, and the processes of transition, cover major systems and sectors at scale. These are energy, land and ecosystems, urban and infrastructure, and industrial and societal. Structural changes are also recognized as transformational. Changing land use regulations in a coastal community and establishing a programme of managed retreat are examples of structural changes. However, transformations may fail if they do not integrate social justice, consider power differences and political inclusion, and if they do not deliver improvements in incomes and wellbeing for everyone.
Building climate resilience is a challenging activity that involves a wide range of actors and agents. It can involve individuals, community organizations, corporations, government at all levels as well as international organizations. Research shows that the strongest indicator of successful climate resilience efforts at all scales is a well developed, existing network of social, political, economic and financial institutions that is already positioned to effectively take on the work of identifying and addressing the risks posed by climate change. Cities, states, and nations that have already developed such networks generally have far higher net incomes and gross domestic product (GDP).
By sector
Development
"Climate resilient development" has become a new (albeit contested) paradigm for sustainable development, influencing theory and practice across all sectors globally. This is particularly true in the water sector, since water security is intimately connected to climate change. On every continent, governments are adopting policies for climate resilient economies, driven in part by international frameworks such as the Paris Agreement and the Sustainable Development Goals.
Climate resilient development "integrates adaptation measures and their enabling conditions with mitigation to advance sustainable development for all". It involves questions of equity and system transitions, and includes adaptations for human, ecosystem and planetary health. Climate resilient development is facilitated by developing partnerships with traditionally marginalized groups, including women, youth, Indigenous Peoples, local communities and ethnic minorities.
To achieve climate resilient development, the following actions are needed: increasing climate information, financing, and technical capacity for flexible and dynamic systems. This needs to be coupled with greater consideration of the socio-ecological resilience and context-specific values of marginalized communities and meaningful engagement with the most vulnerable in decision making. Consequently, resilience produces a range of challenges and opportunities when applied to sustainable development.
Infrastructure
Infrastructure failures can have broad-reaching consequences extending away from the site of the original event, and for a considerable duration after the immediate failure. Furthermore, increasing reliance on interdependent infrastructure systems, in combination with the effects of climate change and population growth, contributes to increasing vulnerability and exposure, and a greater probability of catastrophic failures. To reduce this vulnerability, and in recognition of limited resources and future uncertainty about climate projections, new and existing long-lasting infrastructure must undergo risk-based engineering and economic analyses to properly allocate resources and design for climate resilience.
Incorporating climate projections into building and infrastructure design standards, investment and appraisal criteria, and model building codes is currently not common. Some resilience guidelines and risk-informed frameworks have been developed by public entities. Such manuals can offer guidance for adaptive design methods, characterization of extremes, development of flood design criteria, flood load calculation and the application of adaptive risk management principles to account for more severe climate/weather extremes. One example is the "Climate Resiliency Design Guidelines" by New York City.
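As a simplified illustration of an adaptive flood design criterion, the sketch below raises a design flood elevation by a projected sea level rise increment plus a freeboard margin. The additive rule, the function name, and the numbers are assumptions chosen for illustration; they do not reproduce the method of the New York City guidelines or any other specific standard.

```python
# Generic illustration of an adaptive flood design criterion: raise the
# design flood elevation by a projected sea level rise increment plus a
# safety margin (freeboard). Values and the simple additive rule are
# illustrative assumptions, not the method of any specific guideline.
def design_flood_elevation(base_flood_elevation_m, projected_slr_m, freeboard_m=0.5):
    """Return the elevation (in metres) to which critical components are raised."""
    return base_flood_elevation_m + projected_slr_m + freeboard_m

# Example: 3.0 m base flood elevation, 0.76 m projected sea level rise
# over the asset's service life, 0.5 m freeboard.
print(design_flood_elevation(3.0, 0.76))   # 4.26 m
```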
Agriculture
Water and sanitation
Ecosystems
Climate change caused by humans can reduce ecosystem resilience. It can lead to regime shifts in ecosystems, often to less desirable and degraded conditions. On the other hand, some human actions can make ecosystems more resilient and help species adapt. Examples are protecting larger areas of semi-natural habitat and creating links between parts of the landscape to help species move.
Disaster management
At larger governmental levels, general programs to improve climate resilience through greater disaster preparedness are being implemented. In countries such as Norway, for example, this includes the development of more sensitive and far-reaching early warning systems for extreme weather events, the creation of emergency electricity power sources, enhanced public transportation systems, and more.
Resilience assessment
Governments and development agencies are spending increasing amounts of finance to support resilience-building interventions. Resilience measurement can make valuable contributions in guiding resource allocations towards resilience-building. This includes targeted identification of vulnerability hotspots, a better understanding of the drivers of resilience, and tools to infer the impact and effectiveness of resilience-building interventions. In recent years, a large number of resilience measurement tools have emerged, offering ways to track and measure resilience at a range of scales - from individuals and households to communities and nations.
Indicators and indices
Efforts to measure climate resilience currently face several technical challenges. Firstly, the definition of resilience is heavily contested, making it difficult to choose appropriate characteristics and indicators to track. Secondly, the resilience of households or communities cannot be measured using a single observable metric. Resilience is made up of a range of processes and characteristics, many of which are intangible and difficult to observe (such as social capital). As a result, many resilience toolkits resort to using large lists of proxy indicators.
Indicator approaches use a composite index of many individual quantifiable indicators. To generate the index value or 'score', most often a simple average is calculated across a set of standardized values. However, weighting is sometimes applied according to what are thought to be the most important determinants of resilience.
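As an illustration of the indicator approach just described, the sketch below standardizes a set of proxy indicators and combines them into a weighted composite score. The indicator names, sample values, and equal default weights are assumptions for illustration and do not reproduce any particular published toolkit.

```python
# Minimal sketch of a composite resilience index: standardize each
# indicator (z-scores), then combine them with a weighted average.
# Indicator names and weights are illustrative assumptions only.
from statistics import mean, pstdev

def composite_index(households, weights=None):
    """households: list of dicts mapping indicator name -> raw value."""
    indicators = sorted(households[0].keys())
    weights = weights or {k: 1.0 for k in indicators}      # equal weights by default
    total_w = sum(weights[k] for k in indicators)

    # Standardize each indicator across households (z-score).
    stats = {}
    for k in indicators:
        values = [h[k] for h in households]
        stats[k] = (mean(values), pstdev(values) or 1.0)    # avoid division by zero

    scores = []
    for h in households:
        z = {k: (h[k] - stats[k][0]) / stats[k][1] for k in indicators}
        scores.append(sum(weights[k] * z[k] for k in indicators) / total_w)
    return scores

# Illustrative use: three households, three hypothetical proxy indicators.
data = [
    {"income": 1200, "social_capital": 3.0, "access_to_credit": 1},
    {"income": 800,  "social_capital": 4.5, "access_to_credit": 0},
    {"income": 2000, "social_capital": 2.0, "access_to_credit": 1},
]
print(composite_index(data))
```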
Climate resilience framework
A climate resilience framework can better equip governments and policymakers to develop sustainable solutions that combat the effects of climate change. To begin with, climate resilience establishes the idea of multi-stable socio-ecological systems (socio-ecological systems can actually stabilize around a multitude of possible states). Secondly, climate resilience has played a critical role in emphasizing the importance of preventive action when assessing the effects of climate change. Although adaptation is always going to be a key consideration, making changes after the fact has a limited capability to help communities and nations deal with climate change. By working to build climate resilience, policymakers and governments can take a more comprehensive stance that works to mitigate the harms of climate change impacts before they happen. Finally, a climate resilience perspective encourages greater cross-scale connectedness of systems. Creating mechanisms of adaptation that occur in isolation at local, state, or national levels may leave the overall social-ecological system vulnerable. A resilience-based framework would require far more cross-talk, and the creation of environmental protections that are more holistically generated and implemented.
Tools
Tools for resilience assessment vary depending on the sector, the scale and the entity such as households, communities or species. They vary also by the type of assessment, for example if the aim is to understand effectiveness of resilience-building interventions.
Community resilience assessment tools
Community resilience assessment is an important step toward reducing disasters from climate hazards. It also helps communities prepare to take advantage of opportunities to reorganize. There are many tools available for investigating the environmental, social, economic and physical features of a community that are important for resilience. A survey of the available tools found many differences between them, with no standardized approaches to assessing resilience. One category of tools focuses mainly on measuring outcomes. In contrast, tools that measure resilience at the 'starting point' or early stages, and continuously over a project, are less common.
Livelihoods and food security
Most of the recent initiatives to measure resilience in rural development contexts share two shortcomings: complexity and high cost. USAID published a field guide for assessing climate resilience in smallholder supply chains.
Most objective approaches use fixed and transparent definitions of resilience and allow for different groups of people to be compared through standardized metrics. However, as many resilience processes and capacities are intangible, objective approaches are heavily reliant on crude proxies. Examples of commonly used objective measures include the Resilience Index Measurement and Analysis (RIMA) and the Livelihoods Change Over Time (LCOT).
Subjective approaches to resilience measurement take a contrasting view. They assume that people have a valid understanding of their own resilience and seek to factor perceptions into the measurement process. They challenge the notion that experts are best placed to evaluate other people's lives. Subjective approaches use people's own sense of what constitutes resilience and allow them to self-evaluate accordingly. An example is the Subjectively-Evaluated Resilience Score (SERS).
Related concepts
Climate change adaptation
Climate change vulnerability
Disaster risk reduction
See also
References
Ecology terminology
Environmental impact of agriculture
Environmental justice
Climate change mitigation
Autodidacticism
Autodidacticism (also autodidactism) or self-education (also self-learning, self-study and self-teaching) is the practice of education without the guidance of schoolmasters (i.e., teachers, professors, institutions).
Overview
Autodidacts are self-taught individuals who learn a subject of study through self-study. This educative process may involve or complement formal education. Formal education itself may have a hidden curriculum that requires self-study for the uninitiated.
Generally, autodidacts are individuals who choose the subject they will study, their studying material, and the studying rhythm and time. Autodidacts may or may not have formal education, and their study may be either a complement or an alternative to formal education. Many notable contributions have been made by autodidacts.
The self-learning curriculum is open-ended. One may seek out alternative pathways in education and use these to gain competency; self-study may satisfy some prerequisite criteria for experiential education or apprenticeship.
Self-education techniques used in self-study can include reading educational textbooks, watching educational videos, listening to educational audio recordings, or visiting infoshops. The learner adopts some space as a learning space, using critical thinking to develop study skills within the broader learning environment until an academic comfort zone is reached.
Etymology
The term has its roots in the Ancient Greek words αὐτός (autós, "self") and διδακτικός (didaktikós, "teaching"). The related term didacticism defines an artistic philosophy of education.
Terminology
Various terms are used to describe self-education. One such is heutagogy, coined in 2000 by Stewart Hase and Chris Kenyon of Southern Cross University in Australia; others are self-directed learning and self-determined learning. In the heutagogy paradigm, a learner should be at the centre of their own learning. A truly self-determined learning approach also sees the heutagogic learner exploring different approaches to knowledge in order to learn; there is an element of experimentation underpinned by a personal curiosity.
Andragogy "strive[s] for autonomy and self-direction in learning", while Heutagogy "identif[ies] the potential to learn from novel experiences as a matter of course [...] manage their own learning". Ubuntugogy is a type of cosmopolitanism that has a collectivist ethics of awareness concerning the African diaspora.
Modern era
Autodidacticism is sometimes a complement of modern formal education. As a complement to formal education, students would be encouraged to do more independent work. The Industrial Revolution created a new situation for self-directed learners.
Before the twentieth century, only a small minority of people received an advanced academic education. As Joseph Whitworth noted in his influential 1853 report on industry, literacy rates were higher in the United States than in England. However, even in the U.S., most children were not completing high school. High school education was necessary to become a teacher. In modern times, a larger percentage of those completing high school also attended college, usually to pursue a professional degree, such as law or medicine, or a divinity degree.
Collegiate teaching was based on the classics (Latin, philosophy, ancient history, theology) until the early nineteenth century. There were few if any institutions of higher learning offering studies in engineering or science before 1800. Institutions such as the Royal Society did much to promote scientific learning, including public lectures. In England, there were also itinerant lecturers offering their service, typically for a fee.
Prior to the nineteenth century, there were many important inventors working as millwrights or mechanics who, typically, had received an elementary education and served an apprenticeship. Mechanics, instrument makers and surveyors received varying amounts of mathematics training. James Watt was a surveyor and instrument maker and is described as being "largely self-educated". Watt, like some other autodidacts of the time, became a Fellow of the Royal Society and a member of the Lunar Society. In the eighteenth century these societies often gave public lectures and were instrumental in teaching chemistry and other sciences with industrial applications which were neglected by traditional universities. Academies also arose to provide scientific and technical training.
Years of schooling in the United States began to increase sharply in the early twentieth century. This phenomenon was seemingly related to increasing mechanization displacing child labor. The automated glass bottle-making machine is said to have done more for education than child labor laws because boys were no longer needed to assist. However, the number of boys employed in this particular industry was not that large; it was mechanization in several sectors of industry that displaced child labor toward education. For males in the U.S. born 1886–90, years of school averaged 7.86, while for those born in 1926–30, years of school averaged 11.46.
One of the most recent trends in education is that the classroom environment should cater towards students' individual needs, goals, and interests. This model adopts the idea of inquiry-based learning where students are presented with scenarios to identify their own research, questions and knowledge regarding the area. As a form of discovery learning, students in today's classrooms are being provided with more opportunity to "experience and interact" with knowledge, which has its roots in autodidacticism.
Successful self-teaching can require self-discipline and reflective capability. Some research suggests that the ability to regulate one's own learning may need to be modeled to some students so that they become active learners, while others learn dynamically via a process outside conscious control. To interact with the environment, a framework has been identified to determine the components of any learning system: a reward function, incremental action value functions and action selection methods. Rewards work best in motivating learning when they are specifically chosen on an individual student basis. New knowledge must be incorporated into previously existing information as its value is to be assessed. Ultimately, these scaffolding techniques, as described by Vygotsky (1978) and problem solving methods are a result of dynamic decision making.
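The learning-system framework mentioned above (a reward function, incremental action-value functions, and an action-selection method) can be illustrated with a short sketch. The study actions, payoff values, and the epsilon-greedy selection rule below are assumptions chosen for illustration; they are not drawn from the cited research.

```python
# Illustrative sketch of the three components named above: a reward
# function, incremental action-value estimates, and an action-selection
# rule (epsilon-greedy). Actions and reward values are invented.
import random

actions = ["reread_notes", "practice_problems", "watch_video"]
q = {a: 0.0 for a in actions}      # incremental action-value estimates
counts = {a: 0 for a in actions}
epsilon = 0.1                      # exploration rate

def reward(action):
    """Hypothetical reward function: noisy payoff of a study action."""
    base = {"reread_notes": 0.3, "practice_problems": 0.8, "watch_video": 0.5}
    return base[action] + random.gauss(0, 0.1)

def select_action():
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[a])

for _ in range(500):
    a = select_action()
    r = reward(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental sample-average update

print(q)   # estimates should converge toward the underlying payoffs
```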
In his book Deschooling Society, philosopher Ivan Illich strongly criticized 20th-century educational culture and the institutionalization of knowledge and learning - arguing that institutional schooling as such is an irretrievably flawed model of education - advocating instead ad-hoc co-operative networks through which autodidacts could find others interested in teaching themselves a given skill or about a given topic, supporting one another by pooling resources, materials, and knowledge.
Secular and modern societies have given foundations for new systems of education and new kinds of autodidacts. As Internet access has become more widespread the World Wide Web (explored using search engines such as Google) in general, and websites such as Wikipedia (including parts of it that were included in a book or referenced in a reading list), YouTube, Udemy, Udacity and Khan Academy in particular, have developed as learning centers for many people to actively and freely learn together. Organizations like The Alliance for Self-Directed Education (ASDE) have been formed to publicize and provide guidance for self-directed education. Entrepreneurs like Henry Ford, Steve Jobs, and Bill Gates are considered influential self-teachers.
History
The first philosophical claim supporting an autodidactic program to the study of nature and God was in the philosophical novel Hayy ibn Yaqdhan (Alive son of the Vigilant), whose titular hero is considered the archetypal autodidact. The story is a medieval autodidactic utopia, a philosophical treatise in a literary form, which was written by the Andalusian philosopher Ibn Tufail in the 1160s in Marrakesh. It is a story about a feral boy, an autodidact prodigy who masters nature through instruments and reason, discovers laws of nature by practical exploration and experiments, and gains summum bonum through a mystical mediation and communion with God. The hero rises from his initial state of tabula rasa to a mystical or direct experience of God after passing through the necessary natural experiences. The focal point of the story is that human reason, unaided by society and its conventions or by religion, can achieve scientific knowledge, preparing the way to the mystical or highest form of human knowledge.
Commonly translated as "The Self-Taught Philosopher" or "The Improvement of Human Reason", Ibn-Tufayl's story Hayy Ibn-Yaqzan inspired debates about autodidacticism in a range of historical fields from classical Islamic philosophy through Renaissance humanism and the European Enlightenment. In his book Reading Hayy Ibn-Yaqzan: a Cross-Cultural History of Autodidacticism, Avner Ben-Zaken showed how the text traveled from late medieval Andalusia to early modern Europe and demonstrated the intricate ways in which autodidacticism was contested in and adapted to diverse cultural settings.
Autodidacticism apparently intertwined with struggles over Sufism in twelfth-century Marrakesh; controversies about the role of philosophy in pedagogy in fourteenth-century Barcelona; quarrels concerning astrology in Renaissance Florence in which Pico della Mirandola pleads for autodidacticism against the strong authority of intellectual establishment notions of predestination; and debates pertaining to experimentalism in seventeenth-century Oxford. Pleas for autodidacticism echoed not only within close philosophical discussions; they surfaced in struggles for control between individuals and establishments.
In the story of Black American self-education, Heather Andrea Williams presents a historical account to examine Black American's relationship to literacy during slavery, the Civil War and the first decades of freedom. Many of the personal accounts tell of individuals who have had to teach themselves due to racial discrimination in education.
In architecture
Many successful and influential architects, such as Mies van der Rohe, Frank Lloyd Wright, Viollet-le-Duc, and Tadao Ando, were self-taught.
Very few countries allow autodidacticism in architecture today. The practice of architecture and the use of the title "architect" are now protected in most countries.
Self-taught architects have generally studied and qualified in other fields such as engineering or arts and crafts. Jean Prouvé was first a structural engineer. Le Corbusier had an academic qualification in decorative arts. Tadao Ando started his career as a draftsman, and Eileen Gray studied fine arts.
When a political state starts to implement restrictions on the profession, there are issues related to the rights of established self-taught architects. In most countries the legislation includes a grandfather clause, authorising established self-taught architects to continue practicing. In the UK, the legislation allowed self-trained architects with two years of experience to register. In France, it allowed self-trained architects with five years of experience to register. In Belgium, the law allowed experienced self-trained architects in practice to register. In Italy, it allowed self-trained architects with 10 years of experience to register. In the Netherlands, the relevant legislation, along with additional procedures, allowed architects with 10 years of experience, and architects aged 40 years or over with 5 years of experience, to access the register.
However, other sovereign states chose to omit such a clause, and many established and competent practitioners were stripped of their professional rights. In the Republic of Ireland, a group named "Architects' Alliance of Ireland" is defending the interests of long-established self-trained architects who were deprived of their rights to practice as per Part 3 of the Irish Building Control Act 2007.
Theoretical research such as Architecture of Change, Sustainability and Humanity in the Built Environment or older studies such as Vers une Architecture from Le Corbusier describe the practice of architecture as an environment changing with new technologies, sciences, and legislation. All architects must be autodidacts to keep up to date with new standards, regulations, or methods.
Self-taught architects such as Eileen Gray, Luis Barragán, and many others, created a system where working is also learning, where self-education is associated with creativity and productivity within a working environment.
While he was primarily interested in naval architecture, William Francis Gibbs learned his profession through his own study of battleships and ocean liners. Through his life he could be seen examining and changing the designs of ships that were already built, that is, until he started his firm Gibbs and Cox.
Predictors
Openness is the largest predictor of self-directed learning out of the Big Five personality traits, though, in a study, personality only explained 10% of the variance in self-directed learning.
Future role
The role of self-directed learning continues to be investigated in learning approaches, along with other important goals of education, such as content knowledge, epistemic practices and collaboration. As colleges and universities offer distance learning degree programs and secondary schools provide cyber school options for K–12 students, technology provides numerous resources that enable individuals to have a self-directed learning experience. Several studies show these programs function most effectively when the "teacher" or facilitator has full ownership of the virtual space, encouraging a broad range of experiences to come together in an online format. This allows self-directed learning to encompass a chosen path of information inquiry, self-regulation methods, and reflective discussion among experts as well as novices in a given area. Furthermore, massive open online courses (MOOCs) make autodidacticism easier and thus more common.
A 2016 Stack Overflow poll reported that due to the rise of autodidacticism, 69.1% of software developers appear to be self-taught.
Notable individuals
Some notable autodidacts can be broadly grouped in the following interdisciplinary areas:
Artists and authors
Actors, musicians, and other artists
Architects
Engineers and inventors
Scientists, historians, and educators
Educational materials availability
Most governments have compulsory education, which may nonetheless deny the right to education on the basis of discrimination; state school teachers may also unwittingly indoctrinate students into the ideology of an oppressive community or government via a hidden curriculum.
See also
References
Further reading
External links
African-American society
African Americans and education
Alternative education
Applied learning
Area studies
Black studies
Cybernetics
Education activism
Education theory
Education in Poland during World War II
Education museums in the United States
Espionage
History of education in the United States
Information sensitivity
Learning
Learning methods
Learning to read
Lyceum movement
Methodology
Open content
Pedagogical movements and theories
Philosophical methodology
Philosophy of education
Play (activity)
Pre-emancipation African-American history
Problem solving methods
Research methods
Sampling (statistics)
School desegregation pioneers
Science experiments
Self-care
Teaching
Underground education
United States education law
WikiLeaks
Descriptive ethics
Descriptive ethics, also known as comparative ethics, is the study of people's beliefs about morality. It contrasts with prescriptive or normative ethics, which is the study of ethical theories that prescribe how people ought to act, and with meta-ethics, which is the study of what ethical terms and theories actually refer to. The following examples of questions that might be considered in each field illustrate the differences between the fields:
Descriptive ethics: What do people think is right?
Meta-ethics: What does "right" even mean?
Normative (prescriptive) ethics: How should people act?
Applied ethics: How do we take moral knowledge and put it into practice?
Description
Descriptive ethics is a form of empirical research into the attitudes of individuals or groups of people. In other words, this is the division of philosophical or general ethics that involves the observation of the moral decision-making process with the goal of describing the phenomenon. Those working on descriptive ethics aim to uncover people's beliefs about such things as values, which actions are right and wrong, and which characteristics of moral agents are virtuous. Research into descriptive ethics may also investigate people's ethical ideals or what actions societies reward or punish in law or politics. It should be noted that culture is generational rather than static: each new generation arrives with its own set of morals, which constitute its ethics. Descriptive ethics therefore also tracks whether and how moral beliefs shift across generations.
Because descriptive ethics involves empirical investigation, it is a field that is usually investigated by those working in the fields of evolutionary biology, psychology, sociology or anthropology. Information that comes from descriptive ethics is, however, also used in philosophical arguments.
Value theory can be either normative or descriptive but is usually descriptive.
Lawrence Kohlberg: An example of descriptive ethics
Lawrence Kohlberg is one example of a psychologist working on descriptive ethics. In one study, for example, Kohlberg questioned a group of boys about what would be a right or wrong action for a man facing a moral dilemma (specifically, the Heinz dilemma): should he steal a drug to save his wife, or refrain from theft even though that would lead to his wife's death?
Kohlberg's concern was not which choice the boys made, but the moral reasoning that lay behind their decisions. After carrying out a number of related studies, Kohlberg devised a theory about the development of human moral reasoning that was intended to reflect the moral reasoning actually carried out by the participants in his research. Kohlberg's research can be classed as descriptive ethics to the extent that he describes human beings' actual moral development. If, in contrast, he had aimed to describe how humans ought to develop morally, his theory would have involved prescriptive ethics.
See also
Experimental philosophy
List of ethics topics
Moral reasoning
Moral psychology
References
Further reading
Coleman, Stephen Edwin, "Digital Photo Manipulation: A Descriptive Analysis of Codes of Ethics and Ethical Decisions of Photo Editors" (2007). Dissertations. 1304. https://aquila.usm.edu/dissertations/1304
Descriptive ethics
Moral psychology
Horticulture
Horticulture is the art and science of growing plants. This definition is seen in its etymology, which is derived from the Latin words hortus, which means "garden", and cultura, which means "to cultivate". There are various divisions of horticulture because plants are grown for a variety of purposes. These divisions include, but are not limited to: gardening, plant production/propagation, arboriculture, landscaping, floriculture and turf maintenance. For each of these, there are various professions, aspects, tools used and associated challenges, each requiring highly specialized skills and knowledge of the horticulturist.
Typically, horticulture is characterized as the ornamental, small-scale/non-industrial cultivation of plants, as compared to the large-scale cultivation of crops/livestock that is seen in agriculture. However, there are aspects of horticulture that are industrialized/commercial such as greenhouse production across the globe.
Horticulture began with the domestication of plants around 10,000-20,000 years ago. At first, only plants for sustenance were grown and maintained, but eventually, as humanity became increasingly sedentary, plants were grown for their ornamental value. Horticulture is considered to have diverged from agriculture during the Middle Ages, when people started growing plants for pleasure/aesthetics, rather than just for sustenance.
Emerging technologies are moving the industry forward, especially in the way of altering plants to be more resistant to parasites, disease and drought. Modifying technologies such as Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR/Cas9) are also improving the nutrition, taste and yield of crops.
There are many horticultural organizations and societies found around the world, that are formed by horticulturists and those within the industry. These include: The Royal Horticultural Society, International Society for Horticultural Science, The American Society of Horticultural Science, The Horticultural Society of India, The Global Horticulture Initiative, The Chartered Institute of Horticulture and The Australian Society of Horticultural Science.
Divisions of horticulture and types of horticulturists
There are divisions and sub-divisions within horticulture, this is because plants are grown for many different reasons. Some of the divisions in horticulture include:
gardening
plant production and propagation
arboriculture
landscaping
floriculture
garden design and maintenance
turf maintenance
plant conservation and landscape restoration.
It includes the cultivation of all plants including, but not limited to: ornamental trees/shrubs/plants, fruits, vegetables, flowers, turf, nuts, seeds, herbs and other medicinal/edible plants. This cultivation may occur in garden spaces, nurseries, greenhouses, vineyards, orchards, parks, recreation areas, etc.
Horticulturists are those who study and practice the cultivation of plant material professionally. There are many different types of horticulturists with different job-titles, including: gardener, grower, farmer, arborist, floriculturist, landscaper, agronomist, designer, landscape architect, lawn-care specialist, nursery manager, botanical garden curator, horticulture therapist, and much more. They may be hired by a variety of companies/institutions including, but not limited to: botanical gardens, private/public gardens, parks, cemeteries, greenhouses, golf courses, vineyards, estates, landscaping companies, nurseries, educational institutions, etc. They may also be self-employed.
History
Horticulture began with the domestication of plants 10,000-20,000 years ago, and has since, been deeply integrated into humanity's history. The domestication of plants occurred independently within various civilizations across the globe. The history of horticulture overlaps with the history of agriculture and history of botany, as all three originated with the domestication of various plants for food. In Europe, agriculture and horticulture diverged at some point during the Middle Ages.
Early practices in horticulture
Early practices in horticulture include a number of various ways that people managed the land (using an assortment of tools), with a variety of methods and types of plants cultivated for a number of uses. Methods, tools and plants grown, have always depended on the culture and climate.
Pre-colonized North and Central America
A number of traditional horticultural practices are known today. For example, the Indigenous peoples of pre-colonized North America used biochar to enhance soil productivity by smoldering plant waste - European settlers called this soil Terra Preta de Indio. In North America, Indigenous people grew maize, squash, and sunflower - among other crops. Mesoamerican cultures focused on cultivating crops on a small scale, such as the milpa or maize field, around their dwellings or in specialized plots which were visited occasionally during migrations from one area to the next. In Central America, Maya horticulture involved augmentation of the forest with useful trees such as papaya, avocado, cacao, ceiba and sapodilla. In the fields, multiple crops such as beans, squash, pumpkins and chili peppers were grown. In many cultures, the first horticulturists were mainly or exclusively women.
Historical uses for plants in horticulture
In addition to the medicinal and nutritional value that plants hold, plants have also been grown for their beauty, and to impress and demonstrate the power, knowledge, status and even wealth of those in control of the cultivated plant material. This symbolic power of plants existed even before the beginnings of their cultivation.
There is evidence that various gardens maintained by the Aztecs were sacred, as they grew plants that held religious value. Plants were grown for their metaphorical relation to Gods and Goddesses. Flowers held symbolic power in religious rites, as they were offered to the Gods, as well as were given in ceremonies to leaders to demonstrate their connection to the Gods.
Aspects of horticulture
Propagation
Plant propagation in horticulture is the process by which the number of individual plants of a species is multiplied. Propagation involves both sexual and asexual methods. In sexual propagation seeds are used, while asexual propagation involves the division of plants, the separation of tubers, corms, and bulbs, and the use of techniques such as cutting, layering, and grafting.
Plant selection
When selecting plants to cultivate, a horticulturist may consider aspects based on the plant's intended use, including plant morphology, rarity, and utility. When selecting plants for the landscape, there are necessary observations of the location that must be made first. Considerations as to soil type, temperature/climate, light, moisture, and pre-existing plants are made. These evaluations of the given environment are taken into consideration when selecting plant material for the location. Plant selection may be for annual displays, or for more permanent plantings. Characteristics of the plant such as mature height/size, colour, growth habit, ornamental value, flowering time and invasive potential are what finalize the plant selection process.
Controlling environmental/growing variables
Environmental factors that affect plant development include: temperature, light, water, pH, nutrient availability, weather events (rain, snow, sleet, hail and freezing rain, dew, wind and frost), humidity, elevation, terrain, and micro-climate effects. In horticulture, these environmental variables may be avoided, controlled or manipulated in an indoor growing environment.
Temperature
Plants require specific temperatures to grow and develop properly. Temperature control can be done through a variety of methods. Covering plants with plastic in the form of cones (called hot caps) or tunnels can help to manipulate the surrounding temperature. Mulching is also an effective method to protect outdoor plants from frost during the wintertime. Other frost prevention methods include the use of wind machines, heaters, and sprinklers.
Light
Plants have evolved to require different amounts of light, and lengths of daytime; their growth and development is determined by the amount of light/light intensity that they receive. Control of this may be achieved artificially through the use of fluorescent lights in an indoor setting. Manipulating the amount of light also controls flowering. Lengthening the day encourages the flowering of long-day plants and discourages the flowering of short-day plants.
Water
Water management methods involve employing irrigation/drainage systems, and controlling soil moisture to the needs of the species. Methods of irrigation include surface irrigation, sprinkler irrigation, sub-irrigation, and trickle irrigation. Volume of water, pressure, and frequency are changed to optimize the growing environment. On a small scale watering can be done manually.
Growing media and soil management
The choice of growing media and its components helps support plant life. Within a greenhouse environment, growers may choose to grow their plants in an aquaponic system where no soil is used. Growers within a greenhouse setting will often opt for a soilless mix which does not include any components of naturally occurring soil. These mixes offer advantages such as water absorption and sterility, and are widely available within the industry.
Soil management methods are broad, and include the use of fertilizers, planned crop rotation to prevent the soil degradation seen in monocultures, and soil analysis.
Control by use of enclosed environments
Abiotic factors such as weather, light and temperature are all things that can be manipulated with enclosed environments such as cold frames, greenhouses, conservatories, poly houses and shade houses. Materials that are used in the construction of these buildings are chosen based on the climate, purpose and budget.
Cold frames provide an enclosed environment, they are built close to the ground and with a top made of glass or plastic. The glass or plastic allows sunlight into the frame during the day and prevents heat loss that would have been lost as long-wave radiation at night. This allows plants to start to be grown before the growing season starts. Greenhouses/conservatories are similar in function, but are larger in construction and heated with an external energy source. They can be built out of glass, although they are now primarily made from plastic sheets. More expensive and modern greenhouses can include temperature control through shade and light control or air-conditioning as well as automatic watering. Shade houses provide shading to limit water loss by evapotranspiration.
Challenges
Abiotic stresses
Commercial horticulture is required to support a rapidly growing population with demands for its products. Due to global climate change, extremes in temperature, the strength of precipitation events, flood frequency, and drought length and frequency are increasing. Together with other abiotic stressors such as salinity, heavy metal toxicity, UV damage, and air pollution, these create stressful environments for crop production. The problem is compounded as evapotranspiration increases, soils are depleted of nutrients, and oxygen levels fall, resulting in up to a 70% loss in crop yield.
Biotic stresses
Living organisms such as bacteria, viruses, fungi, parasites, insects, weeds and native plants are sources of biotic stress and can deprive the host of its nutrients. Plants respond to these stresses using defence mechanisms such as morphological and structural barriers, chemical compounds, proteins, enzymes and hormones. The impact of biotic stresses can be reduced using practices such as tilling, spraying, or Integrated Pest Management (IPM).
Harvest management
Care is required to reduce damage and losses to horticultural crops during harvest. Compression forces occur during harvesting, and horticultural goods can be subjected to a series of impacts during transport and packhouse operations. Different techniques are used to minimize mechanical injuries and wounding to plants, such as:
Manual harvesting: This is the process of harvesting horticultural crops by hand. Fruits, such as apples, pears and peaches, can be harvested by clippers.
Sanitation: Harvest bags, crates, clippers and other equipment must be cleaned prior to harvest.
Emerging technology
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR/Cas9) has recently gained recognition as a highly efficient, simplified, precise, and low-cost method of altering the genomes of species. Since 2013, CRISPR has been used to enhance a variety of species of grains, fruits, and vegetables. Crops are modified to increase their resistance to biotic and abiotic stressors such as parasites, disease, and drought, as well as to increase yield, nutrition, and flavour. Additionally, CRISPR has been used to edit out undesirable traits, for example, reducing the browning and the production of toxic and bitter substances of potatoes. CRISPR has also been employed to solve issues of low pollination rates and low fruit yield common in greenhouses. As compared to genetically modified organisms (GMOs), CRISPR does not add any foreign DNA to the plant's genome.
Organizations
There are various organizations worldwide that focus on promoting and encouraging research and education in all branches of horticultural science; such organizations include the International Society for Horticultural Science and the American Society of Horticultural Science.
In the United Kingdom, there are two main horticulture societies. The Ancient Society of York Florists is the oldest horticultural society in the world and was founded in 1768; this organization continues to host four horticultural shows annually in York, England. Additionally, The Royal Horticultural Society, established in 1804, is a charity in the United Kingdom that leads on the encouragement and improvement of the science, art, and practice of horticulture in all its branches. The organization shares the knowledge of horticulture through its community, learning programs, and world-class gardens and shows.
The Chartered Institute of Horticulture (CIH) is the Chartered professional body for horticulturists and horticultural scientists representing all sectors of the horticultural industry across Great Britain, Ireland and overseas. It is the only horticultural professional body through which top professionals can achieve Chartered status and become Chartered Horticulturists. In Australia, the Australian Institute of Horticulture and the Australian Society of Horticultural Science (established in 1990) are professional bodies that promote and enhance Australian horticultural science and industry. Finally, the New Zealand Horticulture Institute is another known horticultural organization.
In India, the Horticultural Society of India (now Indian Academy of Horticultural Sciences) is the oldest society which was established in 1941 at Lyallpur, Punjab (now in Pakistan) but was later shifted to Delhi in 1949. The other notable organization in operation since 2005 is the Society for Promotion of Horticulture based at Bengaluru. Both these societies publish scholarly journals – Indian Journal of Horticulture and Journal of Horticultural Sciences for the advancement of horticultural sciences. Horticulture in the Indian state of Kerala is spearheaded by Kerala State Horticulture Mission.
The National Junior Horticultural Association (NJHA) was established in 1934 and was the first organization in the world dedicated solely to youth and horticulture. NJHA programs are designed to help young people obtain a basic understanding of horticulture and develop skills in this ever-expanding art and science.
The Global Horticulture Initiative (GlobalHort) fosters partnerships and collective action among different stakeholders in horticulture. This organization has a special focus on horticulture for development (H4D), which involves using horticulture to reduce poverty and improve nutrition worldwide. GlobalHort is organized in a consortium of national and international organizations which collaborate in research, training, and technology-generating activities designed to meet mutually-agreed-upon objectives. GlobalHort is a non-profit organization registered in Belgium.
See also
Agricultural science
Agronomy
Floriculture
Forest gardening
Gardening
Genetically modified trees
Genomics of domestication
Hoe-farming
Horticultural botany
Horticultural flora
Horticultural oil
Horticultural therapy
Indigenous horticulture
Landscaping
Permaculture
Plant nutrition
Plug (horticulture)
Tropical horticulture
Turf management
Vertical farming
References
Further reading
C.R. Adams, Principles of Horticulture Butterworth-Heinemann; 5th edition (11 Aug 2008), .
External links
The Institute of Horticulture (archived 7 September 2015)
ISHS – International Society for Horticultural Science
The Royal Horticultural Society
British Library – information on the horticulture industry (archived 26 June 2006)
History of Horticulture (archived 10 September 2012)
HORTIVAR – The FAO Horticulture Cultivars Performance Database
Global Horticulture Initiative – GlobalHort
Horticulture Information & Resource Library (archived 4 October 2018)
Plant and Soil Sciences eLibrary
Agronomy
Agriculture by type
Coast
A coast, also called the coastline, shoreline, or seashore, is the land next to the sea or the line that forms the boundary between the land and the ocean or a lake. Coasts are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore that is created. Earth contains roughly of coastline.
Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas, they harbor salt marshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic animals. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds.
In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of .
According to an atlas prepared by the United Nations, about 44% of the human population lives within 150 km of the sea. Due to its importance in society and its high population concentrations, the coast is important for major parts of the global food and economic system, and coasts provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism.
Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near-future to help mitigate climate change effects by uptake of atmospheric anthropogenic carbon dioxide.
However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, as well as related issues like coastal erosion, saltwater intrusion, and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems.
The interactive effects of climate change, habitat destruction, overfishing, and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in the population collapse of fishery stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021-2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
Since coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States.) Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coasts, while coasts that are more sheltered, such as those in a gulf or bay, are called sheltered coasts. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore).
Size
The Earth has approximately of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. About 2.86% of exclusive economic zones were part of marine protected areas.
The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.) whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems).
While there is general agreement in the scientific community regarding the definition of coast, in the political sphere, the delineation of the extents of a coast differ according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons.
Challenges of precisely measuring the coastline
Formation
Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean.
Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 m; mesotidal coasts with a tidal range of 2 to 4 m; and microtidal coasts with a tidal range of less than 2 m. The distinction between macrotidal and mesotidal coasts is more important. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than in macrotidal coasts.
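The tidal-range classification above reduces to a simple rule. The following is a minimal sketch assuming the 2 m and 4 m thresholds given above and a single representative tidal range per coast.

```python
# Classify a coast by tidal range, using the 2 m and 4 m thresholds
# described in the text above.
def classify_tidal_coast(tidal_range_m: float) -> str:
    if tidal_range_m > 4.0:
        return "macrotidal"
    if tidal_range_m >= 2.0:
        return "mesotidal"
    return "microtidal"

for r in (0.8, 3.1, 6.5):
    print(r, classify_tidal_coast(r))   # microtidal, mesotidal, macrotidal
```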
Waves erode coastline as they break on shore releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast.
Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for the coastlines of tropical islands.
Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias).
Importance for humans and ecosystems
Human settlements
More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals.
Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard.
Tourism
Coasts, especially those with beaches and warm water, attract tourists often leading to the development of seaside resort communities. In many island nations such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing.
Growth management and coastal management can be a challenge for coastal local authorities, who often struggle to provide the infrastructure required by new residents, and poor construction management practices often leave these communities and their infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices such as beach nourishment are used, or, when the coastal infrastructure is no longer financially sustainable, managed retreat is used to move communities away from the coast.
Ecosystem services
Types
Emergent coastline
According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change or local uplift. Emergent coastlines are identifiable by the coastal landforms, which are above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned" landforms, such as rias (drowned valleys) and fjords.
Concordant coastline
According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines feature distinctive landforms because the rocks are eroded by the ocean waves. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings.
High and low energy coasts
Parts of a coastline can be categorised as high energy coast or low energy coast. The distinguishing characteristics of a high energy coast are that the average wave energy is relatively high so that erosion of small grained material tends to exceed deposition, and consequently landforms like cliffs, headlands and wave-cut terraces develop. Low energy coasts are generally sheltered from waves, or in regions where the average wind wave and swell conditions are relatively mild. Low energy coasts typically change slowly, and tend to be depositional environments.
High energy coasts are exposed to the direct impact of waves and storms, and are generally erosional environments. High energy storm events can make large changes to a coastline, and can move significant amounts of sediment over a short period, sometimes changing a shoreline configuration.
Destructive and constructive waves
Swash is the shoreward flow of water after a wave breaks; backwash is the flow of water back down the beach. The relative strength of flow in the swash and backwash determines what size grains are deposited or eroded. This is dependent on how the wave breaks and the slope of the shore.
Depending on the form of the breaking wave, its energy can carry granular material up the beach and deposit it, or erode it by carrying more material down the slope than up it. Steep waves that are close together and break with the surf plunging down onto the shore slope expend much of their energy lifting the sediment. The weak swash does not carry it far up the slope, and the strong backwash carries it further down the slope, where it either settles in deeper water or is carried along the shore by a longshore current induced by an angled approach of the wave-front to the shore. These waves which erode the beach are called destructive waves.
Low waves that are further apart and break by spilling expend more of their energy in the swash, which carries particles up the beach, leaving less energy for the backwash to transport them back downslope, with a net constructive influence on the beach; these are called constructive waves.
Rivieras
Riviera is an Italian word for "shoreline", ultimately derived from the Latin ripa ("riverbank"). It came to be applied as a proper name to the coast of the Ligurian Sea, in the form riviera ligure, then shortened to riviera. Historically, the Ligurian Riviera extended from Capo Corvo (Punta Bianca) south of Genoa, north and west into what is now French territory past Monaco and sometimes as far as Marseilles. Today, this coast is divided into the Italian Riviera and the French Riviera, although the French use the term "Riviera" to refer to the Italian Riviera and call the French portion the "Côte d'Azur".
As a result of the fame of the Ligurian rivieras, the term came into English to refer to any shoreline, especially one that is sunny, topographically diverse and popular with tourists. Such places using the term include the Australian Riviera in Queensland and the Turkish Riviera along the Aegean Sea.
Other coastal categories
A cliffed coast or abrasion coast is one where marine action has produced steep declivities known as cliffs.
A flat coast is one where the land gradually descends into the sea.
A graded shoreline is one where wind and water action has produced a flat and straight coastline.
A primary coast is one which is mainly undergoing early-stage development by major long-term processes such as tectonism and climate change. A secondary coast is one where the primary processes have mostly stabilised, and more localised processes have become prominent.
An erosional coast is on average undergoing erosion, while a depositional coast is accumulating material.
An active coast is on the edge of a tectonic plate, while a passive coast is usually on a substantial continental shelf or away from a plate edge.
Landforms
The following articles describe some coastal landforms:
Barrier island
Bay
Headland
Cove
Peninsula
Cliff erosion
Much of the sediment deposited along a coast is the result of erosion of a surrounding cliff, or bluff. Sea cliffs retreat landward because of the constant undercutting of slopes by waves. If the slope/cliff being undercut is made of unconsolidated sediment it will erode at a much faster rate than a cliff made of bedrock.
A natural arch is formed when a headland is eroded through by waves.
Sea caves are made when certain rock beds are more susceptible to erosion than the surrounding rock beds because of different areas of weakness. These areas are eroded at a faster pace creating a hole or crevice that, through time, by means of wave action and erosion, becomes a cave.
A stack is formed when a headland is eroded away by wave and wind action or an arch collapses leaving an offshore remnant.
A stump is a shortened sea stack that has been eroded away or fallen because of instability.
Wave-cut notches are caused by the undercutting of overhanging slopes which leads to increased stress on cliff material and a greater probability that the slope material will fall. The fallen debris accumulates at the bottom of the cliff and is eventually removed by waves.
A wave-cut platform forms after erosion and retreat of a sea cliff has been occurring for a long time. Gently sloping wave-cut platforms develop early on in the first stages of cliff retreat. Later, the length of the platform decreases because the waves lose their energy as they break further offshore.
Coastal features formed by sediment
Beach
Beach cusps
Cuspate foreland
Dune system
Mudflat
Raised beach
Ria
Shoal
Spit
Strand plain
Surge channel
Tombolo
Coastal features formed by another feature
Estuary
Lagoon
Salt marsh
Mangrove forests
Kelp forests
Coral reefs
Oyster reefs
Other features on the coast
Concordant coastline
Discordant coastline
Fjord
Island
Island arc
Machair
Coastal waters
"Coastal waters" (or "coastal seas") is a rather general term used differently in different contexts, ranging geographically from the waters within a few kilometers of the coast, through to the entire continental shelf which may stretch for more than a hundred kilometers from land. Thus the term coastal waters is used in a slightly different way in discussions of legal and economic boundaries (see territorial waters and international waters) or when considering the geography of coastal landforms or the ecological systems operating through the continental shelf (marine coastal ecosystems). The research on coastal waters often divides into these separate areas too.
The dynamic fluid nature of the ocean means that all components of the whole ocean system are ultimately connected, although certain regional classifications are useful and relevant. The waters of the continental shelves represent such a region. The term "coastal waters" has been used in a wide variety of different ways in different contexts. In European Union environmental management it extends from the coast to just a few nautical miles while in the United States the US EPA considers this region to extend much further offshore.
"Coastal waters" has specific meanings in the context of commercial coastal shipping, and somewhat different meanings in the context of naval littoral warfare. Oceanographers and marine biologists have yet other takes. Coastal waters have a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf.
Similarly, the term littoral zone has no single definition. It is the part of a sea, lake, or river that is close to the shore. In coastal environments, the littoral zone extends from the high water mark, which is rarely inundated, to shoreline areas that are permanently submerged.
Coastal waters can be threatened by coastal eutrophication and harmful algal blooms.
In geology
The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past.
Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles.
Some of the best-studied shoreline deposits in the world are found along the former western shore of the Western Interior Seaway, a shallow sea that flooded central North America during the late Cretaceous Period (about 100 to 66 million years ago). These are beautifully exposed along the Book Cliffs of Utah and Colorado.
Geologic processes
The following articles describe the various geologic processes that affect a coastal zone:
Attrition
Currents
Denudation
Deposition
Erosion
Flooding
Longshore drift
Marine sediments
Saltation
Sea level change
eustatic
isostatic
Sedimentation
Coastal sediment supply
sediment transport
solution
subaerial processes
suspension
Tides
Water waves
diffraction
refraction
wave breaking
wave shoaling
Weathering
Wildlife
Animals
Larger animals that live in coastal areas include puffins, sea turtles and rockhopper penguins, among many others. Sea snails and various kinds of barnacles live on rocky coasts and scavenge on food deposited by the sea. Some coastal animals are accustomed to humans in developed areas, such as dolphins and seagulls that eat food thrown to them by tourists. Since coastal areas are all part of the littoral zone, there is a profusion of marine life found just off-coast, including sessile animals such as corals, sponges and sea anemones, as well as mussels, starfish, seaweeds and fish.
There are many kinds of seabirds on various coasts. These include pelicans and cormorants, who join up with terns and oystercatchers to forage for fish and shellfish. There are sea lions on the coast of Wales and other countries.
Coastal fish
Plants
Many coastal areas are famous for their kelp beds. Kelp is a fast-growing seaweed that can grow up to half a meter a day in ideal conditions. Mangroves, seagrasses, macroalgal beds, and salt marshes are important types of coastal vegetation in tropical and temperate environments. Restinga is another type of coastal vegetation.
Threats
Coasts also face many human-induced environmental impacts and coastal development hazards. The most important ones are:
Pollution which can be in the form of water pollution, nutrient pollution (leading to coastal eutrophication and harmful algal blooms), oil spills or marine debris that is contaminating coasts with plastic and other trash.
Sea level rise, and associated issues like coastal erosion and saltwater intrusion.
Pollution
The pollution of coastlines is connected to marine pollution which can occur from a number of sources: Marine debris (garbage and industrial debris); the transportation of petroleum in tankers, increasing the probability of large oil spills; small oil spills created by large and small vessels, which flush bilge water into the ocean.
Marine pollution
Marine debris
Microplastics
Sea level rise due to climate change
Global goals
International attention to address the threats of coasts has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
See also
Bank (geography)
Beach cleaning
Coastal and Estuarine Research Federation
European Atlas of the Seas
Intertidal zone
Land reclamation
List of countries by length of coastline
List of U.S. states by coastline
Offshore or Intertidal zone
Ballantine Scale
Coastal path
Shorezone
References
Further reading
External links
Woods Hole Oceanographic Institution - organization dedicated to ocean research, exploration, and education
Coastal and oceanic landforms
Coastal geography
Oceanographical terminology
Articles containing video clips
Natural selection
Natural selection is the differential survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularised the term "natural selection", contrasting it with artificial selection, which is intentional, whereas natural selection is not.
Variation of traits, both genotypic and phenotypic, exists within all populations of organisms. However, some traits are more likely to facilitate survival and reproductive success, and are thus passed on to the next generation. These traits can also become more common within a population if the environment that favours them remains stable. If new traits become more favoured because of changes in a specific niche, microevolution occurs; if new traits become more favoured because of changes in the broader environment, macroevolution occurs. Sometimes, new species can arise, especially if the new traits are radically different from those possessed by their predecessors.
The likelihood of these traits being 'selected' and passed down is determined by many factors. Some traits are passed down because they suit the organism's environment well. Others are passed down because they are actively preferred by mating partners, which is known as sexual selection. Traits that impose the lowest cost on female reproductive health may also be favoured, which is known as fecundity selection.
Natural selection is a cornerstone of modern biology. The concept, published by Darwin and Alfred Russel Wallace in a joint presentation of papers in 1858, was elaborated in Darwin's influential 1859 book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. He described natural selection as analogous to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favoured for reproduction. The concept of natural selection originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, science had yet to develop modern theories of genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical genetics formed the modern synthesis of the mid-20th century. The addition of molecular genetics has led to evolutionary developmental biology, which explains evolution at the molecular level. While genotypes can slowly change by random genetic drift, natural selection remains the primary explanation for adaptive evolution.
Historical development
Pre-Darwinian theories
Several philosophers of the classical era, including Empedocles and his intellectual successor, the Roman poet Lucretius, expressed the idea that nature produces a huge variety of creatures, randomly, and that only those creatures that manage to provide for themselves and reproduce successfully persist. Empedocles' idea that organisms arose entirely by the incidental workings of causes such as heat and cold was criticised by Aristotle in Book II of Physics. He posited natural teleology in its place, and believed that form was achieved for a purpose, citing the regularity of heredity in species as proof. Nevertheless, he accepted in his biology that new types of animals, monstrosities (τερας), can occur in very rare instances (Generation of Animals, Book IV). As quoted in Darwin's 1872 edition of The Origin of Species, Aristotle considered whether different forms (e.g., of teeth) might have appeared accidentally, but only the useful forms survived:
But Aristotle rejected this possibility in the next paragraph, making clear that he is talking about the development of animals as embryos with the phrase "either invariably or normally come about", not the origin of species:
The struggle for existence was later described by the Islamic writer Al-Jahiz in the 9th century, particularly in the context of top-down population regulation, but not in reference to individual variation or natural selection.
At the turn of the 16th century Leonardo da Vinci collected a set of fossils of ammonites as well as other biological material. He extensively reasoned in his writings that the shapes of animals are not given once and forever by the "upper power" but instead are generated in different forms naturally and then selected for reproduction by their compatibility with the environment.
The more recent classical arguments were reintroduced in the 18th century by Pierre Louis Maupertuis and others, including Darwin's grandfather, Erasmus Darwin.
Until the early 19th century, the prevailing view in Western societies was that differences between individuals of a species were uninteresting departures from their Platonic ideals (or typus) of created kinds. However, the theory of uniformitarianism in geology promoted the idea that simple, weak forces could act continuously over long periods of time to produce radical changes in the Earth's landscape. The success of this theory raised awareness of the vast scale of geological time and made plausible the idea that tiny, virtually imperceptible changes in successive generations could produce consequences on the scale of differences between species.
The early 19th-century zoologist Jean-Baptiste Lamarck suggested the inheritance of acquired characteristics as a mechanism for evolutionary change; adaptive traits acquired by an organism during its lifetime could be inherited by that organism's progeny, eventually causing transmutation of species. This theory, Lamarckism, was an influence on the Soviet biologist Trofim Lysenko's ill-fated antagonism to mainstream genetic theory as late as the mid-20th century.
Between 1835 and 1837, the zoologist Edward Blyth worked on the area of variation, artificial selection, and how a similar process occurs in nature. Darwin acknowledged Blyth's ideas in the first chapter on variation of On the Origin of Species.
Darwin's theory
In 1859, Charles Darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. He defined natural selection as the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. If the variations are heritable, then differential reproductive success leads to the evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species.
Darwin's ideas were inspired by the observations that he had made on the second voyage of HMS Beagle (1831–1836), and by the work of a political economist, Thomas Robert Malthus, who, in An Essay on the Principle of Population (1798), noted that population (if unchecked) increases exponentially, whereas the food supply grows only arithmetically; thus, inevitable limitations of resources would have demographic implications, leading to a "struggle for existence". When Darwin read Malthus in 1838 he was already primed by his work as a naturalist to appreciate the "struggle for existence" in nature. It struck him that as population outgrew resources, "favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The result of this would be the formation of new species." Darwin wrote:
Once he had his theory, Darwin was meticulous about gathering and refining evidence before making his idea public. He was in the process of writing his "big book" to present his research when the naturalist Alfred Russel Wallace independently conceived of the principle and described it in an essay he sent to Darwin to forward to Charles Lyell. Lyell and Joseph Dalton Hooker decided to present his essay together with unpublished writings that Darwin had sent to fellow naturalists, and On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection was read to the Linnean Society of London announcing co-discovery of the principle in July 1858. Darwin published a detailed account of his evidence and conclusions in On the Origin of Species in 1859. In the 3rd edition of 1861 Darwin acknowledged that others—like William Charles Wells in 1813, and Patrick Matthew in 1831—had proposed similar ideas, but had neither developed them nor presented them in notable scientific publications.
Darwin thought of natural selection by analogy to how farmers select crops or livestock for breeding, which he called "artificial selection"; in his early manuscripts he referred to a "Nature" which would do the selection. At the time, other mechanisms of evolution such as evolution by genetic drift were not yet explicitly formulated, and Darwin believed that selection was likely only part of the story: "I am convinced that Natural Selection has been the main but not exclusive means of modification." In a letter to Charles Lyell in September 1860, Darwin regretted the use of the term "Natural Selection", preferring the term "Natural Preservation".
For Darwin and his contemporaries, natural selection was in essence synonymous with evolution by natural selection. After the publication of On the Origin of Species, educated people generally accepted that evolution had occurred in some form. However, natural selection remained controversial as a mechanism, partly because it was perceived to be too weak to explain the range of observed characteristics of living organisms, and partly because even supporters of evolution balked at its "unguided" and non-progressive nature, a response that has been characterised as the single most significant impediment to the idea's acceptance. However, some thinkers enthusiastically embraced natural selection; after reading Darwin, Herbert Spencer introduced the phrase survival of the fittest, which became a popular summary of the theory. The fifth edition of On the Origin of Species published in 1869 included Spencer's phrase as an alternative to natural selection, with credit given: "But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient." Although the phrase is still often used by non-biologists, modern biologists avoid it because it is tautological if "fittest" is read to mean "functionally superior" and is applied to individuals rather than considered as an averaged quantity over populations.
The modern synthesis
Natural selection relies crucially on the idea of heredity, but it was developed before the basic concepts of genetics were established. Although the Moravian monk Gregor Mendel, the father of modern genetics, was a contemporary of Darwin's, his work lay in obscurity, only being rediscovered in 1900. With the early 20th-century integration of evolution with Mendel's laws of inheritance, the so-called modern synthesis, scientists generally came to accept natural selection. The synthesis grew from advances in different fields. Ronald Fisher developed the required mathematical language and wrote The Genetical Theory of Natural Selection (1930). J. B. S. Haldane introduced the concept of the "cost" of natural selection.
Sewall Wright elucidated the nature of selection and adaptation. In his book Genetics and the Origin of Species (1937), Theodosius Dobzhansky established the idea that mutation, once seen as a rival to selection, actually supplied the raw material for natural selection by creating genetic diversity. Ernst Mayr recognised the key importance of reproductive isolation for speciation in his Systematics and the Origin of Species (1942). W. D. Hamilton conceived of kin selection in 1964. This synthesis cemented natural selection as the foundation of evolutionary theory, where it remains today.
A second synthesis was brought about at the end of the 20th century by advances in molecular genetics, creating the field of evolutionary developmental biology ("evo-devo"), which seeks to explain the evolution of form in terms of the genetic regulatory programs which control the development of the embryo at molecular level. Natural selection is here understood to act on embryonic development to change the morphology of the adult body.
Terminology
The term natural selection is most often defined to operate on heritable traits, because these directly participate in evolution. However, natural selection is "blind" in the sense that changes in phenotype can give a reproductive advantage regardless of whether or not the trait is heritable. Following Darwin's primary usage, the term is used to refer both to the evolutionary consequence of blind selection and to its mechanisms. It is sometimes helpful to explicitly distinguish between selection's mechanisms and its effects; when this distinction is important, scientists define "(phenotypic) natural selection" specifically as "those mechanisms that contribute to the selection of individuals that reproduce", without regard to whether the basis of the selection is heritable. Traits that cause greater reproductive success of an organism are said to be selected for, while those that reduce success are selected against.
Mechanism
Heritable variation, differential reproduction
Natural variation occurs among the individuals of any population of organisms. Some differences may improve an individual's chances of surviving and reproducing such that its lifetime reproductive rate is increased, which means that it leaves more offspring. If the traits that give these individuals a reproductive advantage are also heritable, that is, passed from parent to offspring, then there will be differential reproduction, that is, a slightly higher proportion of individuals with the advantageous trait (faster rabbits or more efficient algae, for example) in the next generation. Even if the reproductive advantage is very slight, over many generations any advantageous heritable trait becomes dominant in the population. In this way the natural environment of an organism "selects for" traits that confer a reproductive advantage, causing evolutionary change, as Darwin described. This gives the appearance of purpose, but in natural selection there is no intentional choice. Artificial selection is purposive where natural selection is not, though biologists often use teleological language to describe it.
The peppered moth exists in both light and dark colours in Great Britain, but during the Industrial Revolution, many of the trees on which the moths rested became blackened by soot, giving the dark-coloured moths an advantage in hiding from predators. This gave dark-coloured moths a better chance of surviving to produce dark-coloured offspring, and in just fifty years from the first dark moth being caught, nearly all of the moths in industrial Manchester were dark. The balance was reversed by the effect of the Clean Air Act 1956, and the dark moths became rare again, demonstrating the influence of natural selection on peppered moth evolution. A recent study, using image analysis and avian vision models, showed that pale individuals more closely match lichen backgrounds than dark morphs, and for the first time quantified the effect of the moths' camouflage on their risk of predation.
Fitness
The concept of fitness is central to natural selection. In broad terms, individuals that are more "fit" have better potential for survival, as in the well-known phrase "survival of the fittest", but the precise meaning of the term is much more subtle. Modern evolutionary theory defines fitness not by how long an organism lives, but by how successful it is at reproducing. If an organism lives half as long as others of its species, but has twice as many offspring surviving to adulthood, its genes become more common in the adult population of the next generation. Though natural selection acts on individuals, the effects of chance mean that fitness can only really be defined "on average" for the individuals within a population. The fitness of a particular genotype corresponds to the average effect on all individuals with that genotype.
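A minimal sketch of fitness as an average over a genotype (the genotype labels and offspring counts below are invented for the example) computes each genotype's absolute fitness as the mean reproductive success of its carriers, and its relative fitness by scaling against the fittest genotype:

```python
from collections import defaultdict

# Hypothetical data: (genotype, number of offspring surviving to adulthood)
individuals = [
    ("AA", 2), ("AA", 3), ("AA", 1),
    ("Aa", 4), ("Aa", 2),
    ("aa", 1), ("aa", 0), ("aa", 1),
]

# Absolute fitness of a genotype: the average reproductive success of its carriers
offspring_by_genotype = defaultdict(list)
for genotype, offspring in individuals:
    offspring_by_genotype[genotype].append(offspring)
absolute = {g: sum(v) / len(v) for g, v in offspring_by_genotype.items()}

# Relative fitness: scale so that the fittest genotype has fitness 1
w_max = max(absolute.values())
relative = {g: round(w / w_max, 2) for g, w in absolute.items()}
print(absolute)   # {'AA': 2.0, 'Aa': 3.0, 'aa': 0.666...}
print(relative)   # {'AA': 0.67, 'Aa': 1.0, 'aa': 0.22}
```

The averaging step is the point of the example: an individual's own offspring count is subject to chance, but the genotype's fitness is defined over all of its carriers.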
A distinction must be made between the concept of "survival of the fittest" and "improvement in fitness". "Survival of the fittest" does not give an "improvement in fitness", it only represents the removal of the less fit variants from a population. A mathematical example of "survival of the fittest" is given by Haldane in his paper "The Cost of Natural Selection". Haldane called this process "substitution" or, more commonly in biology, "fixation". This is correctly described by the differential survival and reproduction of individuals due to differences in phenotype. On the other hand, "improvement in fitness" is not dependent on the differential survival and reproduction of individuals due to differences in phenotype; it is dependent on the absolute survival of the particular variant. The probability of a beneficial mutation occurring on some member of a population depends on the total number of replications of that variant. The mathematics of "improvement in fitness" was described by Kleinman. An empirical example of "improvement in fitness" is given by the Kishony Mega-plate experiment. In this experiment, "improvement in fitness" depends on the number of replications of the particular variant for a new variant to appear that is capable of growing in the next higher drug concentration region. Fixation or substitution is not required for this "improvement in fitness". On the other hand, "improvement in fitness" can occur in an environment where "survival of the fittest" is also acting. Richard Lenski's classic E. coli long-term evolution experiment is an example of adaptation in a competitive environment ("improvement in fitness" during "survival of the fittest"). The probability of a beneficial mutation occurring on some member of the lineage to give improved fitness is slowed by the competition. The variant which is a candidate for a beneficial mutation in this limited carrying-capacity environment must first out-compete the "less fit" variants in order to accumulate the requisite number of replications for there to be a reasonable probability of that beneficial mutation occurring.
Competition
In biology, competition is an interaction between organisms in which the fitness of one is lowered by the presence of another. This may be because both rely on a limited supply of a resource such as food, water, or territory. Competition may be within or between species, and may be direct or indirect. Species less suited to compete should in theory either adapt or die out, since competition plays a powerful role in natural selection, but according to the "room to roam" theory it may be less important than expansion among larger clades.
Competition is modelled by r/K selection theory, which is based on Robert MacArthur and E. O. Wilson's work on island biogeography. In this theory, selective pressures drive evolution in one of two stereotyped directions: r- or K-selection. These terms, r and K, can be illustrated in a logistic model of population dynamics:

dN/dt = rN(1 − N/K)
where r is the growth rate of the population (N), and K is the carrying capacity of its local environmental setting. Typically, r-selected species exploit empty niches, and produce many offspring, each with a relatively low probability of surviving to adulthood. In contrast, K-selected species are strong competitors in crowded niches, and invest more heavily in much fewer offspring, each with a relatively high probability of surviving to adulthood.
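A short numerical sketch (with illustrative parameter values that are not part of the text) shows the behaviour of this model: growth is nearly exponential at rate r while the population is far below K, and levels off as it approaches the carrying capacity:

```python
def logistic_growth(n0, r, k, dt=0.1, steps=200):
    """Integrate dN/dt = r*N*(1 - N/K) with a simple Euler step."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
        trajectory.append(n)
    return trajectory

# Illustrative values only: a small founding population in an empty niche
traj = logistic_growth(n0=10, r=0.5, k=1000)
print(round(traj[0]), round(traj[len(traj) // 2]), round(traj[-1]))
# Growth is near-exponential while N is far below K, then levels off near K
```

In r/K terms, the early, resource-rich phase of such a trajectory favours r-selected strategies, while life near the carrying capacity favours K-selected strategies.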
Classification
Natural selection can act on any heritable phenotypic trait, and selective pressure can be produced by any aspect of the environment, including sexual selection and competition with members of the same or other species. However, this does not imply that natural selection is always directional and results in adaptive evolution; natural selection often results in the maintenance of the status quo by eliminating less fit variants.
Selection can be classified in several different ways, such as by its effect on a trait, on genetic diversity, by the life cycle stage where it acts, by the unit of selection, or by the resource being competed for.
By effect on a trait
Selection has different effects on traits. Stabilizing selection acts to hold a trait at a stable optimum, and in the simplest case all deviations from this optimum are selectively disadvantageous. Directional selection favours extreme values of a trait. The uncommon disruptive selection also acts during transition periods when the current mode is sub-optimal, but alters the trait in more than one direction. In particular, if the trait is quantitative and univariate then both higher and lower trait levels are favoured. Disruptive selection can be a precursor to speciation.
By effect on genetic diversity
Alternatively, selection can be divided according to its effect on genetic diversity. Purifying or negative selection acts to remove genetic variation from the population, and is opposed by de novo mutation, which introduces new variation. In contrast, balancing selection acts to maintain genetic variation in a population, even in the absence of de novo mutation, for example by negative frequency-dependent selection. Another mechanism is heterozygote advantage, where individuals with two different alleles have a selective advantage over individuals with just one allele. The polymorphism at the human ABO blood group locus has been explained in this way.
By life cycle stage
Another option is to classify selection by the life cycle stage at which it acts. Some biologists recognise just two types: viability (or survival) selection, which acts to increase an organism's probability of survival, and fecundity (or fertility or reproductive) selection, which acts to increase the rate of reproduction, given survival. Others split the life cycle into further components of selection. Thus viability and survival selection may be defined separately and respectively as acting to improve the probability of survival before and after reproductive age is reached, while fecundity selection may be split into additional sub-components including sexual selection, gametic selection, acting on gamete survival, and compatibility selection, acting on zygote formation.
By unit of selection
Selection can also be classified by the level or unit of selection. Individual selection acts on the individual, in the sense that adaptations are "for" the benefit of the individual, and result from selection among individuals. Gene selection acts directly at the level of the gene. In kin selection and intragenomic conflict, gene-level selection provides a more apt explanation of the underlying process. Group selection, if it occurs, acts on groups of organisms, on the assumption that groups replicate and mutate in an analogous way to genes and individuals. There is an ongoing debate over the degree to which group selection occurs in nature.
By resource being competed for
Finally, selection can be classified according to the resource being competed for. Sexual selection results from competition for mates. Sexual selection typically proceeds via fecundity selection, sometimes at the expense of viability. Ecological selection is natural selection via any means other than sexual selection, such as kin selection, competition, and infanticide. Following Darwin, natural selection is sometimes defined as ecological selection, in which case sexual selection is considered a separate mechanism.
Sexual selection as first articulated by Darwin (using the example of the peacock's tail) refers specifically to competition for mates, which can be intrasexual, between individuals of the same sex, that is male–male competition, or intersexual, where one gender chooses mates, most often with males displaying and females choosing. However, in some species, mate choice is primarily by males, as in some fishes of the family Syngnathidae.
Phenotypic traits can be displayed in one sex and desired in the other sex, causing a positive feedback loop called a Fisherian runaway, for example, the extravagant plumage of some male birds such as the peacock. An alternative theory, also proposed by Ronald Fisher in 1930, is the sexy son hypothesis: females choose promiscuous fathers for their children so that their promiscuous sons will in turn give them large numbers of grandchildren. Aggression between members of the same sex is sometimes associated with very distinctive features, such as the antlers of stags, which are used in combat with other stags. More generally, intrasexual selection is often associated with sexual dimorphism, including differences in body size between males and females of a species.
Arms races
Natural selection is seen in action in the development of antibiotic resistance in microorganisms. Since the discovery of penicillin in 1928, antibiotics have been used to fight bacterial diseases. The widespread misuse of antibiotics has selected for microbial resistance to antibiotics in clinical use, to the point that the methicillin-resistant Staphylococcus aureus (MRSA) has been described as a "superbug" because of the threat it poses to health and its relative invulnerability to existing drugs. Response strategies typically include the use of different, stronger antibiotics; however, new strains of MRSA have recently emerged that are resistant even to these drugs. This is an evolutionary arms race, in which bacteria develop strains less susceptible to antibiotics, while medical researchers attempt to develop new antibiotics that can kill them. A similar situation occurs with pesticide resistance in plants and insects. Arms races are not necessarily induced by man; a well-documented example involves the spread of a gene in the butterfly Hypolimnas bolina suppressing male-killing activity by Wolbachia bacteria parasites on the island of Samoa, where the spread of the gene is known to have occurred over a period of just five years.
Evolution by means of natural selection
A prerequisite for natural selection to result in adaptive evolution, novel traits and speciation is the presence of heritable genetic variation that results in fitness differences. Genetic variation is the result of mutations, genetic recombinations and alterations in the karyotype (the number, shape, size and internal arrangement of the chromosomes). Any of these changes might have an effect that is highly advantageous or highly disadvantageous, but large effects are rare. In the past, most changes in the genetic material were considered neutral or close to neutral because they occurred in noncoding DNA or resulted in a synonymous substitution. However, many mutations in non-coding DNA have deleterious effects. Although both mutation rates and average fitness effects of mutations are dependent on the organism, a majority of mutations in humans are slightly deleterious.
Some mutations occur in "toolkit" or regulatory genes. Changes in these often have large effects on the phenotype of the individual because they regulate the function of many other genes. Most, but not all, mutations in regulatory genes result in non-viable embryos. Some nonlethal regulatory mutations occur in HOX genes in humans, which can result in a cervical rib or polydactyly, an increase in the number of fingers or toes. When such mutations result in a higher fitness, natural selection favours these phenotypes and the novel trait spreads in the population.
Established traits are not immutable; traits that have high fitness in one environmental context may be much less fit if environmental conditions change. In the absence of natural selection to preserve such a trait, it becomes more variable and deteriorates over time, possibly resulting in a vestigial manifestation of the trait, also called evolutionary baggage. In many circumstances, the apparently vestigial structure may retain a limited functionality, or may be co-opted for other advantageous traits in a phenomenon known as preadaptation. A famous example of a vestigial structure, the eye of the blind mole-rat, is believed to retain function in photoperiod perception.
Speciation
Speciation requires a degree of reproductive isolation—that is, a reduction in gene flow. However, it is intrinsic to the concept of a species that hybrids are selected against, opposing the evolution of reproductive isolation, a problem that was recognised by Darwin. The problem does not occur in allopatric speciation with geographically separated populations, which can diverge with different sets of mutations. E. B. Poulton realized in 1903 that reproductive isolation could evolve through divergence, if each lineage acquired a different, incompatible allele of the same gene. Selection against the heterozygote would then directly create reproductive isolation, leading to the Bateson–Dobzhansky–Muller model, further elaborated by H. Allen Orr and Sergey Gavrilets. With reinforcement, however, natural selection can favor an increase in pre-zygotic isolation, influencing the process of speciation directly.
Genetic basis
Genotype and phenotype
Natural selection acts on an organism's phenotype, or physical characteristics. Phenotype is determined by an organism's genetic make-up (genotype) and the environment in which the organism lives. When different organisms in a population possess different versions of a gene for a certain trait, each of these versions is known as an allele. It is this genetic variation that underlies differences in phenotype. An example is the ABO blood type antigens in humans, where three alleles govern the phenotype.
Some traits are governed by only a single gene, but most traits are influenced by the interactions of many genes. A variation in one of the many genes that contributes to a trait may have only a small effect on the phenotype; together, these genes can produce a continuum of possible phenotypic values.
Directionality of selection
When some component of a trait is heritable, selection alters the frequencies of the different alleles, or variants of the gene that produces the variants of the trait. Selection can be divided into three classes, on the basis of its effect on allele frequencies: directional, stabilizing, and disruptive selection. Directional selection occurs when an allele has a greater fitness than others, so that it increases in frequency, gaining an increasing share in the population. This process can continue until the allele is fixed and the entire population shares the fitter phenotype. Far more common is stabilizing selection, which lowers the frequency of alleles that have a deleterious effect on the phenotype—that is, produce organisms of lower fitness. This process can continue until the allele is eliminated from the population. Stabilizing selection conserves functional genetic features, such as protein-coding genes or regulatory sequences, over time by selective pressure against deleterious variants. Disruptive (or diversifying) selection is selection favoring extreme trait values over intermediate trait values. Disruptive selection may cause sympatric speciation through niche partitioning.
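How directional selection shifts allele frequencies can be sketched with a standard one-locus, two-allele haploid model (a simplified illustration; the selection coefficient and starting frequency below are assumptions chosen for the example, not values from the text):

```python
def directional_selection(p0, s, generations):
    """One-locus, two-allele haploid model: allele A has fitness 1 + s,
    allele a has fitness 1. Returns the frequency of A each generation."""
    p = p0
    freqs = [p]
    for _ in range(generations):
        w_bar = p * (1 + s) + (1 - p) * 1.0   # mean fitness of the population
        p = p * (1 + s) / w_bar               # frequency of A after selection
        freqs.append(p)
    return freqs

# A allele starting at 1% frequency with a 5% fitness advantage
freqs = directional_selection(p0=0.01, s=0.05, generations=400)
print(f"start {freqs[0]:.3f}, after 200 generations {freqs[200]:.3f}, "
      f"after 400 generations {freqs[-1]:.3f}")
# The favoured allele rises from rarity toward fixation (frequency 1)
```

Even a modest 5% advantage carries the allele to near-fixation within a few hundred generations, which is the directional case described above; stabilizing selection instead removes the deviant alleles, and disruptive selection favours both extremes.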
Some forms of balancing selection do not result in fixation, but maintain an allele at intermediate frequencies in a population. This can occur in diploid species (with pairs of chromosomes) when heterozygous individuals (with just one copy of the allele) have a higher fitness than homozygous individuals (with two copies). This is called heterozygote advantage or over-dominance, of which the best-known example is the resistance to malaria in humans heterozygous for sickle-cell anaemia. Maintenance of allelic variation can also occur through disruptive or diversifying selection, which favours genotypes that depart from the average in either direction (that is, the opposite of over-dominance), and can result in a bimodal distribution of trait values. Finally, balancing selection can occur through frequency-dependent selection, where the fitness of one particular phenotype depends on the distribution of other phenotypes in the population. The principles of game theory have been applied to understand the fitness distributions in these situations, particularly in the study of kin selection and the evolution of reciprocal altruism.
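For heterozygote advantage, a standard single-locus result makes the balance concrete (an illustrative textbook formula; the selection coefficients s and t are introduced here for the example and are not used elsewhere in this article). If genotypes AA, Aa and aa have relative fitnesses 1 − s, 1 and 1 − t respectively, selection holds the frequency of the A allele at the equilibrium

p̂ = t / (s + t)

so the allele is maintained at an intermediate frequency whenever both homozygotes are less fit than the heterozygote (s > 0 and t > 0). In the sickle-cell case, the fitness costs of anaemia and of malaria susceptibility play the roles of the two selection coefficients.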
Selection, genetic variation, and drift
A portion of all genetic variation is functionally neutral, producing no phenotypic effect or significant difference in fitness. Motoo Kimura's neutral theory of molecular evolution by genetic drift proposes that this variation accounts for a large fraction of observed genetic diversity. Neutral events can radically reduce genetic variation through population bottlenecks, which among other things can cause the founder effect in initially small new populations. When genetic variation does not result in differences in fitness, selection cannot directly affect the frequency of such variation. As a result, the genetic variation at those sites is higher than at sites where variation does influence fitness. However, after a period with no new mutations, the genetic variation at these sites is eliminated due to genetic drift. Natural selection reduces genetic variation by eliminating maladapted individuals, and consequently the mutations that caused the maladaptation. At the same time, new mutations occur, resulting in a mutation–selection balance. The exact outcome of the two processes depends both on the rate at which new mutations occur and on the strength of the natural selection, which is a function of how unfavourable the mutation proves to be.
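The mutation–selection balance can be made concrete with a standard approximation (an illustrative textbook result; the symbols μ and s are introduced here for the example and are not defined elsewhere in this article). For a deleterious allele that arises by mutation at rate μ per generation and reduces fitness by a selection coefficient s, the equilibrium frequency q̂ is approximately

q̂ ≈ μ / s (if the harmful effect is fully expressed in heterozygotes)
q̂ ≈ √(μ / s) (if the allele is fully recessive)

so weakly selected or recessive mutations persist at noticeably higher frequencies than strongly selected dominant ones.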
Genetic linkage occurs when the loci of two alleles are close on a chromosome. During the formation of gametes, recombination reshuffles the alleles. The chance that such a reshuffle occurs between two alleles is inversely related to the distance between them. Selective sweeps occur when an allele becomes more common in a population as a result of positive selection. As the prevalence of one allele increases, closely linked alleles can also become more common by "genetic hitchhiking", whether they are neutral or even slightly deleterious. A strong selective sweep results in a region of the genome where the positively selected haplotype (the allele and its neighbours) are in essence the only ones that exist in the population. Selective sweeps can be detected by measuring linkage disequilibrium, or whether a given haplotype is overrepresented in the population. Since a selective sweep also results in selection of neighbouring alleles, the presence of a block of strong linkage disequilibrium might indicate a 'recent' selective sweep near the centre of the block.
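Linkage disequilibrium itself reduces to a short calculation. A minimal sketch (the haplotype counts below are invented purely for illustration) computes the standard statistics D and r² from observed two-locus haplotype frequencies:

```python
def linkage_disequilibrium(counts):
    """counts: dict of haplotype counts for keys 'AB', 'Ab', 'aB', 'ab'.
    Returns (D, r_squared) for the two loci."""
    n = sum(counts.values())
    p_ab = counts["AB"] / n                   # frequency of the AB haplotype
    p_a = (counts["AB"] + counts["Ab"]) / n   # frequency of allele A
    p_b = (counts["AB"] + counts["aB"]) / n   # frequency of allele B
    d = p_ab - p_a * p_b                      # deviation from independent assortment
    r_squared = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, r_squared

# Hypothetical sample in which A and B tend to occur on the same haplotype
d, r2 = linkage_disequilibrium({"AB": 60, "Ab": 10, "aB": 10, "ab": 20})
print(f"D = {d:.3f}, r^2 = {r2:.3f}")   # nonzero values indicate association
```

Values near zero indicate that alleles at the two loci combine at random; a block of markedly elevated values across neighbouring markers is the kind of signal used to flag a recent selective sweep.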
Background selection is the opposite of a selective sweep. If a specific site experiences strong and persistent purifying selection, linked variation tends to be weeded out along with it, producing a region in the genome of low overall variability. Because background selection is a result of deleterious new mutations, which can occur randomly in any haplotype, it does not produce clear blocks of linkage disequilibrium, although with low recombination it can still lead to slightly negative linkage disequilibrium overall.
Impact
Darwin's ideas, along with those of Adam Smith and Karl Marx, had a profound influence on 19th century thought, including his radical claim that "elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner" evolved from the simplest forms of life by a few simple principles. This inspired some of Darwin's most ardent supporters—and provoked the strongest opposition. Natural selection had the power, according to Stephen Jay Gould, to "dethrone some of the deepest and most traditional comforts of Western thought", such as the belief that humans have a special place in the world.
In the words of the philosopher Daniel Dennett, "Darwin's dangerous idea" of evolution by natural selection is a "universal acid," which cannot be kept restricted to any vessel or container, as it soon leaks out, working its way into ever-wider surroundings. Thus, in the last decades, the concept of natural selection has spread from evolutionary biology to other disciplines, including evolutionary computation, quantum Darwinism, evolutionary economics, evolutionary epistemology, evolutionary psychology, and cosmological natural selection. This unlimited applicability has been called universal Darwinism.
Origin of life
How life originated from inorganic matter remains an unresolved problem in biology. One prominent hypothesis is that life first appeared in the form of short self-replicating RNA polymers. On this view, life may have come into existence when RNA chains first experienced the basic conditions, as conceived by Charles Darwin, for natural selection to operate. These conditions are: heritability, variation of type, and competition for limited resources. The fitness of an early RNA replicator would likely have been a function of adaptive capacities that were intrinsic (i.e., determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities could logically have been: (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type), (2) the capacity to avoid decay, and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations (including those configurations with ribozyme activity) of the RNA replicators that, in turn, would have been encoded in their individual nucleotide sequences.
Cell and molecular biology
In 1881, the embryologist Wilhelm Roux published Der Kampf der Theile im Organismus (The Struggle of Parts in the Organism) in which he suggested that the development of an organism results from a Darwinian competition between the parts of the embryo, occurring at all levels, from molecules to organs. In recent years, a modern version of this theory has been proposed by Jean-Jacques Kupiec. According to this cellular Darwinism, random variation at the molecular level generates diversity in cell types whereas cell interactions impose a characteristic order on the developing embryo.
Social and psychological theory
The social implications of the theory of evolution by natural selection also became the source of continuing controversy. Friedrich Engels, a German political philosopher and co-originator of the ideology of communism, wrote in 1872 that "Darwin did not know what a bitter satire he wrote on mankind, and especially on his countrymen, when he showed that free competition, the struggle for existence, which the economists celebrate as the highest historical achievement, is the normal state of the animal kingdom." Herbert Spencer and the eugenics advocate Francis Galton's interpretation of natural selection as necessarily progressive, leading to supposed advances in intelligence and civilisation, became a justification for colonialism, eugenics, and social Darwinism. For example, in 1940, Konrad Lorenz, in writings that he subsequently disowned, used the theory as a justification for policies of the Nazi state. He wrote "... selection for toughness, heroism, and social utility ... must be accomplished by some human institution, if mankind, in default of selective factors, is not to be ruined by domestication-induced degeneracy. The racial idea as the basis of our state has already accomplished much in this respect." Others have developed ideas that human societies and culture evolve by mechanisms analogous to those that apply to evolution of species.
More recently, work among anthropologists and psychologists has led to the development of sociobiology and later of evolutionary psychology, a field that attempts to explain features of human psychology in terms of adaptation to the ancestral environment. The most prominent example of evolutionary psychology, notably advanced in the early work of Noam Chomsky and later by Steven Pinker, is the hypothesis that the human brain has adapted to acquire the grammatical rules of natural language. Other aspects of human behaviour and social structures, from specific cultural norms such as incest avoidance to broader patterns such as gender roles, have been hypothesised to have similar origins as adaptations to the early environment in which modern humans evolved. By analogy to the action of natural selection on genes, the concept of memes—"units of cultural transmission," or culture's equivalents of genes undergoing selection and recombination—has arisen, first described in this form by Richard Dawkins in 1976 and subsequently expanded upon by philosophers such as Daniel Dennett as explanations for complex cultural activities, including human consciousness.
Information and systems theory
In 1922, Alfred J. Lotka proposed that natural selection might be understood as a physical principle that could be described in terms of the use of energy by a system, a concept later developed by Howard T. Odum as the maximum power principle in thermodynamics, whereby evolutionary systems with selective advantage maximise the rate of useful energy transformation.
The principles of natural selection have inspired a variety of computational techniques, such as "soft" artificial life, that simulate selective processes and can be highly efficient in 'adapting' entities to an environment defined by a specified fitness function. For example, a class of heuristic optimisation algorithms known as genetic algorithms, pioneered by John Henry Holland in the 1970s and expanded upon by David E. Goldberg, identify optimal solutions by simulated reproduction and mutation of a population of solutions defined by an initial probability distribution. Such algorithms are particularly useful when applied to problems whose energy landscape is very rough or has many local minima.
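A minimal genetic algorithm in this spirit (a toy sketch rather than any particular library's implementation; the bit-string encoding, fitness function and parameters are arbitrary choices for illustration) evolves candidate solutions by repeated selection, crossover and mutation:

```python
import random

TARGET = [1] * 20                      # arbitrary optimum: the all-ones string

def fitness(bits):
    """Count how many bits match the target (higher is fitter)."""
    return sum(b == t for b, t in zip(bits, TARGET))

def evolve(pop_size=50, generations=60, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter individuals are more likely to be chosen as parents
        weights = [fitness(ind) + 1 for ind in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        # Crossover and mutation produce the next generation
        next_gen = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            cut = random.randrange(1, len(TARGET))
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in child]
                next_gen.append(child)
        population = next_gen[:pop_size]
    return max(population, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

Here "fitness" is simply the number of bits matching a fixed target, so selection, recombination and mutation rapidly assemble a near-optimal string; real applications replace this toy score with the objective function of the problem being optimised.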
In fiction
Darwinian evolution by natural selection is pervasive in literature, whether taken optimistically in terms of how humanity may evolve towards perfection, or pessimistically in terms of the dire consequences of the interaction of human nature and the struggle for survival. Among major responses is Samuel Butler's 1872 pessimistic Erewhon ("nowhere", written mostly backwards). In 1893 H. G. Wells imagined "The Man of the Year Million", transformed by natural selection into a being with a huge head and eyes, and shrunken body.
Notes
References
Sources
Further reading
For technical audiences
For general audiences
Historical
External links
Biological interactions
Charles Darwin
Competition
Ecological processes
Ethology
Evolution
Evolutionary biology
Selection
Sexual selection
VALS
VALS (Values and Lifestyle Survey) is a proprietary research methodology used for psychographic market segmentation. Market segmentation is designed to guide companies in tailoring their products and services in order to appeal to the people most likely to purchase them.
History and description
VALS was developed in 1978 by social scientist and consumer futurist Arnold Mitchell and his colleagues at SRI International. It was immediately embraced by advertising agencies and is currently offered as a product of SRI's consulting services division. VALS draws heavily on the work of Harvard sociologist David Riesman and psychologist Abraham Maslow.
Mitchell used statistics to identify attitudinal and demographic questions that helped categorize adult American consumers into one of nine lifestyle types: survivors (4%), sustainers (7%), belongers (35%), emulators (9%), achievers (22%), I-am-me (5%), experiential (7%), societally conscious (9%), and integrated (2%). The questions were weighted using data developed from a sample of 1,635 Americans and their significant others, who responded to an SRI International survey in 1980.
The main dimensions of the VALS framework are resources (the vertical dimension) and primary motivation (the horizontal dimension). The vertical dimension segments people based on the degree to which they are innovative and have resources such as income, education, self-confidence, intelligence, leadership skills, and energy. The horizontal dimension represents primary motivations and includes three distinct types:
Consumers driven by knowledge and principles are motivated primarily by ideals. These consumers include groups called Thinkers and Believers.
Consumers driven by demonstrating success to their peers are motivated primarily by achievement. These consumers include groups referred to as Achievers and Strivers.
Consumers driven by a desire for social or physical activity, variety, and risk taking are motivated primarily by self-expression. These consumers include the groups known as Experiencers and Makers.
At the top of the rectangle are the Innovators, who have such high resources that they could have any of the three primary motivations. At the bottom of the rectangle are the Survivors, who live complacently and within their means without a strong primary motivation of the types listed above. The VALS Framework gives more details about each of the groups.
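The way the two dimensions combine into segments can be illustrated with a deliberately simplified, hypothetical sketch (the scores, thresholds and rules below are invented for illustration; the actual VALS survey items and scoring are proprietary):

```python
# Hypothetical mapping from (primary motivation, resource level) to a VALS-style
# segment, mirroring the framework's two dimensions. Real VALS scoring is
# proprietary; these rules are illustrative only.
SEGMENTS = {
    ("ideals", "high"): "Thinkers",
    ("ideals", "low"): "Believers",
    ("achievement", "high"): "Achievers",
    ("achievement", "low"): "Strivers",
    ("self-expression", "high"): "Experiencers",
    ("self-expression", "low"): "Makers",
}

def classify(motivation, resources_score, threshold=0.5):
    """resources_score: 0..1 summary of income, education, confidence, etc."""
    if resources_score > 0.9:      # very high resources: any motivation
        return "Innovators"
    if resources_score < 0.1:      # very low resources: no strong motivation
        return "Survivors"
    level = "high" if resources_score >= threshold else "low"
    return SEGMENTS[(motivation, level)]

print(classify("ideals", 0.7))           # Thinkers
print(classify("self-expression", 0.3))  # Makers
print(classify("achievement", 0.95))     # Innovators
```

A real application would derive the resource score and the primary motivation from weighted survey responses rather than passing them in directly.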
VALS2
Researchers faced some problems with the VALS method, and in response, SRI developed the VALS2 program in 1978 and significantly revised it in 1989. VALS2 places less emphasis on activities and interests and more on a psychological base to tap relatively enduring attitudes and values. The VALS2 program has two dimensions. The first dimension, Self-orientation, determines the type of goals and behaviours that individuals will pursue, and refers to patterns of attitudes and activities which help individuals reinforce, sustain, or modify their social self-image. This is a fundamental human need.
The second dimension, Resources, reflects the ability of individuals to pursue their dominant self-orientation and includes full-range of physical, psychological, demographic, and material means such as self-confidence, interpersonal skills, inventiveness, intelligence, eagerness to buy, money, position, education, etc. According to VALS 2, a consumer purchases certain products and services because the individual is a specific type of person. The purchase is believed to reflect a consumer's lifestyle, which is a function of self–orientation and resources.
In 1991, the name VALS2 was switched back to VALS, because of brand equity.
Criticisms
Psychographic segmentation has been criticized by well-known public opinion analyst and social scientist Daniel Yankelovich, who says psychographics are "very weak" at predicting people's purchases, making it a "very poor" tool for corporate decision-makers.
The VALS Framework has also been criticized as too culturally specific for international use.
Segments
The following types correspond to VALS segments of US adults based on two concepts for understanding consumers: primary motivation and resources.
Innovators. These consumers are on the leading edge of change, have the highest incomes, and such high self-esteem and abundant resources that they can indulge in any or all self-orientations. They are located above the rectangle. Image is important to them as an expression of taste, independence, and character. Their consumer choices are directed toward the "finer things in life."
Thinkers. These consumers are the high-resource group of those who are motivated by ideals. They are mature, responsible, well-educated professionals. Their leisure activities center on their homes, but they are well informed about what goes on in the world and are open to new ideas and social change. They have high incomes but are practical consumers and rational decision makers.
Believers. These consumers are the low-resource group of those who are motivated by ideals. They are conservative and predictable consumers who favor local products and established brands. Their lives are centered on family, community, and the nation. They have modest incomes.
Achievers. These consumers are the high-resource group of those who are motivated by achievement. They are successful work-oriented people who get their satisfaction from their jobs and families. They are politically conservative and respect authority and the status quo. They favor established products and services that show off their success to their peers.
Strivers. These consumers are the low-resource group of those who are motivated by achievement. They have values very similar to achievers but have fewer economic, social, and psychological resources. Style is extremely important to them as they strive to emulate people they admire.
Experiencers. These consumers are the high-resource group of those who are motivated by self-expression. They are the youngest of all the segments, with a median age of 25. They have a lot of energy, which they pour into physical exercise and social activities. They are avid consumers, spending heavily on clothing, fast-foods, music, and other youthful favorites, with particular emphasis on new products and services.
Makers. These consumers are the low-resource group of those who are motivated by self-expression. They are practical people who value self-sufficiency. They are focused on the familiar - family, work, and physical recreation - and have little interest in the broader world. As consumers, they appreciate practical and functional products.
Survivors. These consumers have the lowest incomes. They have too few resources to be included in any consumer self-orientation and are thus located below the rectangle. They are the oldest of all the segments, with a median age of 61. Within their limited means, they tend to be brand-loyal consumers.
See also
Advertising
Data mining
Demographics
Fear, uncertainty, and doubt
Marketing
Psychographics
References
Further reading
External links
Strategic Business Insights Official website (was formerly SRI Consulting Business Intelligence)
Market research
Market segmentation | 0.781024 | 0.989805 | 0.773061 |
Economics | Economics is a social science that studies the production, distribution, and consumption of goods and services.
Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyses what is viewed as basic elements within economies, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyses economies as systems where production, distribution, consumption, savings, and investment expenditure interact, and factors affecting it: factors of production, such as labour, capital, land, and enterprise, inflation, economic growth, and public policies that have impact on these elements. It also seeks to analyse and describe the global economy.
Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.
Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science, and the environment.
Definitions of economics
The earlier term for the discipline was "political economy", but since the late 19th century, it has commonly been called "economics". The term is ultimately derived from the Ancient Greek oikonomia, a term for the "way (nomos) to run a household (oikos)", or in other words the know-how of an oikonomikos, a "household or homestead manager". Derived terms such as "economy" can therefore often mean "frugal" or "thrifty". By extension then, "political economy" was the way to manage a polis or state.
There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations", in particular as a branch of the science of a statesman or legislator that aims both to provide a plentiful revenue or subsistence for the people and to supply the state with a revenue sufficient for the public services.
Jean-Baptiste Say (1803), distinguishing the subject matter from its public-policy uses, defined it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics, in this context, commonly linked to the pessimistic analysis of Malthus (1798). John Stuart Mill (1844) delimited the subject matter further, confining it to the laws of those social phenomena that arise from the pursuit of wealth, in so far as they are not modified by the pursuit of any other object.
Alfred Marshall provided a still widely cited definition in his textbook Principles of Economics (1890) that extended analysis beyond wealth and from the societal to the microeconomic level, describing economics as "a study of mankind in the ordinary business of life".
Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject": that economics is "the science which studies human behavior as a relationship between ends and scarce means which have alternative uses".
Robbins described the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity." He affirmed that previous economists had usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. War has winning as its sought-after end, it generates both costs and benefits, and resources (human life and other costs) are used to attain that end. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics, then, cannot be defined as the science that studies wealth, war, crime, education, and every other field to which economic analysis can be applied; it is the science that studies a particular common aspect of each of those subjects: they all use scarce resources to attain a sought-after end.
Some subsequent comments criticised the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. There are other criticisms as well, such as in scarcity not accounting for the macroeconomics of high unemployment.
Gary Becker, a contributor to the expansion of economics into new areas, described the approach he favoured as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly." One commentary characterises the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve.
Many economists, including Nobel Prize winners James M. Buchanan and Ronald Coase, reject the method-based definition of Robbins and continue to prefer definitions like those of Say, in terms of the subject matter. Ha-Joon Chang has, for example, argued that Robbins's definition would make economics very peculiar because all other sciences define themselves in terms of their area or object of inquiry rather than their methodology. Biologists do not insist that all biology be studied through DNA analysis; people study living organisms in many different ways, so some perform DNA analysis, others analyse anatomy, and still others build game-theoretic models of animal behaviour, yet all of this is called biology because it all studies living organisms. According to Chang, the view that the economy can and should be studied in only one way (for example, by studying only rational choices), which goes one step further and in effect redefines economics as a theory of everything, is therefore peculiar.
History of economic thought
From antiquity through the physiocrats
Questions regarding distribution of resources are found throughout the writings of the Boeotian poet Hesiod, and several economic historians have described Hesiod as the "first economist". However, the word Oikos, the Greek word from which the word economy derives, was used for issues regarding how to manage a household (which was understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists for being the source of the word economy. Joseph Schumpeter described 16th and 17th century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective.
Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing inexpensive raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies.
Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy.
Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as "perhaps the purest approximation to the truth that has yet been published" on the subject.
Classical political economy
The publication of Adam Smith's The Wealth of Nations in 1776, has been described as "the effective birth of economics as a separate discipline." The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive.
Smith discusses potential benefits of specialisation by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries. His "theorem" that "the division of labor is limited by the extent of the market" has been described as the "core of a theory of the functions of firm and industry" and a "fundamental principle of economic organization." To Smith has also been ascribed "the most important substantive proposition in all of economics" and foundation of resource-allocation theory—that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment).
In an argument that includes "one of the most famous passages in all economics," Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this, Smith writes, the individual is "led by an invisible hand to promote an end which was no part of his intention".
The Reverend Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Simon has criticised Malthus's conclusions.
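In symbols, a stylised version of Malthus's contrast (the notation is illustrative, not his own) is

P_t = P_0\,(1+r)^t, \qquad F_t = F_0 + a\,t, \qquad r > 0,\ a > 0,

so that population P grows geometrically while food F grows only arithmetically, and food per person, F_t / P_t, falls toward subsistence in the absence of checks on population growth.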
While Adam Smith emphasised production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialise in producing and exporting the goods for which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade.
Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene.
Value theory was important in classical theory. Smith wrote that the "real price of every thing ... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.
Marxian economics
Marxist (later, Marxian) economics descends from classical economics and it derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in 1867. Marx focused on the labour theory of value and theory of surplus value. Marx wrote that they were mechanisms used by capital to exploit labour. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work had created.
Marxian economics was further developed by Karl Kautsky (1854–1938)'s The Economic Doctrines of Karl Marx and The Class Struggle (Erfurt Program), Rudolf Hilferding's (1877–1941) Finance Capital, Vladimir Lenin (1870–1924)'s The Development of Capitalism in Russia and Imperialism, the Highest Stage of Capitalism, and Rosa Luxemburg (1871–1919)'s The Accumulation of Capital.
Neoclassical economics
At its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has survived in part up to the present, modified by substituting the word "wealth" for "goods and services" meaning that wealth may include non-material objects as well. One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economise (seeking the greatest welfare while avoiding the wasting of scarce resources). According to Robbins: "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". Robbins' definition eventually became widely accepted by mainstream economists, and found its way into current textbooks. Although far from unanimous, most mainstream economists would accept some version of Robbins' definition, even though many have raised serious objections to the scope and method of economics, emanating from that definition.
A body of theory later termed "neoclassical economics" formed from about 1870 to 1910. The term "economics" was popularised by such neoclassical economists as Alfred Marshall and Mary Paley Marshall as a concise synonym for "economic science" and a substitute for the earlier "political economy". This corresponded to the influence on the subject of mathematical methods used in the natural sciences.
Neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and income distribution. It rejected the classical economics' labour theory of value in favour of a marginal utility theory of value on the demand side and a more comprehensive theory of costs on the supply side. In the 20th century, neoclassical theorists departed from an earlier idea that suggested measuring total utility for a society, opting instead for ordinal utility, which posits behaviour-based relations across individuals.
In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics.
Neoclassical economics is occasionally referred as orthodox economics whether by its critics or sympathisers. Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalise earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income.
Neoclassical economics studies the behaviour of individuals, households, and organisations (called economic actors, players, or agents), when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice. There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more players to attain the best possible outcome.
Keynesian economics
Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low "effective demand" and why even price flexibility and monetary policy might be unavailing. The term "revolutionary" has been applied to the book in its impact on economic analysis.
During the following decades, many economists followed Keynes' ideas and expanded on his works. John Hicks and Alvin Hansen developed the IS–LM model which was a simple formalisation of some of Keynes' insights on the economy's short-run equilibrium. Franco Modigliani and James Tobin developed important theories of private consumption and investment, respectively, two major components of aggregate demand. Lawrence Klein built the first large-scale macroeconometric model, applying the Keynesian thinking systematically to the US economy.
Post-WWII economics
Immediately after World War II, Keynesian economics was the dominant economic view of the United States establishment and its allies, while Marxian economics was the dominant economic view of the Soviet Union nomenklatura and its allies.
Monetarism
Monetarism appeared in the 1950s and 1960s, its intellectual leader being Milton Friedman. Monetarists contended that monetary policy and other monetary shocks, as represented by the growth in the money stock, were an important cause of economic fluctuations, and consequently that monetary policy was more important than fiscal policy for purposes of stabilisation. Friedman was also skeptical about the ability of central banks to conduct a sensible active monetary policy in practice, advocating instead using simple rules such as a steady rate of money growth.
Monetarism rose to prominence in the 1970s and 1980s, when several major central banks followed a monetarist-inspired policy, but was later abandoned because the results were unsatisfactory.
New classical economics
A more fundamental challenge to the prevailing Keynesian paradigm came in the 1970s from new classical economists like Robert Lucas, Thomas Sargent and Edward Prescott. They introduced the notion of rational expectations in economics, which had profound implications for many economic discussions, among which were the so-called Lucas critique and the presentation of real business cycle models.
New Keynesians
During the 1980s, a group of researchers appeared being called New Keynesian economists, including among others George Akerlof, Janet Yellen, Gregory Mankiw and Olivier Blanchard. They adopted the principle of rational expectations and other monetarist or new classical ideas such as building upon models employing micro foundations and optimizing behaviour, but simultaneously emphasised the importance of various market failures for the functioning of the economy, as had Keynes. Not least, they proposed various reasons that potentially explained the empirically observed features of price and wage rigidity, usually made to be endogenous features of the models, rather than simply assumed as in older Keynesian-style ones.
New neoclassical synthesis
After decades of often heated discussions between Keynesians, monetarists, new classical and new Keynesian economists, a synthesis emerged by the 2000s, often given the name the new neoclassical synthesis. It integrated the rational expectations and optimizing framework of the new classical theory with a new Keynesian role for nominal rigidities and other market imperfections like imperfect information in goods, labour and credit markets. The monetarist importance of monetary policy in stabilizing the economy and in particular controlling inflation was recognised as well as the traditional Keynesian insistence that fiscal policy could also play an influential role in affecting aggregate demand. Methodologically, the synthesis led to a new class of applied models, known as dynamic stochastic general equilibrium or DSGE models, descending from real business cycles models, but extended with several new Keynesian and other features. These models proved useful and influential in the design of modern monetary policy and are now standard workhorses in most central banks.
After the financial crisis
After the 2007–2008 financial crisis, macroeconomic research has put greater emphasis on understanding and integrating the financial system into models of the general economy and shedding light on the ways in which problems in the financial sector can turn into major macroeconomic recessions. In this and other research branches, inspiration from behavioural economics has started playing a more important role in mainstream economic theory. Also, heterogeneity among the economic agents, e.g. differences in income, plays an increasing role in recent economic research.
Other schools and approaches
Other schools or trends of thought refer to a particular style of economics practised at and disseminated from well-defined groups of academicians that have become known worldwide; they include the Freiburg School, the School of Lausanne, the Stockholm school and the Chicago school of economics. During the 1970s and 1980s mainstream economics was sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US, and the Freshwater, or Chicago school approach.
Within macroeconomics there are, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis.
Beside the mainstream development of economic thought, various alternative or heterodox economic theories have evolved over time, positioning themselves in contrast to mainstream theory. These include:
Austrian School, emphasizing human action, property rights and the freedom to contract and transact to have a thriving and successful economy. It also emphasises that the state should play as small a role as possible (if any role) in the regulation of economic activity between two transacting parties. Friedrich Hayek and Ludwig von Mises are the two most prominent representatives of the Austrian school.
Post-Keynesian economics concentrates on macroeconomic rigidities and adjustment processes. It is generally associated with the University of Cambridge and the work of Joan Robinson.
Ecological economics like environmental economics studies the interactions between human economies and the ecosystems in which they are embedded, but in contrast to environmental economics takes an oppositional position towards general mainstream economic principles. A major difference between the two subdisciplines is their assumptions about the substitution possibilities between human-made and natural capital.
Additionally, alternative developments include Marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics.
Feminist economics emphasises the role that gender plays in economies, challenging analyses that render gender invisible or support gender-oppressive economic systems. The goal is to create economic research and policy analysis that is inclusive and gender-aware to encourage gender equality and improve the well-being of marginalised groups.
Methodology
Theoretical research
Mainstream economic theory relies upon analytical economic models. When creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes both the dominant or orthodox theoretical as well as methodological framework, economic theory can also take the form of other schools of thought such as in heterodox economic theories.
In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. Early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time, macroeconomists, including new Keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part.
Sometimes an economic hypothesis is only qualitative, not quantitative.
Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships. At a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyse problems in economics. Paul Samuelson's treatise Foundations of Economic Analysis (1947) exemplifies the method, particularly as to maximizing behavioural relations of agents reaching equilibrium. The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data.
Empirical research
Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data is observationally studied; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments.
Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance ("signal strength") of the hypothesised relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs.
Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct.
In behavioural economics, psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a "genuine science".
Microeconomics
Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment.
Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker" as no participant influences the price of a product. In the real world, markets often experience imperfect competition.
Forms of imperfect competition include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Firms under imperfect competition have the potential to be "price makers", which means that they can influence the prices of their products.
In the partial-equilibrium method of analysis, it is assumed that activity in the market being analysed does not affect other markets, and activity is aggregated in that one market only. General-equilibrium theory, by contrast, studies various markets and their behaviour, aggregating activity across all markets; it examines both changes in markets and their interactions leading towards equilibrium.
Production, cost, and efficiency
In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods, and "guns" vs "butter".
Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.
Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.
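Written out, with u_i denoting the (hypothetical) welfare of individual i, the criterion is:

\text{An allocation } x \text{ is Pareto efficient if there is no feasible allocation } y \text{ with } u_i(y) \ge u_i(x) \text{ for every } i \text{ and } u_j(y) > u_j(x) \text{ for at least one } j.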
The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph (as at the right) showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.
Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve. If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter.
The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 units of butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents.
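In the two-good example, the trade-off is simply the (negative of the) slope of the PPF:

\text{opportunity cost of one gun} \;=\; -\frac{\Delta\,\text{butter}}{\Delta\,\text{guns}} \;=\; 100 \ \text{units of butter}

in the numerical example above.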
By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A), is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organisation of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.
Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organise society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution."
Specialisation
Specialisation is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input.
Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialise in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else.
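A minimal numerical sketch of this logic is given below; the regions, goods, and labour requirements (hours per unit of output) are hypothetical, chosen so that region A has an absolute advantage in both goods but a comparative advantage only in cloth.

# Hypothetical hours of labour needed per unit of output.
hours = {
    "A": {"cloth": 1, "wine": 2},
    "B": {"cloth": 6, "wine": 3},
}

def opportunity_cost(region, good, other):
    # Units of 'other' forgone for each unit of 'good' produced.
    return hours[region][good] / hours[region][other]

for region in hours:
    cost = opportunity_cost(region, "cloth", "wine")
    print(f"{region}: one unit of cloth costs {cost} units of wine")

# A: one unit of cloth costs 0.5 units of wine
# B: one unit of cloth costs 2.0 units of wine
# A's comparative advantage is in cloth and B's in wine, so both regions gain
# by specialising and trading at any ratio between 0.5 and 2 wine per cloth,
# even though B has no absolute advantage in either good.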
It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. This has led to investigation of economies of scale and agglomeration to explain specialisation in similar but differentiated product lines, to the overall benefit of respective trading parties or regions.
The general theory of specialisation applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses.
An example that combines features above is a country that specialises in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in different opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products.
Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design. Such specialisation of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.
Supply and demand
Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.
For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximisation" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesised relation of each individual consumer for ranking different commodity bundles as more or less preferred.
The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.
Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.
That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors inputs of production are all taken to be constant for a specific time period of evaluation of supply.
Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.
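As a minimal worked example of the equilibrium condition, take hypothetical linear schedules Qd = a − bP and Qs = c + dP; setting quantity demanded equal to quantity supplied gives the market-clearing price. The numbers below are invented for illustration.

# Hypothetical linear demand and supply: Qd = a - b*P, Qs = c + d*P.
a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope

# Equilibrium requires Qd = Qs, i.e. a - b*P = c + d*P, so P* = (a - c)/(b + d).
p_star = (a - c) / (b + d)
q_star = a - b * p_star
print(f"equilibrium price = {p_star:.2f}, equilibrium quantity = {q_star:.2f}")

# Below p_star quantity demanded exceeds quantity supplied (a shortage, bidding
# the price up); above p_star there is a surplus, pushing the price down.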
Firms
People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organise their production in firms when the cost of doing business within a firm becomes lower than the cost of doing it on the market. Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.
In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organisation generalises from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.
Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimise business decisions, including unit-cost minimisation and profit maximisation, given the firm's objectives and constraints imposed by technology and market conditions.
Uncertainty and game theory
Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.
Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organisation, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own.
In this, it generalises maximisation approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology.
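To illustrate the kind of strategic-interaction analysis involved, the sketch below sets up a hypothetical two-player, two-strategy payoff matrix (a prisoner's-dilemma-style game, not drawn from the text) and finds its pure-strategy Nash equilibria by checking that neither player can gain from a unilateral deviation.

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    # A strategy pair is a Nash equilibrium if no player gains by deviating alone.
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)   # [('defect', 'defect')]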
Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation.
Some market organisations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be. Information asymmetry arises here, if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving).
Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.
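The adverse-selection ("lemons") mechanism described above can be sketched numerically. The car values below are invented; the point is only that when buyers offer the average value of the cars they expect to find on the market, owners of better-than-average cars withdraw, and the market unravels.

# Hypothetical private valuations of used cars held by sellers.
car_values = [1000, 2000, 3000, 4000, 5000]

def cars_traded(values):
    # Buyers cannot observe quality, so they offer the average value of the cars
    # still on the market; sellers whose car is worth more than the offer
    # withdraw it. Iterate until the set of cars on the market settles.
    on_market = list(values)
    while on_market:
        offer = sum(on_market) / len(on_market)
        remaining = [v for v in on_market if v <= offer]
        if remaining == on_market:
            return on_market, offer
        on_market = remaining
    return [], 0.0

remaining, offer = cars_traded(car_values)
print(remaining, offer)   # only the cheapest car remains: [1000] 1000.0

Good cars are driven from the market even though buyers would willingly pay their full value if quality were observable.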
Market failure
The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorise market failures differently, the following categories emerge in the main texts.
Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above.
Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause.
Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time.
Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidise or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities.

Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply.
In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesised long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.
Some specialised fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads".
Policy options include regulations that reflect cost–benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.
Welfare
Welfare economics uses microeconomics techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyses social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no "social welfare" apart from the "welfare" associated with its individual units.
Macroeconomics
Macroeconomics, another branch of economics, examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy.
Since at least the 1960s, macroeconomics has been characterised by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject.
Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth.
Growth
Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth.
Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.
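For instance, the capital-accumulation equation at the core of the neoclassical (Solow–Swan) growth model is usually written as

\dot{k} = s\,f(k) - (n + \delta)\,k,

where k is capital per worker, f(k) output per worker, s the saving rate, n the population growth rate and \delta the depreciation rate; steady-state output per worker then depends on s, n, \delta and the level of technology.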
Business cycle
The economics of a depression was the spur for the creation of "macroeconomics" as a separate discipline. During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output.
He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilise output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory.
Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run.
New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory, led by Robert Lucas, and real business cycle theory.
In contrast, the new Keynesian approach retains the rational expectations assumption; however, it assumes a variety of market failures. In particular, New Keynesians assume prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions.
Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.
Unemployment
The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes.
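Expressed as a formula (a standard textbook definition rather than any particular statistical agency's exact methodology), the unemployment rate is the jobless share of the labour force:

```latex
u = \frac{\text{unemployed}}{\text{labour force}} \times 100\%,
\qquad \text{labour force} = \text{employed} + \text{unemployed}
```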
Classical models of unemployment suggest that unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment.
Structural unemployment covers a variety of possible causes of unemployment including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in demand. Structural unemployment is similar to frictional unemployment since both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills not just the short term search process.
While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment.
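The statement above can be written algebraically as a rough rule of thumb. The sketch below is one common textbook rendering of that sentence, not a precise estimate; fuller statements of Okun's law net out trend output growth and use country- and period-specific coefficients:

```latex
% Okun's "rule of thumb": roughly a 3% rise in output
% accompanies a 1 percentage-point fall in the unemployment rate
\Delta u \;\approx\; -\frac{1}{3}\,\frac{\Delta Y}{Y}\times 100\%
```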
Money and monetary policy
Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, "Money is what money does" ("Money is that money does" in the original).
As a medium of exchange, money facilitates trade. It is essentially a measure of value and more importantly, a store of value being a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialised producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. Then it is less costly for the seller to accept money in exchange, rather than what the buyer produces.
Monetary policy is the policy that central banks conduct to accomplish their broader objectives. Most central banks in developed countries follow inflation targeting, whereas the main objective for many central banks in developing countries is to uphold a fixed exchange rate system. The primary monetary tool is normally the adjustment of interest rates, either directly via administratively changing the central bank's own interest rates or indirectly via open market operations. Via the monetary transmission mechanism, interest rate changes affect investment, consumption and net exports, and hence aggregate demand, output and employment, and ultimately the development of wages and inflation.
Fiscal policy
Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand. Resources that have been idled can be used by the government.
For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity.
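In the simplest textbook treatment (a stylized illustration, not an empirical estimate), the size of this multiplier depends on the marginal propensity to consume (MPC), the fraction of each extra unit of income that is spent rather than saved:

```latex
% Simple Keynesian spending multiplier
k = \frac{1}{1 - \mathit{MPC}}
\qquad \text{e.g. } \mathit{MPC} = 0.8 \;\Rightarrow\; k = 5
```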
The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources. If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue while others do not think it is a major issue when output is depressed.
Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes.
Inequality
Economic inequality includes income inequality, measured using the distribution of income (the amount of money people receive), and wealth inequality measured using the distribution of wealth (the amount of wealth people own), and other measures such as consumption, land ownership, and human capital. Inequality exists at different extents between countries or states, groups of people, and individuals. There are many methods for measuring inequality, the Gini coefficient being widely used for income differences among individuals. An example measure of inequality between countries is the Inequality-adjusted Human Development Index, a composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity.
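As an illustration of how one such measure works, the following is a minimal sketch of computing the Gini coefficient from a list of incomes using the standard sorted-values formula; the incomes shown are hypothetical and the function is not tied to any particular statistical agency's methodology:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference definition:
    G = (sum over all pairs |x_i - x_j|) / (2 * n^2 * mean)."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # Equivalent closed form on sorted values:
    # G = (2 * sum(i * x_i) / (n^2 * mean)) - (n + 1) / n, with i = 1..n
    weighted_sum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted_sum) / (n * n * mean) - (n + 1) / n

# Perfect equality gives 0; concentrating all income in one person approaches 1.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([0, 0, 0, 100]))     # 0.75
```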
Research has linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict. Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income. Inequality is at the centre stage of economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution. In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits).
Other branches of economics
Public economics
Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost–benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.
Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like.
Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.
International economics
International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas. International finance is a macroeconomic field which examines the flow of capital across international borders, and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalisation.
Labour economics
Labour economics seeks to understand the functioning and dynamics of the markets for wage labour. Labour markets function through the interaction of workers and employers. Labour economics looks at the suppliers of labour services (workers) and the demanders of labour services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labour is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. Some theories have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic theories that regard human capital as a contradiction in terms.
Development economics
Development economics examines economic aspects of the development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors.
Related subjects
Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics.
Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be. A seminal article by Ronald Coase published in 1961 suggested that well-defined property rights could overcome the problems of externalities.
Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in the past that persons and groups with common economic interests have used politics to effect changes beneficial to their interests.
Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics.
The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of James S. Coleman, Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field.
Gary Becker in 1974 presented an economic theory of social interactions, whose applications included the family, charity, merit goods and multiperson interactions, and envy and hatred. He and Kevin Murphy authored a book in 2001 that analysed market behaviour in a social environment.
Profession
The professionalisation of economics, reflected in the growth of graduate programmes on the subject, has been described as "the main change in economics since around 1900". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics.
In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or National Bureau of Statistics. See Economic analyst.
There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize.
Contemporary economics uses mathematics. Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialise in econometrics and mathematical methods.
Women in economics
Harriet Martineau (1802–1876) was a widely read populariser of classical economic thought. Mary Paley Marshall (1850–1944), the first woman lecturer at a British economics faculty, wrote The Economics of Industry with her husband Alfred Marshall. Joan Robinson (1903–1983) was an important post-Keynesian economist. The economic historian Anna Schwartz (1915–2012) coauthored A Monetary History of the United States, 1867–1960 with Milton Friedman. Three women have received the Nobel Memorial Prize in Economic Sciences: Elinor Ostrom (2009), Esther Duflo (2019) and Claudia Goldin (2023). Five have received the John Bates Clark Medal: Susan Athey (2007), Esther Duflo (2010), Amy Finkelstein (2012), Emi Nakamura (2019) and Melissa Dell (2020).
Women's authorship share in prominent economic journals declined from 1940 to the 1970s, but has subsequently risen, with different patterns of gendered coauthorship. Women remain globally under-represented in the profession (19% of authors in the RePEc database in 2018), with national variation.
See also
Asymmetric cointegration
Critical juncture theory
Democracy and economic growth
Economic democracy
Economic ideology
Economic union
Economics terminology that differs from common usage
Free trade
Glossary of economics
Happiness economics
Humanistic economics
Index of economics articles
List of economics awards
List of economics films
Outline of economics
Socioeconomics
Solidarity economy
Notes
References
Sources
Further reading
Post, Louis F. (1927), The Basic Facts of Economics: A Common-Sense Primer for Advanced Students. United States: Columbian Printing Company, Incorporated.
External links
General information
Economic journals on the web.
Economics at Encyclopædia Britannica
Economics A–Z. Definitions from The Economist.
Economics Online (UK-based), with drop-down menus at top, incl. Definitions.
Intute: Economics: Internet directory of UK universities.
Research Papers in Economics (RePEc)
Resources For Economists : American Economic Association-sponsored guide to 2,000+ Internet resources from "Data" to "Neat Stuff", updated quarterly.
Institutions and organizations
Economics Departments, Institutes and Research Centers in the World
Organisation for Economic Co-operation and Development (OECD) Statistics
United Nations Statistics Division
World Bank Data
American Economic Association
Study resources
Economics at About.com
Economics textbooks on Wikibooks
MERLOT Learning Materials: Economics : US-based database of learning materials
Online Learning and Teaching Materials UK Economics Network's database of text, slides, glossaries and other resources | 0.773346 | 0.999622 | 0.773053 |
Climate crisis | Climate crisis is a term that is used to describe global warming and climate change, and their effects. This term and the term climate emergency have been used to describe the threat of global warming to humanity and Earth, and to urge aggressive climate change mitigation and transformational adaptation.
The term climate crisis is used by those who "believe it evokes the gravity of the threats the planet faces from continued greenhouse gas emissions and can help spur the kind of political willpower that has long been missing from climate advocacy". They believe, much as global warming provoked more emotional engagement and support for action than climate change, calling climate change a crisis could have an even stronger effect.
A study has shown the term climate crisis invokes a strong emotional response by conveying a sense of urgency. However, some caution this response may be counter-productive and may cause a backlash due to perceptions of alarmist exaggeration.
In the scientific journal BioScience, a January 2020 article that was endorsed by over 11,000 scientists states: "the climate crisis has arrived" and that an "immense increase of scale in endeavors to conserve our biosphere is needed to avoid untold suffering due to the climate crisis".
Scientific basis
Until the mid 2010s, the scientific community had been using neutral, constrained language when discussing climate change. Advocacy groups, politicians and media have traditionally been using more-powerful language than that used by climate scientists. From around 2014, a shift in scientists' language connoted an increased sense of urgency. Use of the terms urgency, climate crisis and climate emergency in scientific publications and in mass media has grown. Scientists have called for more-extensive action and transformational climate-change adaptation that focuses on large-scale change in systems.
In 2020, a group of over 11,000 scientists said in a paper in BioScience that describing global warming as a climate emergency or climate crisis was appropriate. The scientists stated an "immense increase of scale in endeavor" is needed to conserve the biosphere. They warned about "profoundly troubling signs", which may have many indirect effects such as large-scale human migration and food insecurity; these signs include increases in dairy and meat production, fossil fuel consumption, greenhouse gas emissions and deforestation, activities that are all concurrent with upward trends in climate-change effects such as rising global temperatures, global ice melt and extreme weather.
In 2019, scientists published an article in Nature saying evidence from climate tipping points alone suggests "we are in a state of planetary emergency". They defined emergency as a product of risk and urgency, factors they said are "acute". Previous research had shown individual tipping points could be exceeded with between 1 and 2 °C of global temperature increase; warming has already exceeded 1 °C. A global cascade of tipping points is possible with greater warming.
Definitions
In the context of climate change, the word crisis is used to denote "a crucial or decisive point or situation that could lead to a tipping point". It is a situation with an "unprecedented circumstance". A similar definition states in this context, crisis means "a turning point or a condition of instability or danger" and implies "action needs to be taken now or else the consequences will be disastrous". Another definition defines climate crisis as "the various negative effects that unmitigated climate change is causing or threatening to cause on our planet, especially where these effects have a direct impact on humanity".
Use of the term
20th century
Former U.S. Vice President Al Gore has used crisis terminology since the 1980s; the Climate Crisis Coalition, which was formed in 2004, formalized the term climate crisis. A 1990 report by the American University International Law Review includes legal texts that use the word crisis. "The Cairo Compact: Toward a Concerted World-Wide Response to the Climate Crisis" (1989) states: "All nations ... will have to cooperate on an unprecedented scale. They will have to make difficult commitments without delay to address this crisis."
21st century
In the late 2010s, the phrase climate crisis emerged "as a crucial piece of the climate hawk lexicon", and was adopted by the Green New Deal, The Guardian, Greta Thunberg, and U.S. Democratic political candidates such as Kamala Harris. At the same time, it came into more-popular use following a series of warnings from climate scientists and newly-energized activists.
In the U.S. in late 2018, the United States House of Representatives established the House Select Committee on the Climate Crisis, the name of which was regarded as "a reminder of how much energy politics have changed in the last decade". The original House climate committee had been called the "Select Committee on Energy Independence and Global Warming" in 2007. It was abolished in 2011 when Republicans regained control of the House.
The advocacy group Public Citizen reported that in 2018, less than 10% of articles in top-50 U.S. newspapers used the terms crisis or emergency in the context of climate change. In the same year, 3.5% of national television news segments in the U.S. referred to climate change as a crisis or an emergency (50 of 1,400). In 2019, Public Citizen launched a campaign called "Call it a Climate Crisis"; it urged major media organizations to adopt the term climate crisis. In the first four months of 2019, the number of uses of the term in U.S. media tripled to 150. Likewise, the Sierra Club, the Sunrise Movement, Greenpeace, and other environmental and progressive organizations joined in a June 6, 2019 Public Citizen letter to news organizations urging the news organizations to call climate change and human inaction "what it is–a crisis–and to cover it like one".
In 2019, the language describing climate appeared to change: the UN Secretary General's address at the 2019 UN Climate Action Summit used more emphatic language; Al Gore's campaign The Climate Reality Project, Greenpeace and the Sunrise Movement petitioned news organizations to alter their language; and in May 2019, The Guardian changed its style guide to favor the terms "climate emergency, crisis or breakdown" and "global heating". Editor-in-Chief Katharine Viner said: "We want to ensure that we are being scientifically precise, while also communicating clearly with readers on this very important issue. The phrase 'climate change', for example, sounds rather passive and gentle when what scientists are talking about is a catastrophe for humanity." The Guardian became a lead partner in Covering Climate Now, an initiative of news organizations Columbia Journalism Review and The Nation that was founded in 2019 to address the need for stronger climate coverage.
In May 2019, The Climate Reality Project promoted an open petition of news organizations to use climate crisis instead of climate change and global warming. The NGO said: "it's time to abandon both terms in culture".
In June 2019, Spanish news agency EFE announced its preferred phrase was "crisis climática". In November 2019, Hindustan Times also adopted the term because climate change "does not correctly reflect the enormity of the existential threat". The Polish newspaper Gazeta Wyborcza also uses the term climate crisis rather than climate change; one of its editors described climate change as one of the most-important topics the paper has ever covered.
Also in June 2019, the Canadian Broadcasting Corporation (CBC) changed its language guide to say: "Climate crisis and climate emergency are OK in some cases as synonyms for 'climate change'. But they're not always the best choice ... For example, 'climate crisis' could carry a whiff of advocacy in certain political coverage". Journalism professor Sean Holman does not agree with this and said in an interview: "It's about being accurate in terms of the scope of the problem that we are facing. And in the media we, generally speaking, don't have any hesitation about naming a crisis when it is a crisis. Look at the opioid epidemic [in the U.S.], for example. We call it an epidemic because it is one. So why are we hesitant about saying the climate crisis is a crisis?"
In June 2019, climate activists demonstrated outside the offices of The New York Times; they urged the newspaper's editors to adopt terms such as climate emergency or climate crisis. This kind of public pressure led New York City Council to make New York the largest city in the world to formally adopt a climate emergency declaration.
In November 2019, the website Oxford Dictionaries named climate crisis Word of the year for 2019. The term was chosen because it matches the "ethos, mood, or preoccupations of the passing year".
In 2021, the Finnish newspaper Helsingin Sanomat created a free variable font called Climate Crisis that has eight weights that correlate with Arctic sea ice decline, visualizing historical changes in ice melt. The newspaper's art director said the font both evokes the aesthetics of environmentalism and is a data visualization graphic.
In updates to the World Scientists' Warning to Humanity of 2021 and 2022, scientists used the terms climate crisis and climate emergency; the title of the publications is "World Scientists' Warning of a Climate Emergency". They said: "we need short, frequent, and easily accessible updates on the climate emergency".
Effectiveness
In September 2019, Bloomberg journalist Emma Vickers said crisis terminology may be "showing results", citing a 2019 poll by The Washington Post and the Kaiser Family Foundation saying 38% of U.S. adults termed climate change "a crisis" while an equal number called it "a major problem but not a crisis". Five years earlier, 23% of U.S. adults considered climate change to be a crisis. Nonetheless, use of crisis terminology in non-binding climate-emergency declarations has been regarded as ineffective in making governments "shift into action".
Concerns about crisis terminology
Emergency framing may have several disadvantages. Such framing may implicitly prioritize climate change over other important social issues, encouraging competition among activists rather than cooperation. It could also de-emphasize dissent within the climate-change movement. Emergency framing may suggest a need for solutions by government, which provides less-reliable long-term commitment than does popular mobilization, and which may be perceived as being "imposed on a reluctant population". Without immediate dramatic effects of climate change, emergency framing may be counterproductive by causing disbelief, disempowerment in the face of a problem that seems overwhelming, and withdrawal.
There could also be a "crisis fatigue" in which urgency to respond to threats loses its appeal over time. Crisis terminology could lose audiences if meaningful policies to address the emergency are not enacted. According to researchers Susan C. Moser and Lisa Dilling of University of Colorado, appeals to fear usually do not create sustained, constructive engagement; they noted psychologists consider human responses to danger—fight, flight or freeze—can be maladaptive if they do not reduce the danger. According to Sander van der Linden, director of the Cambridge Social Decision-Making Lab, fear is a "paralyzing emotion". He favors climate crisis over other terms because it conveys a sense of both urgency and optimism, and not a sense of doom. Van der Linden said: "people know that crises can be avoided and that they can be resolved".
Climate scientist Katharine Hayhoe said in early 2019 crisis framing is only "effective for those already concerned about climate change, but complacent regarding solutions". She added it "is not yet effective" for those who perceive climate activists "to be alarmist Chicken Littles", and that "it would further reinforce their pre-conceived—and incorrect—notions". According to Nick Reimer, journalists in Germany say the word crisis may be misunderstood to mean climate change is "inherently episodic"—crises are "either solved or they pass"—or as a temporary state before a return to normalcy that is not possible. Arnold Schwarzenegger, organizer of the Austrian World Summit for climate action, said people are not motivated by the term climate change; according to Schwarzenegger, focusing on the word pollution might evoke a more-direct and negative connotation. A 2023 U.S. survey found no evidence that climate crisis or climate emergency—terms less familiar to those surveyed—elicit more perceived urgency than climate change or global warming.
Psychological and neuroscientific studies
In 2019, an advertising consulting agency conducted a neuroscientific study involving 120 U.S. people who were equally divided into supporters of the Republican Party, the Democratic Party and independents. The study involved electroencephalography (EEG) and galvanic skin response (GSR) measurements. Responses to the terms climate crisis, environmental destruction, environmental collapse, weather destabilization, global warming and climate change were measured. The study found Democrats had a 60% greater emotional response to climate crisis than to climate change. In Republicans, the emotional response to climate crisis was three times stronger than that for climate change. According to CBS News, climate crisis "performed well in terms of responses across the political spectrum and elicited the greatest emotional response among independents". The study concluded climate crisis elicited stronger emotional responses than neutral and "worn out" terms like global warming and climate change. Climate crisis was found to encourage a sense of urgency, though not a strong-enough response to cause cognitive dissonance that would cause people to generate counterarguments.
Related terminology
Research has shown the naming of a phenomenon and the way it is framed "has a tremendous effect on how audiences come to perceive that phenomenon" and "can have a profound impact on the audience's reaction". Climate change, and its real and hypothetical effects, are usually described in scientific-and-practitioner literature in terms of climate risks.
The many related terms other than climate crisis include:
climate catastrophe (used with reference to a 2019 David Attenborough documentary, the 2019–20 Australian bushfire season, and the 2022 Pakistan floods)
threats that impact the earth (World Wildlife Fund, 2012—)
climate breakdown (climate scientist Peter Kalmus, 2018)
climate chaos ("The New York Times" article title, 2019; U.S. Democratic candidates, 2019; and an Ad Age marketing team, 2019)
climate ruin (U.S. Democratic candidates, 2019)
global heating (Richard A. Betts, Met Office U.K., 2018)
global overheating (Public Citizen, 2019)
climate emergency (11,000 scientists' warning letter in BioScience, and in The Guardian, both 2019),
ecological breakdown, ecological crisis and ecological emergency (all set forth by climate activist Greta Thunberg, 2019)
global meltdown, Scorched Earth, The Great Collapse, and Earthshattering (an Ad Age marketing team, 2019)
climate disaster (The Guardian, 2019)
environmental Armageddon (Fiji Prime Minister Frank Bainimarama)
climate calamity (Los Angeles Times, 2022)
climate havoc (The New York Times, 2022)
climate pollution, carbon pollution (Grist, 2022)
global boiling (U.N. Secretary-General António Guterres speech, July 2023)
climate breaking point (Stuart P.M. Mackintosh, The Hill, August 2023)
(Has humanity) broken the climate (The Guardian, August 2023)
(climate) abyss (spokesman for the United Nations secretary general, May 2024)
climate hell (U.N. Secretary-General António Guterres, June 2024)
In addition to climate crisis, other terms have been investigated for their effects upon audiences, including global warming, climate change, climatic disruption, environmental destruction, weather destabilization and environmental collapse.
In 2022, The New York Times journalist Amanda Hess said "end of the world" characterizations of the future, such as climate apocalypse, are often used to refer to the current climate crisis, and that the characterization is spreading from "the ironized hellscape of the internet" to books and film.
See also
Footnotes
References
Further reading
External links
Covering Climate Now (CCNow), a collaboration among news organizations "to produce more informed and urgent climate stories" (archive)
Protection | Protection is any measure taken to guard a thing against damage caused by outside forces. Protection can be provided to physical objects, including organisms, to systems, and to intangible things like civil and political rights. Although the mechanisms for providing protection vary widely, the basic meaning of the term remains the same. This is illustrated by an explanation found in a manual on electrical wiring:
Some kind of protection is a characteristic of all life, as living things have evolved at least some protective mechanisms to counter damaging environmental phenomena, such as ultraviolet light. Biological membranes such as bark on trees and skin on animals offer protection from various threats, with skin playing a key role in protecting organisms against pathogens and excessive water loss. Additional structures like scales and hair offer further protection from the elements and from predators, with some animals having features such as spines or camouflage serving exclusively as anti-predator adaptations. Many animals supplement the protection afforded by their physiology by burrowing or otherwise adopting habitats or behaviors that insulate them from potential sources of harm. Humans originally began wearing clothing and building shelters in prehistoric times for protection from the elements. Both humans and animals are also often concerned with the protection of others, with adult animals being particularly inclined to seek to protect their young from elements of nature and from predators.
In the human sphere of activity, the concept of protection has been extended to nonliving objects, including technological systems such as computers, and to intangible things such as intellectual property, beliefs, and economic systems. Humans seek to protect locations of historical and cultural significance through historic preservation efforts, and are also concerned with protecting the environment from damage caused by human activity, and with protecting the Earth as a whole from potentially harmful objects from space.
Physical protection
Protection of objects
Fire protection, including passive fire protection measures such as physical firewalls and fireproofing, and active fire protection measures, such as fire sprinkler systems.
Waterproofing, through application of surface layers that repel water.
Rot-proofing and rustproofing
Thermal conductivity resistance
Impact resistance
Radiation protection, protection of people and the environment from radiation
Dust resistance
Conservation and restoration of immovable cultural property, including a large number of techniques to preserve sites of historical or archaeological value
Protection of persons
Close protection, the physical protection of very important persons from danger
Climbing protection, safety measures in climbing
Diplomatic protection
Humanitarian protection, the protection of civilians, in conflict zones and other humanitarian crises
Journalism source protection
Personal protective equipment
Safe sex practices to afford sexual protection against pregnancy and disease, particularly the use of condoms
Executive protection, security measures taken to ensure the safety of important persons
Protection racket, a criminal scheme involving exchanging money for "protection" against violence
Right of asylum, protection for those seeking asylum from persecution by political groups and to ensure safe passage
Protection from workplace or employment retaliation, such as being fired for opposing, aiding complaints about, or complaining about workplace practices
Protection of systems
Protection of technological systems
Protection of technological systems is often symbolized by the use of a padlock icon, such as "🔒", or a padlock image.
Protection mechanism, in computer science. In computer science, the separation of protection and security is a design choice: William Wulf identified protection as a mechanism and security as a policy.
Power-system protection, in power engineering
Protected access modifiers, a way of achieving encapsulation in object-oriented programming (see the sketch below)
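As a minimal illustration of the object-oriented sense of protection noted above, many languages offer access modifiers (such as protected members) to encapsulate internal state. Python does not enforce access control, so the sketch below relies on the conventional single-underscore prefix and a read-only property; the class and attribute names are purely illustrative:

```python
class Account:
    """Encapsulates a balance behind a narrow interface."""

    def __init__(self, opening_balance=0):
        # "Protected" by convention: the underscore signals it is not part of the public API.
        self._balance = opening_balance

    @property
    def balance(self):
        # Read-only view; callers cannot assign to it directly.
        return self._balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


acct = Account(100)
acct.deposit(50)
print(acct.balance)   # 150
# acct.balance = 0    # would raise AttributeError: the property has no setter
```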
Protection of ecological systems
Environmental protection, the practice of protecting the natural environment
Protection of social systems
Consumer protection, laws governing sales and credit practices involving the public.
Protectionism, an economic policy of protecting a country's market from competitors.
Protection of rights, with respect to civil and political rights.
Data protection through information privacy measures.
Intellectual property protection.
See also
Safety
Security
References
Safety | 0.785574 | 0.983983 | 0.772991 |
Gamification | Gamification is the strategic attempt to enhance systems, services, organizations, and activities by creating similar experiences to those experienced when playing games in order to motivate and engage users. This is generally accomplished through the application of game design elements and game principles (dynamics and mechanics) in non-game contexts.
Gamification is part of persuasive system design, and it commonly employs game design elements to improve user engagement, organizational productivity, flow, learning, crowdsourcing, knowledge retention, employee recruitment and evaluation, ease of use, usefulness of systems, physical exercise, traffic violations, voter apathy, public attitudes about alternative energy, and more. A collection of research on gamification shows that a majority of studies on gamification find it has positive effects on individuals. However, individual and contextual differences exist.
Gamification can be achieved using different game mechanics and elements which can be linked to 8 core drives when using the Octalysis framework.
Techniques
Gamification techniques are intended to leverage people's natural desires for socializing, learning, mastery, competition, achievement, status, self-expression, altruism, or closure, or simply their response to the framing of a situation as game or play. Early gamification strategies use rewards for players who accomplish desired tasks or competition to engage players. Types of rewards include points, achievement badges or levels, the filling of a progress bar, or providing the user with virtual currency. Making the rewards for accomplishing tasks visible to other players or providing leader boards are ways of encouraging players to compete.
Another approach to gamification is to make existing tasks feel more like games. Some techniques used in this approach include adding meaningful choice, onboarding with a tutorial, increasing challenge, and adding narrative.
Game elements
Game elements are the basic building blocks of gamification applications. Typical game design elements include points, badges, leaderboards, performance graphs, meaningful stories, avatars, and teammates. According to Chou, whose Octalysis framework is applied to gamification, experience points (XP), badges, and progress indicators can significantly enhance user engagement and productivity in business learning programs.
Points
Points are basic elements of a multitude of games and gamified applications. They are typically rewarded for the successful accomplishment of specified activities within the gamified environment and they serve to numerically represent a player's progress. Various kinds of points can be distinguished, e.g. experience points, redeemable points, or reputation points, as can the different purposes that points serve. One of the most important purposes of points is to provide feedback. Points allow the players' in-game behavior to be measured, and they serve as continuous and immediate feedback and as a reward.
Badges
Badges are defined as visual representations of achievements and can be earned and collected within the gamification environment. They confirm the players' achievements, symbolize their merits, and visibly show their accomplishment of levels or goals. Earning a badge can be dependent on a specific number of points or on particular activities within the game. Badges have many functions, serving as goals, if the prerequisites for winning them are known to the player, or as virtual status symbols. In the same way as points, badges also provide feedback, in that they indicate how the players have performed. Badges can influence players' behavior, leading them to select certain routes and challenges in order to earn badges that are associated with them. Additionally, as badges symbolize one's membership in a group of those who own this particular badge, they also can exert social influences on players and co-players, particularly if they are rare or hard to earn.
Leaderboards
Leaderboards rank players according to their relative success, measuring them against a certain success criterion. As such, leaderboards can help determine who performs best in a certain activity and are thus competitive indicators of progress that relate the player's own performance to the performance of others. However, the motivational potential of leaderboards is mixed. Werbach and Hunter regard them as effective motivators if there are only a few points left to the next level or position, but as demotivators, if players find themselves at the bottom end of the leaderboard. Competition caused by leaderboards can create social pressure to increase the player's level of engagement and can consequently have a constructive effect on participation and learning. However, these positive effects of competition are more likely if the respective competitors are approximately at the same performance level.
Performance graphs
Performance graphs, which are often used in simulation or strategy games, provide information about the players' performance compared to their preceding performance during a game. Thus, in contrast to leaderboards, performance graphs do not compare the player's performance to other players, but instead, evaluate the player's own performance over time. Unlike the social reference standard of leaderboards, performance graphs are based on an individual reference standard. By graphically displaying the player's performance over a fixed period, they focus on improvements. Motivation theory postulates that this fosters mastery orientation, which is particularly beneficial to learning.
Meaningful stories
Meaningful stories are game design elements that do not relate to the player's performance. The narrative context in which a gamified application can be embedded contextualizes activities and characters in the game and gives them meaning beyond the mere quest for points and achievements. A story can be communicated by a game's title (e.g., Space Invaders) or by complex storylines typical of contemporary role-playing video games (e.g., The Elder Scrolls Series). Narrative contexts can be oriented towards real, non-game contexts or act as analogies of real-world settings. The latter can enrich boring, barely stimulating contexts, and, consequently, inspire and motivate players particularly if the story is in line with their personal interests. As such, stories are also an important part in gamification applications, as they can alter the meaning of real-world activities by adding a narrative 'overlay', e.g. being hunted by zombies while going for a run.
Avatars
Avatars are visual representations of players within the game or gamification environment. Usually, they are chosen or even created by the player. Avatars can be designed quite simply as a mere pictogram, or they can be complexly animated, three-dimensional representations. Their main formal requirement is that they unmistakably identify the players and set them apart from other human or computer-controlled avatars. Avatars allow the players to adopt or create another identity and, in cooperative games, to become part of a community.
Teammates
Teammates, whether they are other real players or virtual non-player characters, can induce conflict, competition or cooperation. The latter can be fostered particularly by introducing teams, i.e. by creating defined groups of players that work together towards a shared objective. Meta-analytic evidence supports that the combination of competition and collaboration in games is likely to be effective for learning.
Game element hierarchy
The described game elements fit within a broader framework, which involves three types of elements: dynamics, mechanics, and components. These elements constitute the hierarchy of game elements.
Dynamics are the highest in the hierarchy. They are the big picture aspects of the gamified system that should be considered and managed; however, they never directly enter into the game. Dynamics elements provide motivation through features such as narrative or social interaction.
Mechanics are the basic processes that drive the action forward and generate player engagement and involvement. Examples are chance, turns, and rewards.
Components are the specific instantiations of mechanics and dynamics; elements like points, quests, and virtual goods.
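To make the component level of this hierarchy concrete, the following is a minimal sketch of points awarded per activity, badges unlocked at score thresholds, and a leaderboard derived from the scores; the class, activity, and badge names are hypothetical and not drawn from any particular gamification platform:

```python
from collections import defaultdict

# Hypothetical mechanics: points per activity and badge thresholds
POINTS = {"complete_lesson": 10, "post_answer": 5, "daily_login": 1}
BADGES = {100: "Apprentice", 500: "Expert"}

class GamifiedSystem:
    def __init__(self):
        self.scores = defaultdict(int)
        self.badges = defaultdict(set)

    def record(self, player, activity):
        """Award points for an activity and unlock any badges the player now qualifies for."""
        self.scores[player] += POINTS.get(activity, 0)
        for threshold, badge in BADGES.items():
            if self.scores[player] >= threshold:
                self.badges[player].add(badge)

    def leaderboard(self, top=3):
        """Rank players by score, highest first."""
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

g = GamifiedSystem()
for _ in range(12):
    g.record("alice", "complete_lesson")
g.record("bob", "post_answer")
print(g.leaderboard())        # [('alice', 120), ('bob', 5)]
print(g.badges["alice"])      # {'Apprentice'}
```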
Applications
Gamification has been applied to almost every aspect of life. Examples of gamification in business context include the U.S. Army, which uses military simulator America's Army as a recruitment tool, and M&M's "Eye Spy" pretzel game, launched in 2013 to amplify the company's pretzel marketing campaign by creating a fun way to "boost user engagement." Another example can be seen in the American education system. Students are ranked in their class based on their earned grade-point average (GPA), which is comparable to earning a high score in video games. Students may also receive incentives, such as an honorable mention on the dean's list, the honor roll, and scholarships, which are equivalent to leveling-up a video game character or earning virtual currency or tools that augment game success.
Job application processes sometimes use gamification as a way to hire employees by assessing their suitability through questionnaires and mini games that simulate the actual work environment of that company.
Marketing
Gamification has been widely applied in marketing. Over 70% of Forbes Global 2000 companies surveyed in 2013 said they planned to use gamification for the purposes of marketing and customer retention. For example, in November, 2011, Australian broadcast and online media partnership Yahoo!7 launched its Fango mobile app/SAP, which TV viewers use to interact with shows via techniques like check-ins and badges. Gamification has also been used in customer loyalty programs. In 2010, Starbucks gave custom Foursquare badges to people who checked in at multiple locations, and offered discounts to people who checked in most frequently at an individual store. As a general rule Gamification Marketing or Game Marketing usually falls under four primary categories;
1. Brandification (in-game advertising): messages, images or videos promoting a brand, product or service within a game's visual components. According to NBC News, game publisher Electronic Arts used in-game billboards in "Madden 09" and "Burnout Paradise" to carry advertisements encouraging players to vote.
2. Transmedia: the result of taking a media property and extending it into a different medium for both promotional and monetisation purposes. Nintendo's "GoldenEye 007" is a classic example: a video game created to promote the film of the same name. In the end, the tie-in game brought in more money than the film itself.
3. Through-the-line (TTL) and below-the-line (BTL): advertising images or text placed above, beside, or below the main game screen (also known as an iFrame). An example of this is "I Love Bees".
4. Advergames: usually games based on popular mobile game templates, such as 'Candy Crush' or 'Temple Run'. These games are then recreated via platforms like WIX with software from the likes of Gamify in order to promote brands, products and services, usually to encourage engagement, loyalty and product education. These usually involve social leaderboards and rewards that are advertised via social media platforms like Facebook's Top 10 games.
Gamification also has been used as a tool for customer engagement, and for encouraging desirable website usage behaviour. Additionally, gamification is applicable to increasing engagement on sites built on social network services. For example, in August, 2010, the website builder DevHub announced an increase in the number of users who completed their online tasks from 10% to 80% after adding gamification elements. On the programming question-and-answer site Stack Overflow users receive points and/or badges for performing a variety of actions, including spreading links to questions and answers via Facebook and Twitter. A large number of different badges are available, and when a user's reputation points exceed various thresholds, the user gains additional privileges, eventually including moderator privileges.
Inspiration
Gamification can be used for ideation (structured brainstorming to produce new ideas). A study at MIT Sloan found that ideation games helped participants generate more and better ideas, and compared it to gauging the influence of academic papers by the number of citations received in subsequent research.
Health
Applications like Fitocracy and QUENTIQ (Dacadoo) use gamification to encourage their users to exercise more effectively and improve their overall health. Users are awarded varying numbers of points for activities they perform in their workouts, and gain levels based on points collected. Users can also complete quests (sets of related activities) and gain achievement badges for fitness milestones. Health Month adds aspects of social gaming by allowing successful users to restore points to users who have failed to meet certain goals. Public health researchers have studied the use of gamification in self-management of chronic diseases and common mental disorders, STD prevention, and infection prevention and control.
In a review of health apps in the 2014 Apple App Store, more than 100 apps showed a positive correlation between gamification elements used and high user ratings. MyFitnessPal was named as the app that used the most gamification elements.
Reviewers of the popular location-based game Pokémon Go praised the game for promoting physical exercise. Terri Schwartz (IGN) said it was "secretly the best exercise app out there," and that it changed her daily walking routine. Patrick Allen (Lifehacker) wrote an article with tips about how to work out using Pokémon Go. Julia Belluz (Vox) said it could be the "greatest unintentional health fad ever," writing that one of the results of the game that the developers may not have imagined was that "it seems to be getting people moving." One study showed users took an extra 194 steps per day once they started using the app, approximately 26% more than usual. Ingress is a similar game that also requires a player to be physically active. Zombies, Run!, a game in which the player is trying to survive a zombie apocalypse through a series of missions, requires the player to (physically) run, collect items to help the town survive, and listen to various audio narrations to uncover mysteries. Mobile, context-sensitive serious games for sports and health have been called exergames.
Work
Gamification has been used in an attempt to improve employee productivity in healthcare, financial services, transportation, government, and others. In general, enterprise gamification refers to work situations where "game thinking and game-based tools are used in a strategic manner to integrate with existing business processes or information systems. And these techniques are used to help drive positive employee and organizational outcomes."
Crowdsourcing
Crowdsourcing has been gamified in games like Foldit, a game designed by the University of Washington, in which players compete to manipulate proteins into more efficient structures. A 2010 paper in science journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions. The ESP Game is a game that is used to generate image metadata. Google Image Labeler is a version of the ESP Game that Google has licensed to generate its own image metadata. Research from the University of Bonn used gamification to increase wiki contributions by 62%.
In the context of online crowdsourcing, gamification is also employed to improve the psychological and behavioral outcomes for solvers. According to numerous studies, adding gamification components to a crowdsourcing platform can be seen as a design choice that shifts participants' focus from task completion to involvement motivated by intrinsic factors. Since the success of crowdsourcing competitions depends on a large number of participating solvers, crowdsourcing platforms draw on the concepts of games to provide motivating factors that increase participation.
Education and training
Gamification in the context of education and training is of particular interest because it offers a variety of benefits associated with learning outcomes and retention. Using video-game inspired elements like leaderboards and badges has been shown to be effective in engaging large groups and providing objectives for students to achieve outside of traditional norms like grades or verbal feedback. Online learning platforms such as Khan Academy and even physical schools like New York City Department of Education's Quest to Learn use gamification to motivate students to complete mission-based units and master concepts. There is also an increasing interest in the use of gamification in health sciences and education as an engaging information delivery tool and in order to add variety to revision.
With increased access to one-to-one student devices, and accelerated by pressure from the COVID-19 pandemic, many teachers from primary to post-secondary settings have introduced live, online quiz-show style games into their lessons.
Gamification has also been used to promote learning outside of schools. In August 2009, Gbanga launched a game for the Zurich Zoo where participants learned about endangered species by collecting animals in mixed reality. Companies seeking to train their customers to use their product effectively can showcase features of their products with interactive games like Microsoft's Ribbon Hero 2.
A wide range of employers including the United States Armed Forces, Unilever, and SAP currently use gamified training modules to educate their employees and motivate them to apply what they learned in trainings to their job. According to a study conducted by Badgeville, 78% of workers are utilizing games-based motivation at work and nearly 91% say these systems improve their work experience by increasing engagement, awareness and productivity. In occupational safety training, technology can provide realistic and effective simulations of real-life experiences, making safety training less passive and more engaging, more flexible in terms of time management, and a cost-effective alternative to hands-on practice. The combined use of virtual reality and gamification can provide more effective solutions in terms of knowledge acquisition and retention compared with traditional training methods.
Politics and terrorist groups
Alix Levine, an American security consultant, reports that some techniques that a number of extremist websites such as Stormfront and various terrorism-related sites used to build loyalty and participation can be described as gamification. As an example, Levine mentioned reputation scores.
The Chinese government has announced that it will begin using gamification to rate its citizens in 2020, implementing a Social Credit System in which citizens will earn points representing trustworthiness. Details of this project are still vague, but it has been reported that citizens will receive points for good behavior, such as making payments on time and educational attainments.
Bellingcat contributor Robert Evans has written about the "gamification of terror" in the wake of the El Paso shooting, in an analysis of the role 8Chan and similar boards played in inspiring the massacre, as well as other acts of terrorism and mass shootings. According to Evans, "[w]hat we see here is evidence of the only real innovation 8chan has brought to global terrorism: the gamification of mass violence. We see this not just in the references to "high scores", but in the very way the Christchurch shooting was carried out. Brenton Tarrant livestreamed his massacre from a helmet cam in a way that made the shooting look almost exactly like a First Person Shooter video game. This was a conscious choice, as was his decision to pick a sound-track for the spree that would entertain and inspire his viewers."
Technology design
Traditionally, researchers thought of the motivation to use computer systems as primarily driven by extrinsic purposes; however, many modern systems are used primarily for intrinsic reasons. Examples of systems used primarily to fulfill users' intrinsic motivations include online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, and so on. Such systems are excellent candidates for further 'gamification' in their design. Moreover, even traditional management information systems (e.g., ERP, CRM) are being 'gamified', such that both extrinsic and intrinsic motivations must increasingly be considered.
As illustration, Microsoft has announced plans to use gamification techniques for its Windows Phone 7 operating system design. While businesses face the challenges of creating motivating gameplay strategies, what makes for effective gamification is a key question.
One important type of technological design in gamification is player-centered design. Based on the methodology of user-centered design, its main goal is to promote connectivity and positive behavior change among technology users, helping computer users connect with other people online to accomplish the goals and tasks they need to complete. It follows five steps: an individual or company has to know their player (their target audience); identify their mission (their goal); understand human motivation (the personality, desires, and triggers of the target audience); apply mechanics (points, badges, leaderboards, etc.); and manage, monitor, and measure the way those mechanics are used, to ensure they help achieve a goal that is specific and realistic.
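As a concrete illustration of the "apply mechanics" step, the following sketch combines points, badges, and a leaderboard in TypeScript. It is a minimal, hypothetical example: the type names, point thresholds, and badge labels are assumptions made for illustration and do not describe any particular gamification platform.

```typescript
// Minimal sketch of common gamification mechanics: points, badges, and a leaderboard.
// All names and thresholds are illustrative assumptions, not a real platform's API.

interface Player {
  id: string;
  points: number;
  badges: Set<string>;
}

const players = new Map<string, Player>();

// Award points for a completed task and grant badges at illustrative thresholds.
function awardPoints(playerId: string, amount: number): void {
  const player: Player =
    players.get(playerId) ?? { id: playerId, points: 0, badges: new Set<string>() };
  player.points += amount;
  if (player.points >= 100) player.badges.add("Contributor");
  if (player.points >= 1000) player.badges.add("Expert");
  players.set(playerId, player);
}

// The leaderboard is simply the players sorted by points, highest first.
function leaderboard(top = 10): Player[] {
  return [...players.values()].sort((a, b) => b.points - a.points).slice(0, top);
}

// Example usage: two players complete tasks of different value.
awardPoints("alice", 120);
awardPoints("bob", 80);
console.log(leaderboard());
```

Because the mechanics are explicit in such a design, the points awarded, badges earned, and leaderboard positions can themselves be monitored and measured against the stated goal, which is the final step of the method above.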
Authentication
Gamification has also been applied to authentication. Games have been proposed as a way for users to learn new and more complicated passwords. Gamification has also been proposed as a way to select and manage archives.
Online gambling
Gamification has been used to some extent by online casinos. Some brands use an incremental reward system to extend the typical player lifecycle and to encourage repeat visits and cash deposits at the casino in return for rewards such as free spins and cash match bonuses on subsequent deposits.
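To make the incremental reward idea concrete, the sketch below models a simple loyalty ladder in TypeScript. The tier thresholds, free-spin counts, and bonus percentages are invented for illustration and do not reflect any real operator's scheme.

```typescript
// Illustrative sketch of an incremental ("loyalty ladder") reward scheme.
// Tier thresholds and rewards are assumptions made for the example.

type Reward = { freeSpins: number; matchBonusPercent: number };

const tiers: Array<{ minDeposits: number; reward: Reward }> = [
  { minDeposits: 1, reward: { freeSpins: 10, matchBonusPercent: 0 } },
  { minDeposits: 5, reward: { freeSpins: 25, matchBonusPercent: 10 } },
  { minDeposits: 10, reward: { freeSpins: 50, matchBonusPercent: 25 } },
];

// Return the best reward a player qualifies for after a given number of deposits.
function rewardFor(depositCount: number): Reward | undefined {
  return tiers
    .filter(tier => depositCount >= tier.minDeposits)
    .map(tier => tier.reward)
    .pop(); // tiers are ordered ascending, so the last match is the highest tier
}

console.log(rewardFor(7)); // { freeSpins: 25, matchBonusPercent: 10 }
```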
History
The term "gamification" first appeared online in the context of computer software in 2008. Gamification did not gain popularity until 2010. Even prior to the term coming into use, other fields borrowing elements from videogames was common; for example, some work in learning disabilities and scientific visualization adapted elements from videogames.
The term "gamification" first gained widespread usage in 2010, in a more specific sense referring to incorporation of social/reward aspects of games into software. The technique captured the attention of venture capitalists, one of whom said he considered gamification the most promising area in gaming. Another observed that half of all companies seeking funding for consumer software applications mentioned game design in their presentations.
Several researchers consider gamification closely related to earlier work on adapting game-design elements and techniques to non-game contexts. Deterding et al. survey research in human–computer interaction that uses game-derived elements for motivation and interface design, and Nelson argues for a connection to both the Soviet concept of socialist competition, and the American management trend of "fun at work". Fuchs points out that gamification might be driven by new forms of ludic interfaces. Gamification conferences have also retroactively incorporated simulation; e.g. Will Wright, designer of the 1989 video game SimCity, was the keynote speaker at the gamification conference Gsummit 2013.
In addition to companies that use the technique, a number of businesses created gamification platforms. In October 2007, Bunchball, backed by Adobe Systems Incorporated, was the first company to provide game mechanics as a service, on Dunder Mifflin Infinity, the community site for the NBC TV show The Office. Bunchball customers have included Playboy, Chiquita, Bravo, and The USA Network. Badgeville, which offers gamification services, launched in late 2010, and raised $15 million in venture-capital funding in its first year of operation.
Gabe Zichermann coined "funware" as an alternative term for gamification.
Gamification as an educational and behavior modification tool reached the public sector by 2012, when the United States Department of Energy co-funded multiple research trials, including consumer behavior studies, adapting the format of Programmed learning into mobile microlearning to experiment with the impacts of gamification in reducing energy use. Cultural anthropologist Susan Mazur-Stommen published a business case for applying games to addressing climate change and sustainability, delivering research which "...took many forms including card-games (Cool Choices), videogames (Ludwig), and games for mobile devices such as smartphones (Ringorang) [p.9]."
Gamification 2013, an event exploring the future of gamification, was held at the University of Waterloo Stratford Campus in October 2013.
Legal restrictions
Through gamification's growing adoption and its nature as a data aggregator, multiple legal restrictions may apply to gamification. Some concern the use of virtual currencies and virtual assets, others data privacy laws and data protection, or labor laws.
The use of virtual currencies, in contrast to traditional payment systems, is not regulated. The legal uncertainty surrounding the virtual currency schemes might constitute a challenge for public authorities, as these schemes can be used by criminals, fraudsters and money launderers to perform their illegal activities.
A March 2022 consultation paper by the Board of the International Organization of Securities Commissions (IOSCO) questions whether some gamification tactics should be banned.
Criticism
University of Hamburg researcher Sebastian Deterding has characterized the initial popular strategies for gamification as not being fun and creating an artificial sense of achievement. He also says that gamification can encourage unintended behaviours.
Poorly designed gamification in the workplace has been compared to Taylorism, and is considered a form of micromanagement.
In a 2014 review of 132 of the top health and fitness apps in the Apple App Store that used gamification as a method to modify behavior, the authors concluded that "Despite the inclusion of at least some components of gamification, the mean scores of integration of gamification components were still below 50 percent. This was also true for the inclusion of game elements and the use of health behavior theory constructs, thus showing a lack of following any clear industry standard of effective gaming, gamification, or behavioral theory in health and fitness apps."
Concern was also expressed in a 2016 study analyzing outcome data from 1,298 users who competed in gamified and incentivized exercise challenges while wearing wearable devices. In that study the authors conjectured that data may be highly skewed by cohorts of already healthy users, rather than the intended audiences of participants requiring behavioral intervention.
Game designers like Jon Radoff and Margaret Robertson have also criticized gamification as excluding elements like storytelling and experiences and using simple reward systems in place of true game mechanics.
Gamification practitioners have pointed out that while the initial popular designs did in fact mostly rely on a simplistic reward approach, even those led to significant improvements in short-term engagement. This was supported by the first comprehensive study in 2014, which concluded that an increase in gamification elements correlated with an increase in motivation score, but not with capacity or opportunity/trigger scores.
The same study called for standardization across the app industry on gamification principles to improve the effectiveness of health apps on the health outcomes of users.
MIT Professor Kevin Slavin has described business research into gamification as flawed and misleading for those unfamiliar with gaming. Heather Chaplin, writing in Slate, describes gamification as "an allegedly populist idea that actually benefits corporate interests over those of ordinary people". Jane McGonigal has distanced her work from the label "gamification", listing rewards outside of gameplay as the central idea of gamification and distinguishing game applications where the gameplay itself is the reward under the term "gameful design".
"Gamification" as a term has also been criticized. Ian Bogost has referred to the term as a marketing fad and suggested "exploitation-ware" as a more suitable name for the games used in marketing. Other opinions on the terminology criticism have made the case why the term gamification makes sense.
In an article in the LA Times, the gamification of worker engagement at Disneyland was described as an "electronic whip". Workers had reported feeling controlled and overworked by the system.
See also
Bartle taxonomy of player types
BrainHex
Dark pattern
Egoboo, a component of some gamification strategies
Gamification of learning
GNS theory
Notes
References
Further reading
Boller, Sharon; Kapp, Karl M. (2017). Play to Learn: Everything You Need to Know About Designing Effective Learning Games. ISBN 978-1562865771.
Gray, Dave; Brown, Sunni; Macanufo, James (2010). Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers. ISBN 978-0596804176.
Routledge, Helen (2015). Why Games Are Good For Business: How to Leverage the Power of Serious Games, Gamification and Simulations. Palgrave Macmillan.
Technological convergence
Technological convergence is the tendency for technologies that were originally unrelated to become more closely integrated and even unified as they develop and advance. For example, watches, telephones, television, computers, and social media platforms began as separate and mostly unrelated technologies, but have converged in many ways into an interrelated telecommunication, media, and technology industry.
Definitions
"Convergence is a deep integration of knowledge, tools, and all relevant activities of human activity for a common goal, to allow society to answer new questions to change the respective physical or social ecosystem. Such changes in the respective ecosystem open new trends, pathways, and opportunities in the following divergent phase of the process".
Siddhartha Menon defines convergence as integration and digitalization. Integration, here, is defined as "a process of transformation measured by the degree to which diverse media such as phone, data broadcast and information technology infrastructures are combined into a single seamless all purpose network architecture platform". Digitalization is defined not so much by its physical infrastructure as by the content or the medium. Jan van Dijk suggests that "digitalization means breaking down signals into bytes consisting of ones and zeros".
Convergence is defined by Blackman (1998) as a trend in the evolution of technology services and industry structures. Convergence is later defined more specifically as the coming together of telecommunications, computing and broadcasting into a single digital bit-stream.
Mueller argues against the claim that convergence is really a takeover of all forms of media by one technology: digital computers.
Acronyms
Some acronyms for converging scientific or technological fields include:
NBIC (Nanotechnology, Biotechnology, Information technology and Cognitive science)
GNR (Genetics, Nanotechnology and Robotics)
GRIN (Genetics, Robotics, Information, and Nano processes)
GRAIN (Genetics, Robotics, Artificial Intelligence, and Nanotechnology)
BANG (Bits, Atoms, Neurons, Genes)
Biotechnology
A 2010 citation analysis of patent data shows that biomedical devices are strongly connected to computing and mobile telecommunications, and that molecular bioengineering is strongly connected to several IT fields.
Bioconvergence is the integration of biology with engineering. Possible areas of bioconvergence include:
Materials inspired by biology (such as in electronics)
DNA data storage
Medical technologies:
Omics-based profiling
Miniaturized drug delivery
Tissue reconstruction
Traceable pharmaceutical packaging
More efficient bioreactors
Digital convergence
Digital convergence is the inclination for various digital innovations and media to become more similar with time. It enables the convergence of access devices and content, as well as of industry participants' operations and strategies. In this way, this type of technological convergence creates opportunities, particularly in product development and growth strategies for digital product companies. The same can be said of individual content creators, such as vloggers on YouTube. The convergence in this example is demonstrated by the involvement of the Internet, home devices such as a smart television and camera, the YouTube application, and digital content. In this setup, the so-called "spokes" are the devices that connect to a central hub (such as a PC or smart TV). Here, the Internet serves as the intermediary, particularly through its interactivity tools and social networking, to create unique mixes of products and services via horizontal integration.
The above example highlights how digital convergence encompasses three phenomena:
previously stand-alone devices are being connected by networks and software, significantly enhancing functionalities;
previously stand-alone products are being converged onto the same platform, creating hybrid products in the process; and,
companies are crossing traditional boundaries such as hardware and software to provide new products and new sources of competition.
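The hub-and-spoke arrangement described above can be pictured with a small sketch: "spoke" devices register with a central hub, which then presents one converged catalogue of their content. The class and device names below are hypothetical and only illustrate the structure.

```typescript
// Illustrative hub-and-spoke sketch: spoke devices register with a central hub,
// and the hub aggregates the content they expose. Names are invented for the example.

interface Spoke {
  name: string;
  content: string[];
}

class Hub {
  private spokes: Spoke[] = [];

  register(spoke: Spoke): void {
    this.spokes.push(spoke);
  }

  // The hub presents one converged catalogue drawn from every connected device.
  catalogue(): string[] {
    return this.spokes.flatMap(s => s.content.map(item => `${s.name}: ${item}`));
  }
}

const hub = new Hub();
hub.register({ name: "camera", content: ["holiday-video.mp4"] });
hub.register({ name: "smart TV", content: ["streaming app"] });
console.log(hub.catalogue());
```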
Another example is the convergence of different types of digital content. According to Harry Strasser, former CTO of Siemens, "[digital convergence will substantially impact people's lifestyle and work style]".
Cellphones
The functions of the cellphone change as technology converges. Because of technological advancement, a cellphone functions as more than just a phone: it can also contain an Internet connection, video players, MP3 players, games, and a camera. Cellphones' areas of use have increased over time, partly substituting for other devices.
A mobile convergence device is one that, if connected to a keyboard, monitor, and mouse, can run applications as a desktop computer would. Convergent operating systems include the Linux operating systems Ubuntu Touch, Plasma Mobile and PureOS.
Convergence can also refer to being able to run the same app across different devices, and to developing apps for different devices (such as smartphones, TVs and desktop computers) at once, from the same code base. This can be done via Linux applications that adapt to the device they are being used on (including native apps designed for this via frameworks like Kirigami), or by the use of multi-platform frameworks like the Quasar framework, which builds on tools such as Apache Cordova, Electron and Capacitor. Such approaches can increase the user base, the pace and ease of development, and the number of reached platforms, while decreasing development costs.
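A minimal sketch of the "same code base, many devices" idea is given below in plain TypeScript. The platform-detection logic is deliberately simplified and is an assumption made for the example; real frameworks such as Capacitor or Electron expose their own platform APIs, which are not reproduced here.

```typescript
// Sketch of a single code base adapting its layout to the device it runs on.
// The user-agent heuristics below are simplified assumptions for illustration.

type Platform = "desktop" | "mobile" | "tv";

function detectPlatform(): Platform {
  // When there is no browser environment (e.g. a desktop shell's main process), assume desktop.
  if (typeof navigator === "undefined") return "desktop";
  const ua = navigator.userAgent.toLowerCase();
  if (ua.includes("smart-tv") || ua.includes("smarttv")) return "tv";
  if (/android|iphone|ipad/.test(ua)) return "mobile";
  return "desktop";
}

// The same entry point selects a different layout per platform.
function startApp(): void {
  const layouts: Record<Platform, string> = {
    desktop: "three-column",
    mobile: "single-column",
    tv: "lean-back",
  };
  console.log(`Starting app with ${layouts[detectPlatform()]} layout`);
}

startApp();
```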
The Internet
The role of the Internet has changed from its original use as a communication tool to easier and faster access to information and services, mainly through a broadband connection. The television, radio and newspapers were the world's media for accessing news and entertainment; now, all three media have converged into one, and people all over the world can read and hear news and other information on the Internet. The convergence of the Internet and conventional TV became popular in the 2010s, through Smart TV, also sometimes referred to as "Connected TV" or "Hybrid TV", (not to be confused with IPTV, Internet TV, or with Web TV). Smart TV is used to describe the current trend of integration of the Internet and Web 2.0 features into modern television sets and set-top boxes, as well as the technological convergence between computers and these television sets or set-top boxes. These new devices most often also have a much higher focus on online interactive media, Internet TV, over-the-top content, as well as on-demand streaming media, and less focus on traditional broadcast media like previous generations of television sets and set-top boxes always have had.
Social movements
The integration of social movements in cyberspace is one of the potential strategies that social movements can use in the age of media convergence. Because of the neutrality of the Internet and its end-to-end design, the power structure of the Internet was designed to avoid discrimination between applications. Mexico's Zapatista campaign for land rights was one of the most influential cases of the information age; Manuel Castells defines the Zapatistas as "the first informational guerrilla movement". Although the Zapatista uprising had been marginalized by the popular press, the Zapatistas were able to construct a grassroots, decentralized social movement by using the Internet. The Zapatista effect, observed by Cleaver, continues to organize social movements on a global scale. A sophisticated webmetric analysis, which maps the links between different websites and seeks to identify important nodal points in a network, demonstrates that the Zapatista cause binds together hundreds of global NGOs. The majority of the social movements organized by the Zapatistas target their campaigns against global neoliberalism. A successful social movement needs not only online support but also protest on the street. Papic wrote "Social Media Alone Do Not Instigate Revolutions", which discusses how the use of social media in social movements needs good organization both online and offline.
Media
Media technological convergence is the tendency that as technology changes, different technological systems sometimes evolve toward performing similar tasks. It is the interlinking of computing and other information technologies, media content, media companies and communication networks that have arisen as the result of the evolution and popularization of the Internet as well as the activities, products and services that have emerged in the digital media space.
Generally, media convergence refers to the merging of both old and new media and can be seen as a product, a system or a process. Jenkins states that convergence is "the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behaviour of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted". According to Jenkins, there are five areas of convergence: technological, economic, social or organic, cultural, and global. Media convergence is not just a technological shift or a technological process; it also includes shifts within the industrial, cultural, and social paradigms that encourage the consumer to seek out new information. Convergence, simply put, is how individual consumers interact with others on a social level and use various media platforms to create new experiences, new forms of media and content that connect us socially, and not just to other consumers, but to the corporate producers of media in ways that have not been as readily accessible in the past. However, Lugmayr and Dal Zotto argued that media convergence takes place on the technology, content, consumer, business model, and management levels. They argue that media convergence is a matter of evolution and can be described through the triadic phenomena of convergence, divergence, and coexistence. Today's digital media ecosystems coexist: mobile app stores, for example, provide vendor lock-in to particular ecosystems; some technology platforms are converging under one technology, due, for example, to the usage of common communication protocols as in digital TV; and other media are diverging, as media content offerings become more and more specialized and provide space for niche media.
Closely linked to the multilevel process of media convergence are several developments in different areas of the media and communication sector that are summarized under the term media deconvergence. Many experts view this as simply the tip of the iceberg, as all facets of institutional activity and social life, such as business, government, art, journalism, health, and education, are increasingly carried out in these digital media spaces across a growing network of information and communication technology devices. Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols.
Convergent services, such as VoIP, IPTV, Smart TV, and others, tend to replace the older technologies and thus can disrupt markets. IP-based convergence is inevitable and will result in new services and new demand in the market. When old technology converges into the public-owned common, IP-based services become access-independent or less access-dependent, whereas the old service is access-dependent.
Advances in technology bring the ability for technological convergence that Rheingold believes can alter the "social-side effects," in that "the virtual, social and physical world are colliding, merging and coordinating." It was predicted in the late 1980s, around the time that CD-ROM was becoming commonplace, that a digital revolution would take place, and that old media would be pushed to one side by new media. Broadcasting is increasingly being replaced by the Internet, enabling consumers all over the world the freedom to access their preferred media content more easily and at a more available rate than ever before.
However, when the dot-com bubble of the 1990s suddenly popped, that poured cold water over the talk of such a digital revolution. In today's society, the idea of media convergence has once again emerged as a key point of reference as newer as well as established media companies attempt to visualize the future of the entertainment industry. If this revolutionary digital paradigm shift presumed that old media would be increasingly replaced by new media, the convergence paradigm that is currently emerging suggests that new and old media would interact in more complex ways than previously predicted. The paradigm shift that followed the digital revolution assumed that new media was going to change everything. When the dot com market crashed, there was a tendency to imagine that nothing had changed. The real truth lay somewhere in between as there were so many aspects of the current media environment to take into consideration. Many industry leaders are increasingly reverting to media convergence as a way of making sense in an era of disorientating change. In that respect, media convergence in theory is essentially an old concept taking on a new meaning. Media convergence, in reality, is more than just a shift in technology. It alters relationships between industries, technologies, audiences, genres and markets. Media convergence changes the rationality media industries operate in, and the way that media consumers process news and entertainment. Media convergence is essentially a process and not an outcome, so no single black box controls the flow of media. With proliferation of different media channels and increasing portability of new telecommunications and computing technologies, we have entered into an era where media constantly surrounds us.
Media convergence requires that media companies rethink existing assumptions about media from the consumer's point of view, as these affect marketing and programming decisions. Media producers must respond to newly empowered consumers. Conversely, it would seem that hardware is instead diverging whilst media content is converging. Media has developed into brands that can offer content in a number of forms. Two examples of this are Star Wars and The Matrix. Both are films, but are also books, video games, cartoons, and action figures. Branding encourages expansion of one concept, rather than the creation of new ideas. In contrast, hardware has diversified to accommodate media convergence, since hardware must be specific to each function. While most scholars argue that the flow of cross-media content is accelerating, O'Donnell suggests that, especially between films and video games, the semblance of media convergence is misunderstood by people outside of the media production industry. The conglomerated media industry continues to sell the same storyline in different media. For example, Batman appears in comics, films, anime, and games. However, the data used to create the image of Batman in each medium is created individually by different teams of creators. The same character and the same visual effects appear repeatedly across different media because of the media industry's synergy in making them as similar as possible. In addition, convergence does not happen when a game is produced for two different consoles; nothing flows between the two consoles, because it is faster for the industry to create the game from scratch for each.
One of the more interesting new media journalism forms is virtual reality. Reuters, a major international news service, has created and staffed a news "island" in the popular online virtual reality environment Second Life. Open to anyone, Second Life has emerged as a compelling 3D virtual reality for millions of citizens around the world who have created avatars (virtual representations of themselves) to populate and live in an altered state where personal flight is a reality, alter egos can flourish, and real money ( were spent during the 24 hours concluding at 10:19 a.m. eastern time January 7, 2008) can be made without ever setting foot into the real world. The Reuters Island in Second Life is a virtual version of the Reuters real-world news service, but covering the domain of Second Life for the citizens of Second Life (numbering 11,807,742 residents as of January 5, 2008).
Media convergence in the digital era means the changes that are taking place with older forms of media and media companies. Media convergence has two roles: the first is the technological merging of different media channels – for example, magazines, radio programs, TV shows, and movies are now available on the Internet through laptops, iPads, and smartphones. As discussed in Media Culture (by Campbell), convergence of technology is not new. It has been going on since the late 1920s. An example is RCA, the Radio Corporation of America, which purchased the Victor Talking Machine Company and introduced machines that could receive radio and play recorded music. Next came the TV, and radio lost some of its appeal as people started watching television, which has both talking and music as well as visuals. As technology advances, media convergence changes to keep up. The second definition of media convergence Campbell discusses is cross-platform consolidation by media companies. This usually involves consolidating various media holdings, such as cable, phone, television (over the air, satellite, cable) and Internet access under one corporate umbrella. This is not so that the consumer has more media choices; it is for the benefit of the company, to cut down on costs and maximize its profits. As stated in the article Convergence Culture and Media Work by Mark Deuze, "the convergence of production and consumption of media across companies, channels, genres, and technologies is an expression of the convergence of all aspects of everyday life: work and play, the local and the global, self and social identity."
History
Communication networks were designed to carry different types of information independently. The older media, such as television and radio, are broadcasting networks with passive audiences. Convergence of telecommunication technology permits the manipulation of all forms of information, voice, data, and video. Telecommunication has changed from a world of scarcity to one of seemingly limitless capacity. Consequently, the possibility of audience interactivity morphs the passive audience into an engaged audience. The historical roots of convergence can be traced back to the emergence of mobile telephony and the Internet, although the term properly applies only from the point in marketing history when fixed and mobile telephony began to be offered by operators as joined products. Fixed and mobile operators were, for most of the 1990s, independent companies. Even when the same organization marketed both products, these were sold and serviced independently.
In the 1990s, an implicit and often explicit assumption was that new media was going to replace the old media and Internet was going to replace broadcasting. In Nicholas Negroponte's Being Digital, Negroponte predicts the collapse of broadcast networks in favor of an era of narrow-casting. He also suggests that no government regulation can shatter the media conglomerate. "The monolithic empires of mass media are dissolving into an array of cottage industries... Media barons of today will be grasping to hold onto their centralized empires tomorrow.... The combined forces of technology and human nature will ultimately take a stronger hand in plurality than any laws Congress can invent." The new media companies claimed that the old media would be absorbed fully and completely into the orbit of the emerging technologies. George Gilder dismisses such claims saying, "The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word-processing program converged with the typewriter, the CAD program converged with the drafting board, and digital desktop publishing converged with the Linotype machine and the letterpress." Gilder believes that computers had come not to transform mass culture but to destroy it.
Media companies put media convergence back on their agenda after the dot-com bubble burst. In 1994, Knight Ridder promulgated the concept of portable magazines, newspapers, and books: "Within news corporations it became increasingly obvious that an editorial model based on mere replication in the Internet of contents that had previously been written for print newspapers, radio, or television was no longer sufficient." The rise of digital communication in the late 20th century made it possible for media organizations (or individuals) to deliver text, audio, and video material over the same wired, wireless, or fiber-optic connections. At the same time, it inspired some media organizations to explore multimedia delivery of information. This digital convergence of news media, in particular, was called "Mediamorphosis" by researcher Roger Fidler in his 1997 book of that name. Today, we are surrounded by a multi-level convergent media world where all modes of communication and information are continually reforming to adapt to the enduring demands of technologies, "changing the way we create, consume, learn and interact with each other".
Convergence culture
Henry Jenkins defines convergence culture as the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. Convergence culture is an important factor in transmedia storytelling, introducing new stories and arguments from one form of media into many. Transmedia storytelling is defined by Jenkins as a process "where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience. Ideally, each medium makes its own unique contribution to the unfolding of the story". For instance, The Matrix starts as a film, which is followed by two other installments, but in a convergence culture it is not constrained to that form. It becomes a story told not only in the movies but in animated shorts, video games and comic books, three different media platforms. Online, a wiki is created to keep track of the story's expanding canon. Fan films, discussion forums, and social media pages also form, expanding The Matrix to different online platforms. Convergence culture took what started as a film and expanded it across almost every type of media. Jenkins' "Bert is Evil" example illustrates a similar flow: images juxtaposing Bert with Bin Laden appeared in CNN coverage of anti-American protests following September 11, an association that traces back to Ignacio's Photoshop project made for fun.
Convergence culture is a part of participatory culture. Because average people can now access their interests on many types of media, they can also have more of a say. Fans and consumers are able to participate in the creation and circulation of new content. Some companies take advantage of this and seek feedback from their customers through social media and sharing sites such as YouTube. Besides marketing and entertainment, convergence culture has also affected the way we interact with news and information. We can access news on multiple levels of media, from radio, TV, and newspapers to the Internet. The Internet allows more people to report the news through independent broadcasts and therefore allows a multitude of perspectives to be put forward and accessed by people in many different areas. Convergence allows news to be gathered on a much larger scale. For instance, photographs were taken of torture at Abu Ghraib. These photos were shared and eventually posted on the Internet. This led to the breaking of a news story in newspapers, on TV, and on the Internet.
Media scholar Henry Jenkins has described the media convergence with participatory culture as:
Appliances
Some media observers expect that we will eventually access all media content through one device, or "black box". As such, media business practice has been to identify the next "black box" to invest in and provide media for. This has caused a number of problems. Firstly, as "black boxes" are invented and abandoned, the individual is left with numerous devices that can perform the same task, rather than one dedicated to each task. For example, one may own both a computer and a video games console, and subsequently own two DVD players. This is contrary to the streamlined goal of the "black box" theory, and instead creates clutter. Secondly, technological convergence tends to be experimental in nature. This has led to consumers owning technologies with additional functions that are harder, if not impractical, to use rather than one specific device. An example is the combination of a television with a microwave oven: many people would only watch the TV for the duration of the meal's cooking time, or whilst in the kitchen, but would not use the microwave as the household TV. These examples show that in many cases technological convergence is unnecessary or unneeded.
Furthermore, although consumers primarily use a specialized media device for their needs, other "black box" devices that perform the same task can be used to suit their current situation. As a 2002 Cheskin Research report explained: "...Your email needs and expectations are different whether you're at home, work, school, commuting, the airport, etc., and these different devices are designed to suit your needs for accessing content depending on where you are – your situated context." Despite the creation of "black boxes" intended to perform all tasks, the trend is to use devices that suit the consumer's physical position. Due to the variable utility of portable technology, convergence occurs in high-end mobile devices. They incorporate multimedia services, GPS, Internet access, and mobile telephony into a single device, heralding the rise of what has been termed the "smartphone," a device designed to remove the need to carry multiple devices. Convergence of media occurs when multiple products come together to form one product with the advantages of all of them, also known as the black box. This idea of one technology, discussed by Henry Jenkins, has come to be seen as a fallacy because of the inability to actually put all technical pieces into one. For example, while people can have email and Internet on their phone, they still want full computers with Internet and email in addition. Mobile phones are a good example, in that they incorporate digital cameras, MP3 players, voice recorders, and other devices. For the consumer, it means more features in less space; for media conglomerates it means remaining competitive.
However, convergence has a downside. Particularly in initial forms, converged devices are frequently less functional and reliable than their component parts (e.g., a mobile phone's web browser may not render some web pages correctly, due to not supporting certain rendering methods, such as the iPhone browser not supporting Flash content). As the number of functions in a single device escalates, the ability of that device to serve its original function decreases. As Rheingold asserts, technological convergence holds immense potential for the "improvement of life and liberty in some ways and (could) degrade it in others". He believes the same technology has the potential to be "used as both a weapon of social control and a means of resistance". Since technology has evolved in the past ten years or so, companies are beginning to converge technologies to create demand for new products. This includes phone companies integrating 3G and 4G on their phones. In the mid 20th century, television converged the technologies of movies and radio, and television is now being converged with the mobile phone industry and the Internet. Phone calls are also being made with the use of personal computers. Converging technologies combine multiple technologies into one. Newer mobile phones feature cameras, and can hold images, videos, music, and other media. Manufacturers now integrate more advanced features, such as video recording, GPS receivers, data storage, and security mechanisms into the traditional cellphone.
Telecommunications
Telecommunications convergence or network convergence describes emerging telecommunications technologies and network architecture used to migrate multiple communications services onto a single network. Specifically, this involves the converging of previously distinct media such as telephony and data communications into common interfaces on single devices; most smartphones, for example, can make phone calls and search the web.
Messaging
Combination services include those that integrate SMS with voice, such as voice SMS. Providers include Bubble Motion, Jott, Kirusa, and SpinVox. Several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence. Text-to-landline services also exist, where subscribers can send text messages to any landline phone and are charged at standard rates. The text messages are converted into spoken language. This service has been popular in America, where fixed and mobile numbers are similar. Inbound SMS has been converging to enable reception of different formats (SMS, voice, MMS, etc.). In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. This type of convergence is helpful for media companies, broadcasters, enterprises, call centres and help desks who need to develop a consistent contact strategy with the consumer. Because SMS is very popular today, it became relevant to include text messaging as a contact possibility for consumers. To avoid having multiple numbers (one for voice calls, another one for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call one number and be sure that the message will be received.
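The merging of several inbound formats under one number can be sketched as a small dispatcher that normalises SMS, MMS, and transcribed voice messages into a single contact stream. The message shapes and field names below are assumptions for illustration, not any operator's actual interface.

```typescript
// Sketch of "one number, many formats": inbound SMS, MMS, and voice messages
// arriving on the same number are normalised into one consistent event stream.

type Inbound =
  | { kind: "sms"; from: string; text: string }
  | { kind: "mms"; from: string; mediaUrl: string }
  | { kind: "voice"; from: string; transcript: string };

interface ContactEvent {
  from: string;
  summary: string;
}

// Normalise every format into a single event shape for the contact queue.
function normalise(msg: Inbound): ContactEvent {
  switch (msg.kind) {
    case "sms":
      return { from: msg.from, summary: msg.text };
    case "mms":
      return { from: msg.from, summary: `media attachment: ${msg.mediaUrl}` };
    case "voice":
      return { from: msg.from, summary: msg.transcript };
  }
}

const queue: ContactEvent[] = [];
queue.push(normalise({ kind: "sms", from: "+441234567890", text: "Is my order shipped?" }));
queue.push(normalise({ kind: "voice", from: "+441234567890", transcript: "Please call me back." }));
console.log(queue);
```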
Mobile
"Mobile service provisions" refers not only to the ability to purchase mobile phone services, but the ability to wirelessly access everything: voice, Internet, audio, and video. Advancements in WiMAX and other leading edge technologies provide the ability to transfer information over a wireless link at a variety of speeds, distances, and non-line-of-sight conditions.
Multi-play
Multi-play is a marketing term describing the provision of different telecommunication services, such as Internet access, television, telephone, and mobile phone service, by organizations that traditionally only offered one or two of these services. Multi-play is a catch-all phrase; usually, the terms triple play (voice, video and data) or quadruple play (voice, video, data and wireless) are used to describe a more specific meaning. A dual play service is a marketing term for the provisioning of the two services: it can be high-speed Internet (digital subscriber line) and telephone service over a single broadband connection in the case of phone companies, or high-speed Internet (cable modem) and TV service over a single broadband connection in the case of cable TV companies. The convergence can also concern the underlying communication infrastructure. An example of this is a triple play service, where communication services are packaged allowing consumers to purchase TV, Internet, and telephony in one subscription. The broadband cable market is transforming as pay-TV providers move aggressively into what was once considered the telco space. Meanwhile, customer expectations have risen as consumer and business customers alike seek rich content, multi-use devices, networked products and converged services including on-demand video, digital TV, high speed Internet, VoIP, and wireless applications. It is uncharted territory for most broadband companies.
A quadruple play service combines the triple play service of broadband Internet access, television, and telephone with wireless service provisions. This service set is also sometimes humorously referred to as "The Fantastic Four" or "Grand Slam". A fundamental aspect of the quadruple play is not only the long-awaited broadband convergence but also the players involved. Many of them are interested, from the largest global service providers to which we connect today via wires and cables, down to the smallest of startup service providers. The opportunities are attractive: the big three telecom services (telephony, cable television, and wireless) could combine their industries. In the UK, the merger of NTL:Telewest and Virgin Mobile resulted in a company offering a quadruple play of cable television, broadband Internet, home telephone, and mobile telephone services.
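The dual/triple/quadruple play terminology can be expressed as a simple classification over the set of services in a bundle, as in the hypothetical sketch below.

```typescript
// Classify a subscription bundle by how many of the four services it contains,
// following the definitions above. Purely illustrative.

type Service = "internet" | "tv" | "telephone" | "wireless";

function classify(services: Set<Service>): string {
  if (services.size >= 4) return "quadruple play";
  if (services.size === 3) return "triple play";
  if (services.size === 2) return "dual play";
  return "single service";
}

console.log(classify(new Set<Service>(["internet", "tv", "telephone"]))); // "triple play"
console.log(classify(new Set<Service>(["internet", "tv", "telephone", "wireless"]))); // "quadruple play"
```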
Home network
Early in the 21st century, home LAN convergence so rapidly integrated home routers, wireless access points, and DSL modems that users were hard put to identify the resulting box they used to connect their computers to their Internet service. A general term for such a combined device is a residential gateway.
VoIP
The U.S. Federal Communications Commission (FCC) has not been able to decide how to regulate VoIP (Internet telephony) because the convergent technology is still growing and changing. In addition, the FCC has been tentative about setting regulations on VoIP in order to promote competition in the telecommunication industry. There is no clear line between a telecommunication service and an information service because of the growth of the new convergent media. Historically, telecommunication has been subject to state regulation. The state of California, for example, is concerned that the increasing popularity of Internet telephony will eventually obliterate funding for the Universal Service Fund. Some states attempt to assert their traditional role of common carrier oversight onto this new technology. Meisel and Needles (2005) suggest that decisions by the FCC, federal courts, and state regulatory bodies on access line charges will directly impact the speed at which the Internet telephony market grows. On one hand, the FCC is hesitant to regulate convergent technology because VoIP has different features from old telecommunications and there is not yet a fixed model on which to build legislation. On the other hand, regulation is needed because services over the Internet might quickly replace traditional telecommunication services, which would affect the entire economy.
Convergence has also raised several debates about the classification of certain telecommunications services. As the lines between data transmission and voice and media transmission erode, regulators are faced with the task of how best to classify the converging segments of the telecommunication sector. Traditionally, telecommunication regulation has focused on the operation of physical infrastructure, networks, and access to networks. Content is not regulated in telecommunications because it is considered private. In contrast, film and television are regulated by content: a rating system regulates their distribution to audiences. Self-regulation is promoted by the industry. Bogle senior persuaded the entire industry to pay a 0.1 percent levy on all advertising, and the money was used to give authority to the Advertising Standards Authority, which keeps the government from setting legislation for the media industry.
The premises for regulating the new media of two-way communications turn largely on the differences between old media and new media. Each medium has different features and characteristics. First, the Internet, the new medium, manipulates all forms of information: voice, data and video. Second, regulation of the old media, such as radio and television, was premised on the scarcity of channels, whereas the Internet has practically limitless capacity, due to its end-to-end design. Third, two-way communication encourages interactivity between content producers and audiences.
"...Fundamental basis for classification, therefore, is to consider the need for regulation in terms of either market failure or in the public interests"(Blackman). The Electronic Frontier Foundation, founded in 1990, is a non profit organization that defends free speech, privacy, innovation, and consumer rights. The Digital Millennium Copyright Act regulates and protect the digital content producers and consumers.
Trends
Network neutrality is an issue. Wu and Lessig set out two reasons for network neutrality: firstly, by removing the risk of future discrimination, it incentivizes people to invest more in the development of broadband applications; secondly, it enables fair competition between applications without network bias. The two reasons also coincide with the FCC's interest in stimulating investment and enhancing innovation in broadband technology and services. Despite regulatory efforts of deregulation, privatization, and liberalization, the infrastructure barrier has been a negative factor in achieving effective competition. Kim et al. argue that IP dissociates the telephony application from the infrastructure and that Internet telephony is at the forefront of such dissociation. The neutrality of the network is very important for fair competition. As the former FCC Chairman Michael Copps put it: "From its inception, the Internet was designed, as those present during the course of its creating will tell you, to prevent government or a corporation or anyone else from controlling it. It was designed to defeat discrimination against users, ideas and technologies". For these reasons, Shin concludes that regulators should make sure to regulate applications and infrastructure separately.
The layered model was first proposed by Solum and Chung, Sicker, and Nakahata. Sicker, Warbach and Witt have supported using a layered model to regulate the telecommunications industry with the emergence of convergence services. Many researchers take different layered approaches, but they all agree that the emergence of convergent technology will create challenges and ambiguities for regulation. The key point of the layered model is that it reflects the reality of network architecture and current business models. The layered model consists of:
Access layer – where the physical infrastructure resides: copper wires, cable, or fiber optic.
Transport layer – the provider of service.
Application layer – the interface between the data and the users.
Content layer – the layer which users see.
Shin combines the layered model and network neutrality as the guiding principles for regulating the convergent media industry.
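A minimal way to picture the layered model is to pair each layer with an example of what resides there, as in the sketch below; the example entries are illustrative assumptions, not regulatory definitions.

```typescript
// The four layers of the model described above, each paired with an illustrative example.

enum Layer {
  Access = "access",           // physical infrastructure: copper, cable, or fibre
  Transport = "transport",     // the provider of service
  Application = "application", // the interface between the data and the users
  Content = "content",         // the layer which users see
}

const examples: Array<[Layer, string]> = [
  [Layer.Access, "fibre-optic local loop"],
  [Layer.Transport, "ISP backbone service"],
  [Layer.Application, "VoIP client"],
  [Layer.Content, "on-demand video programme"],
];

for (const [layer, example] of examples) {
  console.log(`${layer} layer: ${example}`);
}
```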
Robotics
Medical applications of robotics have become increasingly prominent in the robotics literature.
The use of robots in service sectors is much less than the use of robots in manufacturing.
See also
Computer multitasking (the software equivalent of a converged device)
Dongle (can facilitate inclusion of non-converged devices)
Digital rhetoric
Generic Access Network
History of science and technology
UMA Today
IP Multimedia Subsystem (IMS)
Mobile VoIP
Next Generation Networks
Next generation network services
Post-convergent
Second screen
References
Bibliography
Further reading
External links
Amdocs MultiPlay Strategy WhitePaper
Technology Convergence Update with Bob Brown – Video
Astrobiology
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth.
Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth.
The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline.
Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications.
The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions.
Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of paleosciences, especially paleobiology, for astrobiological applications.
Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research.
Overview
The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία (-logia), "study". A close synonym is exobiology, from the Greek Έξω, "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrower scope, limited to the search for life external to Earth. Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin.
While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question of whether such life exists is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory.
The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive.
In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field.
The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications in comparable space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars; NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water; and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars.
Theoretical foundations
Planetary habitability
Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability.
Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds.
Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry.
Environmental stability: Because organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarf stars. Very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them. Very small stars provide so little heat that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; on the other hand, the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant because red dwarfs are extremely common. (See also: Habitability of red dwarf systems).
Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. It was previously assumed that this would necessarily come from a Sun-like star; however, with developments in extremophile research, contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy.
It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change.
Methods
Studying terrestrial extremophiles
Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methods within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to:
Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, some withstand high temperatures and pressures, and they use chemical energy instead of sunlight to produce food.
Desert extremophiles: Researchers are studying organisms that can survive in extremely dry, high-temperature conditions, such as in deserts.
Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments.
Researching Earth's present environment
Research also regards the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including:
Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances.
Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans.
Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth.
Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth.
Finding biosignatures on other worlds
Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilises methodologies within the planetary sciences. These include:
The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life.
The study of liquid bodies on icy moons: Discoveries of surface and subsurface bodies of liquid on moons such as Europa, Titan and Enceladus showed possible habitability zones, making them viable targets for the search for extraterrestrial life. Missions such as Europa Clipper and Dragonfly are planned to search for biosignatures within these environments.
The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus.
Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets.
Talking to extraterrestrials
SETI and CETI: Scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes within the search for extraterrestrial intelligence (SETI), while communication with extraterrestrial intelligence (CETI) focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens might raid Earth for its resources.
Investigating the early Earth
Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include:
The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules.
The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications, given that subjects of current astrobiological research such as Mars lack such a field.
The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth and led to the formation of the building blocks of life (amino acids, nucleotides, and lipids), and how these molecules could have formed spontaneously under early Earth conditions.
The study of impact events: Scientists are investigating the potential role of impact events, especially meteorites, in the delivery of water and organic molecules to the early Earth.
The study of the primordial soup: Researchers are investigating the conditions and ingredients present on the early Earth, such as water and organic molecules, that could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions.
The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth.
The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules, thus the emergence of life.
The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules.
The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence.
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth.
The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms in the early Earth, and how these organisms may have played a role in the emergence of life.
The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms.
Research
The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at present, the development of hypotheses firmly grounded in science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells.
Research outcomes
To date, no evidence of extraterrestrial life has been identified. The Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay and a few other scientists to contain microfossils of extraterrestrial origin; this interpretation is controversial.
Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to some NASA scientists.
On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone."
Elements of astrobiology
Astronomy
Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway.
The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life.
An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise:
N = R* × fp × ne × fl × fi × fc × L
where:
N = The number of communicative civilizations
R* = The rate of formation of suitable stars (stars such as the Sun)
fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun)
ne = The number of Earth-sized worlds per planetary system
fl = The fraction of those Earth-sized planets where life actually develops
fi = The fraction of life sites where intelligence develops
fc = The fraction of communicative planets (those on which electromagnetic communications technology develops)
L = The "lifetime" of communicating civilizations
However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, the rate of formation of suitable stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets, and ne, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it.
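For illustration only, the following Python sketch multiplies out the factors of the equation above; the helper function is ad hoc and every parameter value in it is a hypothetical placeholder chosen to show how the calculation works, not a measured or endorsed estimate.

# A minimal sketch of evaluating the Drake equation; all inputs below are hypothetical placeholders.
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of communicative civilizations."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative inputs: 1 suitable star formed per year, half with planets,
# 2 Earth-sized worlds per system, and 10% fractions for the remaining factors.
N = drake_equation(r_star=1.0, f_p=0.5, n_e=2.0, f_l=0.1, f_i=0.1, f_c=0.1, lifetime=10_000)
print(N)  # ≈ 10 communicative civilizations under these assumed values

Because the result scales linearly with every factor, small changes in any poorly constrained term swing the estimate by orders of magnitude, which is why the equation is better read as a framework for discussion than as a predictive tool.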
Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth.
Biology
Biology cannot state that a process or phenomenon, by being mathematically possible, has to exist forcibly in an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important for understanding four areas at the limits of life in a planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life.
Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they form an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy of sunlight include green sulfur bacteria, which capture geothermal light for anoxygenic photosynthesis, and bacteria that perform chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist.
Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways is considered a crucial component of understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Rusavskia elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animals known as tardigrades. While tardigrades are not considered true extremophiles, they are considered extremotolerant organisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere.
Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist.
The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth.
The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."
More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".
In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets.
Philosophy
David Grinspoon called astrobiology a field of natural philosophy. Astrobiology intersects with philosophy by raising questions about the nature and existence of life beyond Earth. Philosophical implications include the definition of life itself, issues in the philosophy of mind and cognitive science in case intelligent life is found, epistemological questions about the nature of proof, ethical considerations of space exploration, along with the broader impact of discovering extraterrestrial life on human thought and society.
Dunér has emphasized philosophy of astrobiology as an ongoing existential exercise in individual and collective self-understanding, whose major task is constructing and debating concepts such as the concept of life. Key issues, for Dunér, are questions of funding and resource planning, epistemological questions regarding astrobiological knowledge, linguistic issues about interstellar communication, cognitive issues such as the definition of intelligence, along with the possibility of interplanetary contamination.
Persson also emphasized key philosophical questions in astrobiology. They include ethical justification of resources, the question of life in general, the epistemological issues and knowledge about being alone in the universe, ethics towards extraterrestrial life, the question of politics and governing uninhabited worlds, along with questions of ecology.
For von Hegner, the question of astrobiology and the possibility of astrophilosophy differ. For him, the discipline needs to bifurcate into astrobiology and astrophilosophy, since discussions made possible by astrobiology, but astrophilosophical in nature, have existed as long as there have been discussions about extraterrestrial life. Astrobiology is a self-corrective interaction among observation, hypothesis, experiment, and theory, pertaining to the exploration of all natural phenomena. Astrophilosophy consists of methods of dialectic analysis and logical argumentation, pertaining to the clarification of the nature of reality. Šekrst argues that astrobiology requires the affirmation of astrophilosophy, but not as a separate cognate to astrobiology. The stance of conceptual speciesism, according to Šekrst, permeates astrobiology, since the very name astrobiology tries to talk about not just biology, but about life in a general way, which includes terrestrial life as a subset. This leads us either to redefine philosophy, or to consider the need for astrophilosophy as a more general discipline, of which philosophy is just a subset that deals with questions such as the nature of the human mind and other anthropocentric questions.
Most of the philosophy of astrobiology deals with two main questions: the question of life and the ethics of space exploration. Kolb specifically emphasizes the question of viruses, whose status as living or non-living depends on definitions of life that include self-replication. Schneider tried to define exo-life, but concluded that we often start with our own prejudices and that defining extraterrestrial life seems futile using human concepts. For Dick, astrobiology relies on the metaphysical assumption that there is extraterrestrial life, which reaffirms questions in the philosophy of cosmology, such as fine-tuning or the anthropic principle.
Rare Earth hypothesis
The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and, even more so, multicellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.), and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox, which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds.
Missions
Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System.
Viking program
The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists.
Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life.
Beagle 2
Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna.
EXPOSE
EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit.
Mars Science Laboratory
The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology, its climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars.
Tanpopo
The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganisms can survive for at least one year in space. This may support the idea that clumps of microorganisms greater than 0.5 millimeters across could be one way for life to spread from planet to planet.
ExoMars rover
ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission is currently under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it is planned for a 2022 launch.
Mars 2020
Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team is proposing the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel.
Europa Clipper
Europa Clipper is a mission planned by NASA for a 2025 launch that will conduct detailed reconnaissance of Jupiter's moon Europa and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites.
Dragonfly
Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts.
Proposed concepts
Icebreaker Life
Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have been a stationary lander, a near copy of the successful 2008 Phoenix, and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission was to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation.
Journey to Enceladus and Titan
Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter.
Enceladus Life Finder
Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon.
Life Investigation For Enceladus
Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan.
Oceanus
Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to the moon of Saturn, Titan, to assess its habitability. Oceanus objectives are to reveal Titan's organic chemistry, geology, gravity, topography, collect 3D reconnaissance data, catalog the organics and determine where they may interact with liquid water.
Explorer of Enceladus and Titan
Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency.
See also
The Living Cosmos
Citations
General references
The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field.
Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origins of life, evolution, distribution, and destiny in the universe.
Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt.
Further reading
D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition).
Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology.
External links
Astrobiology.nasa.gov
UK Centre for Astrobiology
Spanish Centro de Astrobiología
Astrobiology Research at The Library of Congress
Astrobiology Survey – An introductory course on astrobiology
Summary - Search For Life Beyond Earth (NASA; 25 June 2021)
Origin of life
Astronomical sub-disciplines
Branches of biology
Speculative evolution
Operations research
Operations research (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a discipline that deals with the development and application of analytical methods to improve decision-making. The term management science is occasionally used as a synonym.
Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.
Overview
Operational research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem).
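To make one of these techniques concrete, the short Python sketch below evaluates the standard steady-state formulas for an M/M/1 single-server queue (utilisation, mean number in the system, and mean waiting times); the helper function is ad hoc, and the arrival and service rates are hypothetical values chosen only for illustration.

# A minimal M/M/1 queueing sketch: one server, Poisson arrivals (rate lam),
# exponentially distributed service times (rate mu); standard steady-state formulas.
def mm1_metrics(lam, mu):
    assert lam < mu, "the queue is stable only when the arrival rate is below the service rate"
    rho = lam / mu                  # server utilisation
    L = rho / (1 - rho)             # mean number of customers in the system
    W = 1 / (mu - lam)              # mean time a customer spends in the system
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in the queue
    Wq = rho / (mu - lam)           # mean time spent waiting before service
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Hypothetical example: 8 customers arrive per hour and the server handles 10 per hour.
print(mm1_metrics(lam=8.0, mu=10.0))  # ≈ rho 0.8, L 4.0, W 0.5 h, Lq 3.2, Wq 0.4 h

Even this simplest queueing model shows the characteristic non-linearity of congestion: as utilisation approaches 1, waiting times grow without bound, which is the kind of effect operational researchers quantify before recommending staffing or capacity levels.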
The major sub-disciplines in modern operational research, as identified by the journal Operations Research and The Journal of the Operational Research Society, include (but are not limited to):
Computing and information technologies
Financial engineering
Manufacturing, service sciences, and supply chain management
Policy modeling and public sector work
Revenue management
Simulation
Stochastic models
Transportation theory
Game theory for strategies
Linear programming (see the sketch after this list)
Nonlinear programming
Integer programming, including 0-1 (binary) integer linear programming, which is NP-complete in general
Dynamic programming, with applications in aerospace engineering and economics
Information theory, used in cryptography and quantum computing
Quadratic programming, the optimization of a quadratic objective function subject to linear constraints
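As a minimal sketch of the linear programming item above, the following Python snippet solves a small, hypothetical product-mix problem with SciPy's linprog; the objective coefficients and constraint values are invented purely for demonstration.

# Maximize profit 3x + 5y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2, x >= 0, y >= 0.
from scipy.optimize import linprog

c = [-3, -5]                        # linprog minimizes, so the profit coefficients are negated
A_ub = [[1, 2], [-3, 1], [1, -1]]   # the ">=" constraint is rewritten as "<=" by negating it
b_ub = [14, 0, 2]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(result.x, -result.fun)        # expected: x ≈ [6, 4] with a maximum profit of ≈ 38

The same pattern of an objective, linear constraints, and non-negativity bounds underlies industrial-scale models with many thousands of variables.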
History
In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research.
Historical origins
In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, study of inventory management could be considered the origin of modern operations research with economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences.
Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.
Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman, (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules.
Second World War
The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management.
During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army.
Patrick Blackett worked for several different organizations during the war. Early in the war while working for the Royal Aircraft Establishment (RAE) he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941.
In 1941, Blackett moved from the RAE to the Navy, working first with RAF Coastal Command in 1941 and then, early in 1942, with the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than on the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.
While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command they were painted black for night-time operations. At the suggestion of CC-ORS a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings Coastal Command changed their aircraft to using white undersurfaces.
Other work by the CC-ORS indicated that on average if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target it had time to alter course under water so the chances of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface when the targets' locations were better known than to attempt their destruction at greater depths when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".
Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew so that an aircraft loss would result in fewer personnel losses was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers that returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. The attribution of this story has been disputed, since a similar damage assessment study was completed in the US by the Statistical Research Group at Columbia University as a result of work done by Abraham Wald.
When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses.
The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes.
Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and a smooth paint finish increased airspeed by reducing skin friction.
On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting.
After World War II
In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR:
"To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective."
With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational matters but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The simplex algorithm for linear programming was developed by George Dantzig in 1947.
In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queueing theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and The Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on Linear Programming.
In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced here. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as George Dantzig's "Linear Programming" (1963) and C. West Churchman et al.'s "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, at the same time opening Operations Research to Latin American readers. NATO gave important impetus for the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966.
With the development of computers over the next three decades, Operations Research can now solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently. Much of operations research (modernly known as 'analytics') relies upon stochastic variables and therefore upon access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimizing all facets of industry and economy, and, in all likelihood, terrorist attack planning as well as counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for producing collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks, and because of this lack of data, there are also no computer applications in the textbooks.
Problems addressed
Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project
Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (therefore reducing cost)
Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages
Resource allocation problems
Facility location
Assignment Problems:
Assignment problem (see the sketch after this list)
Generalized assignment problem
Quadratic assignment problem
Weapon target assignment problem
Bayesian search theory: looking for a target
Optimal search
Routing, such as determining the routes of buses so that as few buses are needed as possible
Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products
Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time
Efficient messaging and customer response tactics
Automation: automating or integrating robotic systems in human-driven operations processes
Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs
Transportation: managing freight transportation and delivery systems (Examples: LTL shipping, intermodal freight transport, travelling salesman problem, driver scheduling problem)
Scheduling:
Personnel staffing
Manufacturing steps
Project tasks
Network data traffic: these are known as queueing models or queueing systems.
Sports events and their television coverage
Blending of raw materials in oil refineries
Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science
Cutting stock problem: Cutting small items out of bigger ones.
Finding the optimal parameter (weights) setting of an algorithm that generates the realisation of a figured bass in Baroque compositions (classical music) by using weighted local cost and transition cost rules
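As a concrete sketch of the assignment problem listed above, the following Python snippet uses SciPy's linear_sum_assignment on a small cost matrix of workers versus tasks; the matrix values are hypothetical and chosen purely for demonstration.

# A minimal assignment-problem sketch: assign 3 workers to 3 tasks at minimum total cost.
# Rows of the cost matrix are workers, columns are tasks; the numbers are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)              # optimal one-to-one pairing
print(list(zip(rows, cols)), cost[rows, cols].sum())  # pairing and its total cost (5 here)

The same formulation extends to crew rostering or weapon-target assignment by replacing the costs with the relevant operational measure.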
Operational research is also used extensively in government where evidence-based policy is used.
Management science
In 1967, Stafford Beer characterized the field of management science as "the business use of operations research". Like operational research itself, management science (MS) is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.
The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups.
Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.
The application of these models within the corporate sector became known as management science.
Related fields
Some of the fields that have considerable overlap with Operations Research and Management Science include:
Artificial Intelligence
Business analytics
Computer science
Data mining/Data science/Big data
Decision analysis
Decision intelligence
Engineering
Financial engineering
Forecasting
Game theory
Geography/Geographic information science
Graph theory
Industrial engineering
Inventory control
Logistics
Mathematical modeling
Mathematical optimization
Probability and statistics
Project management
Policy analysis
Queueing theory
Simulation
Social network/Transportation forecasting models
Stochastic processes
Supply chain management
Systems engineering
Applications
Applications are abundant in areas such as airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes:
Scheduling (of airlines, trains, buses etc.)
Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities)
Facility location (deciding the most appropriate location for new facilities such as warehouses, factories, or fire stations)
Hydraulics & Piping Engineering (managing flow of water from reservoirs)
Health Services (information and supply chain management)
Game Theory (identifying, understanding, and developing strategies adopted by companies)
Urban Design
Computer Network Engineering (packet routing; timing; analysis)
Telecom & Data Communication Engineering (packet routing; timing; analysis)
Management is also concerned with so-called soft operational analysis, which concerns methods for strategic planning, strategic decision support, and problem structuring.
In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed. These include:
stakeholder based approaches including metagame analysis and drama theory
morphological analysis and various forms of influence diagrams
cognitive mapping
strategic choice
robustness analysis
Societies and journals
Societies
The International Federation of Operational Research Societies (IFORS) is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US, UK, France, Germany, Italy, Canada, Australia, New Zealand, Philippines, India, Japan and South Africa. The foundation of IFORS in 1960 was of decisive importance for the institutionalization of operations research, stimulating the foundation of national OR societies in Austria, Switzerland and Germany. IFORS has held international conferences every three years since 1957. The constituent members of IFORS form regional groups, such as that in Europe, the Association of European Operational Research Societies (EURO). Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).
In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR.
Journals of INFORMS
The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are:
Decision Analysis
Information Systems Research
INFORMS Journal on Computing
INFORMS Transactions on Education (an open access journal)
Interfaces
Management Science
Manufacturing & Service Operations Management
Marketing Science
Mathematics of Operations Research
Operations Research
Organization Science
Service Science
Transportation Science
Other journals
These are listed in alphabetical order of their titles.
4OR-A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian Operations Research Societies (Springer);
Decision Sciences published by Wiley-Blackwell on behalf of the Decision Sciences Institute
European Journal of Operational Research (EJOR): founded in 1975, it is presently by far the largest operational research journal in the world, with around 9,000 pages of published papers per year. In 2004, its total number of citations was the second largest amongst Operational Research and Management Science journals;
INFOR Journal: published and sponsored by the Canadian Operational Research Society;
Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense.
Journal of the Operational Research Society (JORS): an official journal of The OR Society; this is the oldest continuously published journal of OR in the world, published by Taylor & Francis;
Military Operations Research (MOR): published by the Military Operations Research Society;
Omega - The International Journal of Management Science;
Operations Research Letters;
Opsearch: official journal of the Operational Research Society of India;
OR Insight: a quarterly journal of The OR Society published by Palgrave;
Pesquisa Operacional, the official journal of the Brazilian Operations Research Society
Production and Operations Management, the official journal of the Production and Operations Management Society
TOP: the official journal of the Spanish Statistics and Operations Research Society.
See also
Operations research topics
Black box analysis
Dynamic programming
Inventory theory
Optimal maintenance
Real options valuation
Artificial intelligence
Operations researchers
Operations researchers (category)
George Dantzig
Leonid Kantorovich
Tjalling Koopmans
Russell L. Ackoff
Stafford Beer
Alfred Blumstein
C. West Churchman
William W. Cooper
Robert Dorfman
Richard M. Karp
Ramayya Krishnan
Frederick W. Lanchester
Thomas L. Magnanti
Alvin E. Roth
Peter Whittle
Related fields
Behavioral operations research
Big data
Business engineering
Business process management
Database normalization
Engineering management
Geographic information systems
Industrial engineering
Industrial organization
Managerial economics
Military simulation
Operational level of war
Power system simulation
Project production management
Reliability engineering
Scientific management
Search-based software engineering
Simulation modeling
Strategic management
Supply chain engineering
System safety
Wargaming
References
Further reading
Classic books and articles
R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, 1957
Abraham Charnes, William W. Cooper, Management Models and Industrial Applications of Linear Programming, Volumes I and II, New York, John Wiley & Sons, 1961
Abraham Charnes, William W. Cooper, A. Henderson, An Introduction to Linear Programming, New York, John Wiley & Sons, 1953
C. West Churchman, Russell L. Ackoff & E. L. Arnoff, Introduction to Operations Research, New York: J. Wiley and Sons, 1957
George B. Dantzig, Linear Programming and Extensions, Princeton, Princeton University Press, 1963
Lester K. Ford, Jr., D. Ray Fulkerson, Flows in Networks, Princeton, Princeton University Press, 1962
Jay W. Forrester, Industrial Dynamics, Cambridge, MIT Press, 1961
L. V. Kantorovich, "Mathematical Methods of Organizing and Planning Production" Management Science, 4, 1960, 266–422
Ralph Keeney, Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, New York, John Wiley & Sons, 1976
H. W. Kuhn, "The Hungarian Method for the Assignment Problem," Naval Research Logistics Quarterly, 1–2, 1955, 83–97
H. W. Kuhn, A. W. Tucker, "Nonlinear Programming," pp. 481–492 in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability
B. O. Koopman, Search and Screening: General Principles and Historical Applications, New York, Pergamon Press, 1980
Tjalling C. Koopmans, editor, Activity Analysis of Production and Allocation, New York, John Wiley & Sons, 1951
Charles C. Holt, Franco Modigliani, John F. Muth, Herbert A. Simon, Planning Production, Inventories, and Work Force, Englewood Cliffs, NJ, Prentice-Hall, 1960
Philip M. Morse, George E. Kimball, Methods of Operations Research, New York, MIT Press and John Wiley & Sons, 1951
Robert O. Schlaifer, Howard Raiffa, Applied Statistical Decision Theory, Cambridge, Division of Research, Harvard Business School, 1961
Classic textbooks
Taha, Hamdy A., "Operations Research: An Introduction", Pearson, 10th Edition, 2016
Frederick S. Hillier & Gerald J. Lieberman, Introduction to Operations Research, McGraw-Hill: Boston MA; 10th Edition, 2014
Robert J. Thierauf & Richard A. Grosse, "Decision Making Through Operations Research", John Wiley & Sons, INC, 1970
Harvey M. Wagner, Principles of Operations Research, Englewood Cliffs, Prentice-Hall, 1969
Wentzel (Ventsel), E. S. Introduction to Operations Research, Moscow: Soviet Radio Publishing House, 1964.
History
Saul I. Gass, Arjang A. Assad, An Annotated Timeline of Operations Research: An Informal History. New York, Kluwer Academic Publishers, 2005.
Saul I. Gass (Editor), Arjang A. Assad (Editor), Profiles in Operations Research: Pioneers and Innovators. Springer, 2011
Maurice W. Kirby (Operational Research Society (Great Britain)). Operational Research in War and Peace: The British Experience from the 1930s to 1970, Imperial College Press, 2003.
J. K. Lenstra, A. H. G. Rinnooy Kan, A. Schrijver (editors) History of Mathematical Programming: A Collection of Personal Reminiscences, North-Holland, 1991
Charles W. McArthur, Operations Analysis in the U.S. Army Eighth Air Force in World War II, History of Mathematics, Vol. 4, Providence, American Mathematical Society, 1990
C. H. Waddington, O. R. in World War 2: Operational Research Against the U-boat, London, Elek Science, 1973.
Richard Vahrenkamp: Mathematical Management – Operations Research in the United States and Western Europe, 1945 – 1990, in: Management Revue – Socio-Economic Studies, vol. 34 (2023), issue 1, pp. 69–91.
External links
What is Operations Research?
International Federation of Operational Research Societies
The Institute for Operations Research and the Management Sciences (INFORMS)
Occupational Outlook Handbook, U.S. Department of Labor Bureau of Labor Statistics
Industrial engineering
Mathematical optimization in business
Applied statistics
Engineering disciplines
Mathematical and quantitative methods (economics)
Mathematical economics
Decision-making | 0.774476 | 0.997902 | 0.772852 |
DPSIR
DPSIR (drivers, pressures, state, impact, and response model of intervention) is a causal framework used to describe the interactions between society and the environment. It seeks to analyze and assess environmental problems by bringing together various scientific disciplines, environmental managers, and stakeholders, and to solve them by incorporating sustainable development. First, the indicators are categorized into "drivers" which exert "pressures" on the "state" of the system, which in turn results in certain "impacts" that will lead to various "responses" to maintain or recover the system under consideration. This is followed by the organization of available data, and the suggestion of procedures to collect missing data for future analysis. Since its formulation in the late 1990s, it has been widely adopted by international organizations for ecosystem-based study in various fields like biodiversity, soil erosion, and groundwater depletion and contamination. In recent times, the framework has been used in combination with other analytical methods and models to compensate for its shortcomings. It is employed to evaluate environmental changes in ecosystems, identify the social and economic pressures on a system, predict potential challenges, and improve management practices. The flexibility and general applicability of the framework make it a resilient tool that can be applied in social, economic, and institutional domains as well.
History
The Driver-Pressure-State-Impact-Response framework was developed by the European Environment Agency (EEA) in 1999. It was built upon several existing environmental reporting frameworks, like the Pressure-State-Response (PSR) framework developed by the Organization for Economic Co-operation and Development (OECD) in 1993, which itself was an extension of Rapport and Friend's Stress-Response (SR) framework (1979). The PSR framework simplified environmental problems and solutions into variables that stress the cause-effect relationship between human activities that exert pressure on the environment, the state of the environment, and society's response to the condition. Since it focused on anthropocentric pressures and responses, it did not effectively factor natural variability into the pressure category. This led to the development of the expanded Driving Force-State-Response (DSR) framework, by the United Nations Commission on Sustainable Development (CSD) in 1997. A primary modification was the expansion of the concept of “pressure” to include social, political, economic, demographic, and natural system pressures. However, by replacing “pressure” with “driving force”, the model failed to account for the underlying reasons for the pressure, much like its antecedent. It also did not address the motivations behind responses to changes in the state of the environment. The refined DPSIR model sought to address these shortcomings of its predecessors by addressing root causes of the human activities that impact the environment, by incorporating natural variability as a pressure on the current state and addressing responses to the impact of changes in state on human well-being. Unlike PSR and DSR, DPSIR is not a model, but a means of classifying and disseminating information related to environmental challenges. Since its conception, it has evolved into modified frameworks like Driver-Pressure-Chemical State-Ecological State-Response (DPCER), Driver-Pressure-State-Welfare-Response (DPSWR), and Driver-Pressure-State-Ecosystem-Response (DPSER).
The DPSIR Framework
Driver (Driving Force)
Driver refers to the social, demographic, and economic developments which influence the human activities that have a direct impact on the environment. They can further be subdivided into primary and secondary driving forces. Primary driving forces refer to technological and societal actors that motivate human activities like population growth and distribution of wealth. The developments induced by these drivers give rise to secondary driving forces, which are human activities triggering “pressures” and “impacts”, like land-use changes, urban expansion and industrial developments. Drivers can also be identified as underlying or immediate, physical or socio-economic, and natural or anthropogenic, based on the scope and sector in which they are being used.
Pressure
Pressure represents the consequence of the driving force, which in turn affects the state of the environment. Pressures are usually depicted as unwanted and negative, based on the concept that any change in the environment caused by human activities is damaging and degrading. Pressures can have effects in the short run (e.g.: deforestation) or the long run (e.g.: climate change), which, if known with sufficient certainty, can be expressed as a probability. They can be both human-induced, like emissions, fuel extraction, and solid waste generation, and natural processes, like solar radiation and volcanic eruptions. Pressures can also be sub-categorized as endogenic managed pressures, when they stem from within the system and can be controlled (e.g.: land claim, power generation), and as exogenic unmanaged pressures, when they stem from outside the system and cannot be controlled (e.g.: climate change, geomorphic activities).
State
State describes the physical, chemical and biological condition of the environment or observable temporal changes in the system. It may refer to natural systems (e.g.: atmospheric CO2 concentrations, temperature), socio-economic systems (e.g.: living conditions of humans, economic situations of an industry), or a combination of both (e.g.: number of tourists, size of current population). It includes a wide range of features, like physico-chemical characteristics of ecosystems, quantity and quality of resources or “carrying capacity”, management of fragile species and ecosystems, living conditions for humans, and exposure or the effects of pressures on humans. It is not intended to just be static, but to reflect current trends as well, like increasing eutrophication and change in biodiversity.
Impact
Impact refers to how changes in the state of the system affect human well-being. It is often measured in terms of damages to the environment or human health, like migration, poverty, and increased vulnerability to diseases, but can also be identified and quantified without any positive or negative connotation, by simply indicating a change in the environmental parameters. Impact can be ecologic (e.g.: reduction of wetlands, biodiversity loss), socio-economic (e.g.: reduced tourism), or a combination of both. Its definition may vary depending on the discipline and methodology applied. For instance, it refers to the effect on living beings and non-living domains of ecosystems in biosciences (e.g.: modifications in the chemical composition of air or water), whereas it is associated with the effects on human systems related to changes in the environmental functions in socio-economic sciences (e.g.: physical and mental health).
Response
Response refers to actions taken to correct the problems of the previous stages, by adjusting the drivers, reducing the pressure on the system, bringing the system back to its initial state, and mitigating the impacts. It can be associated uniquely with policy action, or to different levels of the society, including groups and/or individuals from the private, government or non-governmental sectors. Responses are mostly designed and/or implemented as political actions of protection, mitigation, conservation, or promotion. A mix of effective top-down political action and bottom-up social awareness can also be developed as responses, such as eco-communities or improved waste recycling rates.
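As a purely illustrative sketch of how information can be organized under the five categories, the following Python snippet classifies indicators for a hypothetical coastal eutrophication case; the indicator names and their groupings are assumptions made for illustration, not part of any official DPSIR application.

```python
# Hypothetical DPSIR classification of indicators for a coastal eutrophication
# problem; the indicator names and groupings are illustrative assumptions only.
dpsir_indicators = {
    "Driver":   ["population growth in the catchment", "intensification of agriculture"],
    "Pressure": ["nitrogen and phosphorus loads in river runoff"],
    "State":    ["nutrient concentration in coastal waters", "chlorophyll-a level"],
    "Impact":   ["algal blooms reducing fisheries yield and tourism revenue"],
    "Response": ["fertilizer regulation", "upgraded wastewater treatment"],
}

for category, indicators in dpsir_indicators.items():
    print(f"{category}: {', '.join(indicators)}")
```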
Criticisms and Limitations
Despite the adaptability of the framework, it has faced several criticisms. One of the main goals of the framework is to provide environmental managers, scientists of various disciplines, and stakeholders with a common forum and language to identify, analyze and assess environmental problems and consequences. However, several notable authors have mentioned that it lacks a well-defined set of categories, which undermines the comparability between studies, even if they are similar. For instance, climate change can be considered as a natural driver, but is primarily caused by greenhouse gases (GHG) produced by human activities, which may be categorized under “pressure”. A wastewater treatment plant is considered a response while dealing with water pollution, but a pressure when effluent runoff leading to eutrophication is taken into account. This ambivalence of the variables associated with the framework has been criticized as hindering good communication between researchers, and between stakeholders and policymakers. Another criticism is the misguiding simplicity of the framework, which ignores the complex synergy between the categories. For instance, an impact can be caused by various different state conditions and responses to other impacts, which is not addressed by DPSIR. Some authors also argue that the framework is flawed as it does not clearly illustrate the cause-effect linkage for environmental problems. The reasons behind these contextual differences seem to be differences in opinions, characteristics of specific case studies, misunderstanding of the concepts, and inadequate knowledge of the system under consideration.
DPSIR was initially proposed by global organizations as a conceptual framework rather than as practical guidance. This means that at a local level, analyses using the framework can cause some significant problems. DPSIR does not encourage the examination of locally specific attributes for individual decisions, which, when aggregated, could have potentially large impacts on sustainability. For instance, a farmer who chooses a particular way of livelihood may not create any consequential alterations on the system, but the aggregation of farmers making similar choices will have a measurable and tangible effect. Any efforts to evaluate sustainability without considering local knowledge could lead to misrepresentations of local situations, misunderstandings of what works in particular areas, and even project failure.
While there is no explicit hierarchy of authority in the DPSIR framework, the power difference between “developers” and the “developing” could be perceived as a contributor to the lack of focus on local, informal responses at the scale of drivers and pressures, thus compromising the validity of any analysis conducted using it. The “developers” refer to the Non-Governmental Organizations (NGOs), State mechanisms and other international organizations with the privilege to access various resources and the power to use knowledge to change the world, and the “developing” refers to local communities. According to this criticism, the latter is less capable of responding to environmental problems than the former. This undermines valuable indigenous knowledge about various components of the framework in a particular region, since the inclusion of that knowledge is almost exclusively left at the discretion of the “developers”.
Another limitation of the framework is the exclusion of the effects of social and economic developments on the environment, particularly for future scenarios. Furthermore, DPSIR does not explicitly prioritize responses and fails to determine the effectiveness of each response individually when working with complex systems. This has been one of the most criticized drawbacks of the framework, since it fails to capture the dynamic nature of real-world problems, which cannot be expressed by simple causal relations.
Applications
Despite its criticisms, DPSIR continues to be widely used to frame and assess environmental problems and to identify appropriate responses. Its main objective is to support the sustainable management of natural resources. DPSIR structures indicators related to the environmental problem being addressed with reference to political objectives, and its focus on supposed causal relationships appeals to policy actors. Some examples include the assessment of the pressure of alien species, evaluation of the impacts of developmental activities on the coastal environment and society, identification of economic elements affecting global wildfire activities, and cost-benefit analysis (CBA) and gross domestic product (GDP) correction.
To compensate for its shortcomings, DPSIR is also used in conjunction with several analytical methods and models. It has been used in conjunction with Multiple-Criteria Decision Making (MCDM) for desertification risk management, with the Analytic Hierarchy Process (AHP) to study urban green electricity power, and with the Tobit model to assess freshwater ecosystems. The framework itself has also been modified to assess specific systems, like DPSWR, which focuses on the impacts on human welfare alone by shifting ecological impact to the state category. Another approach is a differential DPSIR (ΔDPSIR), which evaluates the changes in drivers, pressures and state after implementing a management response, making it valuable both as a scientific output and as a system management tool. The flexibility offered by the framework makes it an effective tool with numerous applications, provided the system is properly studied and understood by the stakeholders.
References
External links
DPSIR-Model of the European Environment Agency (EEA)
Environmental terminology
Industrial ecology
VRIO
VRIO (value, rarity, imitability, and organization) is a business analysis framework for strategic management. As a form of internal analysis, VRIO evaluates all the resources and capabilities of a firm. It was first proposed by Jay Barney in 1991.
VRIO is an initialism for the four-question framework asked about a resource or capability to determine its competitive potential:
The question of value: Is this resource or capability valuable to the firm?
The question of rarity: Is control of the resource or capability limited?
The question of imitability: Is there a significant cost disadvantage to a firm obtaining or developing the resource or capability?
The question of organization (ability to exploit the resource or capability): "Is the firm organized, ready, and able to exploit the resource/capability?" "Is the firm organized to capture value?"
Overview
Value
The question of value is whether the resource or capability is valuable to the firm, where the definition of valuable is whether the resource or capability works to exploit an opportunity or mitigate a threat in the marketplace. Generally, this exploitation of opportunity or mitigation of threat will result in an increase in revenues or a decrease in costs. Occasionally, some resources or capabilities could be considered strengths in one industry and weaknesses in a different one.
Six common examples of opportunities firms could attempt to exploit are:
technological change,
demographic change,
cultural change,
economic climate,
specific international events,
legal and political conditions.
Furthermore, five threats that a resource or capability could mitigate are:
the threat of buyers,
threat of suppliers,
threat of entry,
threat of rivalry,
threat of substitutes.
The identification of possibly valuable resources or capabilities can be done by looking into a company's value chain and asking whether the company's assets allow it to operate more effectively in parts of that value chain.
Rarity
Having rarity in a firm can lead to competitive advantage. Rarity is when a firm has a valuable resource or capability that is absolutely unique among a set of current and potential competitors. A firm's resources and capabilities must both be short in supply and persist over time to be a source of sustained competitive advantage. If these conditions are not met, then the firm's resources and capabilities cannot maintain a sustained competitive advantage. If a resource is not rare, then perfect competition dynamics are likely to be observed.
Imitability
The primary question of imitability asked in the VRIO framework in internal analysis is: “Do firms without a resource or capability face a cost disadvantage in obtaining or developing it compared to firms that already possess it?”
Firms with valuable and rare resources, which are hard to imitate by other firms, can gain the first-mover advantages in the market and can hence gain competitive advantage.
A firm can either exploit an external opportunity or neutralize an external threat by using rare and valuable resources. When the firm's competitors discover this competitive advantage, they may either ignore the profit gained from it and continue to operate in their old ways, or analyze and duplicate the competitive strategy of their rival. If there is little cost in obtaining the rare and valuable resource, other firms can imitate the competitive advantage to gain competitive parity. However, sometimes it is hard for other firms to get access to the resources and imitate the innovative company's strategy. As a result, innovative companies that implement strategies based on costly-to-imitate and valuable resources can gain long-term competitive advantage.
Forms of imitation
In most cases, imitation appears in two ways: direct duplication or substitution. After observing other firms' competitive advantage, a firm can directly imitate the resource possessed by the innovative firm. If the cost to imitate is high, the competitive advantage will be sustained; if not, the competitive advantage will be temporary. Alternatively, an imitating firm can attempt to use a substitute in order to gain a competitive advantage similar to that of the innovative firm.
Cost of imitation
The cost of imitating a competitive advantage is usually high for the following reasons:
Unique Historical Conditions – an innovative firm gains low-cost access to rare resources in a particular time and space,
Causal Ambiguity – an imitating firm cannot identify the factors that lead to the competitive advantage of an innovative firm,
Social Complexity – when the resources involved in gaining competitive advantage are based on interpersonal relationships, culture and other social factors,
Patents – a source of long-term competitive advantage certified by an authority in a few industries such as pharmaceuticals.
Organization
If a company is successfully organized, it can enjoy a period of sustained competitive advantage. Components of successful organization include formal reporting structures, management control systems and compensation policies.
Formal reporting structures are simply a description of who in the firm reports to whom.
Management control systems include both formal and informal means to make sure that managers' decisions align with a firm's strategies. Formal control systems can consist of budgeting and reporting activities that keep top management informed of decisions made by employees lower down in the firm. Informal controls can include a company's culture and encouraging employees to monitor each other.
Firms incentivize their employees to behave in a desired way through compensation policies. These policies can include bonuses, stocks or salary increases, but can also include non-monetary incentives such as additional vacation days or a larger office.
These components of organization are known as complementary capabilities and resources because alone they do not provide much value. However, in combination with a firm's other resources and capabilities, it can result in sustained competitive advantage.
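A minimal sketch of how the four VRIO questions are commonly chained into a decision sequence is given below, following the usual textbook reading of the framework; the example resource is hypothetical, and the outcome labels are only one common way of naming the competitive implications.

```python
# Sketch of the usual VRIO decision sequence: each question is asked only if
# the previous answer was "yes". The outcome labels follow the common
# textbook presentation of the framework.
def vrio(valuable: bool, rare: bool, costly_to_imitate: bool, organized: bool) -> str:
    if not valuable:
        return "competitive disadvantage"
    if not rare:
        return "competitive parity"
    if not costly_to_imitate:
        return "temporary competitive advantage"
    if not organized:
        return "unused competitive advantage"
    return "sustained competitive advantage"

# Hypothetical resource: a patented process the firm is organized to exploit.
print(vrio(valuable=True, rare=True, costly_to_imitate=True, organized=True))
```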
See also
PEST analysis
SWOT analysis
Management
Strategic management
Strategic planning
System dynamics
Resource-based view
References
Barney, Jay B and Hesterly, William S. Strategic Management and Competitive Advantage: Concepts. 2005 Pearson Education, Inc., Upper Saddle River, New Jersey, 07458.
Barney, J.B. (1991). "Firm resources and sustained competitive advantage." Journal of Management, 17, pp. 99–120.
Hill, C.W.L., and G.R. Jones (1998). Strategic Management Theory: An Integrated Approach, 4th. Boston: Houghton Mifflin.
Barney, J. B., & Hesterly, W. S. (2010). VRIO Framework. In Strategic Management and Competitive Advantage (pp. 68–86). New Jersey: Pearson.
Business intelligence terms
Management theory
Social Progress Index
The Social Progress Index (SPI) measures the extent to which countries provide for the social and environmental needs of their citizens. Fifty-four indicators in the areas of basic human needs, foundations of well-being, and opportunity to progress show the relative performance of nations. The index is published by the nonprofit Social Progress Imperative, and is based on the writings of Amartya Sen, Douglass North, and Joseph Stiglitz. The SPI measures the well-being of a society by observing social and environmental outcomes directly rather than economic factors. The social and environmental factors include wellness (including health, shelter and sanitation), equality, inclusion, sustainability and personal freedom and safety.
Introduction and methodology
The index combines three dimensions:
Basic human needs
Foundations of well-being
Opportunity
Each dimension includes four components, which are each composed of between three and five specific outcome indicators. The included indicators are selected because they are measured appropriately, with a consistent methodology, by the same organization across all (or essentially all) of the countries in the sample. Together, this framework aims to capture a broad range of interrelated factors revealed by the scholarly literature and practitioner experience as underpinning social progress.
Two key features of the Social Progress Index are:
the exclusion of economic variables
the use of outcome measures rather than inputs
The Social Progress Imperative evaluated hundreds of possible indicators while developing the Social Progress Index, including engaging researchers at the Massachusetts Institute of Technology (MIT) to determine what indicators best differentiated the performance of nations. The index uses outcome measures when sufficient data are available, or the closest possible proxies otherwise.
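As a structural sketch of the dimension-component-indicator hierarchy described above, the snippet below aggregates hypothetical, already-normalized indicator scores (0-100) by simple averaging. The component names are examples only, the scores are invented, and the published index uses a more elaborate weighting scheme, so this is not the official methodology.

```python
# Structural sketch of the SPI hierarchy with hypothetical 0-100 scores.
# Simple means are used purely to show how indicators roll up into components
# and dimensions; the published index uses a weighted aggregation instead.
scores = {
    "Basic Human Needs": {
        "Nutrition and Basic Medical Care": [92.0, 88.5, 90.1],
        "Water and Sanitation": [85.0, 79.4, 91.2],
    },
    "Foundations of Wellbeing": {
        "Access to Basic Knowledge": [95.3, 89.9, 93.0],
    },
    "Opportunity": {
        "Personal Rights": [70.2, 65.8, 74.4],
    },
}

def mean(values):
    return sum(values) / len(values)

dimension_scores = {
    dimension: mean([mean(indicators) for indicators in components.values()])
    for dimension, components in scores.items()
}
overall = mean(list(dimension_scores.values()))
print(dimension_scores)
print("overall score:", round(overall, 1))
```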
Social Progress Index Rankings
Data are for the year 2022.
Criticism
The index's measure of good governance has been criticized for using data biased against the Global South, and some critics have noted that many of the criteria are based on progressive Western values. There has also been debate on the relevance or accuracy of many of the measurements for gender equality. A 2016 survey of online users browsing the SPI website indicated that 34% of respondents found the data incomplete and/or inaccurate, primarily referencing environmental hazards, energy usage, specific health issues, employment availability and quality, income inequality, gender inequality, and corruption as areas not sufficiently taken into account.
From an econometric standpoint, the index appears to be similar to other efforts aimed at overcoming the limitations of traditional economic measures such as gross domestic product (GDP). A notable criticism is that although the Social Progress Index can be seen as a superset of indicators used by earlier econometric models such as the Gross National Well-being Index of 2005, Bhutan's Gross National Happiness Index of 2012, and the World Happiness Report of 2012, unlike them, it ignores measures of subjective life satisfaction and psychological well-being. Other critics point out that "there remain certain dimensions that are currently not included in the SPI. These are the concentration of wealth in the top 1 percent of the population, efficiency of the judicial system, and quality of the transportation infrastructure."
Some critics argue for caution. Though words such as "inclusive capitalism" are now bandied around increasingly to signal a new age, free from ideological battlegrounds between public and private, much of what the organization's founders say about it, in the view of critics, confirms that the index is more about "business inclusivity" than "inclusive capitalism".
See also
Broad measures of economic progress
Disability-adjusted life year
Economics
Green national product
Gender Development Index
Genuine progress indicator
Happiness economics
Happy Planet Index
Human Development Index
Progressive utilization theory
Legatum Prosperity Index
Leisure satisfaction
OECD Better Life Index
Postmaterialism
Psychometrics
Where-to-be-born Index
Wikiprogress
World Values Survey
References
External links
2013 establishments
International quality of life rankings
Macroeconomic indicators
Political concepts
Social science indices
Sustainability metrics and indices
Abiogenesis
Abiogenesis is the natural process by which life arises from non-living matter, such as simple organic compounds. The prevailing scientific hypothesis is that the transition from non-living to living entities on Earth was not a single event, but a process of increasing complexity involving the formation of a habitable planet, the prebiotic synthesis of organic molecules, molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes. The transition from non-life to life has never been observed experimentally, but many proposals have been made for different stages of the process.
The study of abiogenesis aims to determine how pre-life chemical reactions gave rise to life under conditions strikingly different from those on Earth today. It primarily uses tools from biology and chemistry, with more recent approaches attempting a synthesis of many sciences. Life functions through the specialized chemistry of carbon and water, and builds largely upon four key families of chemicals: lipids for cell membranes, carbohydrates such as sugars, amino acids for protein metabolism, and the nucleic acids DNA and RNA for the mechanisms of heredity. Any successful theory of abiogenesis must explain the origins and interactions of these classes of molecules.
Many approaches to abiogenesis investigate how self-replicating molecules, or their components, came into existence. Researchers generally think that current life descends from an RNA world, although other self-replicating and self-catalyzing molecules may have preceded RNA. Other approaches ("metabolism-first" hypotheses) focus on understanding how catalysis in chemical systems on the early Earth might have provided the precursor molecules necessary for self-replication. The classic 1952 Miller–Urey experiment demonstrated that most amino acids, the chemical constituents of proteins, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth. External sources of energy may have triggered these reactions, including lightning, radiation, atmospheric entries of micro-meteorites and implosion of bubbles in sea and ocean waves.
While the last universal common ancestor of all modern organisms (LUCA) is thought to have been quite different from the origin of life, investigations into LUCA can guide research into early universal characteristics. A genomics approach has sought to characterise LUCA by identifying the genes shared by Archaea and Bacteria, members of the two major branches of life (with Eukaryotes included in the archaean branch in the two-domain system). It appears there are 355 genes common to all life; their functions imply that the LUCA was anaerobic with the Wood–Ljungdahl pathway, deriving energy by chemiosmosis, and maintaining its hereditary material with DNA, the genetic code, and ribosomes. Although the LUCA lived over 4 billion years ago (4 Gya), researchers believe it was far from the first form of life. Earlier cells might have had a leaky membrane and been powered by a naturally occurring proton gradient near a deep-sea white smoker hydrothermal vent.
Earth remains the only place in the universe known to harbor life. Geochemical and fossil evidence from the Earth informs most studies of abiogenesis. The Earth was formed 4.54 Gya, and the earliest evidence of life on Earth dates from at least 3.8 Gya, from Western Australia. Some studies have suggested that fossil micro-organisms may have lived within hydrothermal vent precipitates dated 3.77 to 4.28 Gya from Quebec, soon after ocean formation 4.4 Gya during the Hadean.
Overview
Life consists of reproduction with (heritable) variations. NASA defines life as "a self-sustaining chemical system capable of Darwinian [i.e., biological] evolution." Such a system is complex; the last universal common ancestor (LUCA), presumably a single-celled organism which lived some 4 billion years ago, already had hundreds of genes encoded in the DNA genetic code that is universal today. That in turn implies a suite of cellular machinery including messenger RNA, transfer RNA, and ribosomes to translate the code into proteins. Those proteins included enzymes to operate its anaerobic respiration via the Wood–Ljungdahl metabolic pathway, and a DNA polymerase to replicate its genetic material.
The challenge for abiogenesis (origin of life) researchers is to explain how such a complex and tightly interlinked system could develop by evolutionary steps, as at first sight all its parts are necessary to enable it to function. For example, a cell, whether the LUCA or in a modern organism, copies its DNA with the DNA polymerase enzyme, which is in turn produced by translating the DNA polymerase gene in the DNA. Neither the enzyme nor the DNA can be produced without the other. The evolutionary process could have involved molecular self-replication, self-assembly such as of cell membranes, and autocatalysis via RNA ribozymes. Nonetheless, the transition of non-life to life has never been observed experimentally, nor has there been a satisfactory chemical explanation.
The preconditions to the development of a living cell like the LUCA are clear enough, though disputed in their details: a habitable world is formed with a supply of minerals and liquid water. Prebiotic synthesis creates a range of simple organic compounds, which are assembled into polymers such as proteins and RNA. On the other side, the process after the LUCA is readily understood: biological evolution caused the development of a wide range of species with varied forms and biochemical capabilities. However, the derivation of living things such as LUCA from simple components is far from understood.
Although Earth remains the only place where life is known, the science of astrobiology seeks evidence of life on other planets. The 2015 NASA strategy on the origin of life aimed to solve the puzzle by identifying interactions, intermediary structures and functions, energy sources, and environmental factors that contributed to the diversity, selection, and replication of evolvable macromolecular systems, and mapping the chemical landscape of potential primordial informational polymers. The advent of polymers that could replicate, store genetic information, and exhibit properties subject to selection was, it suggested, most likely a critical step in the emergence of prebiotic chemical evolution. Those polymers derived, in turn, from simple organic compounds such as nucleobases, amino acids, and sugars that could have been formed by reactions in the environment. A successful theory of the origin of life must explain how all these chemicals came into being.
Pre-1960s conceptual history
Spontaneous generation
One ancient view of the origin of life, from Aristotle until the 19th century, is of spontaneous generation. This theory held that "lower" animals such as insects were generated by decaying organic substances, and that life arose by chance. This was questioned from the 17th century, in works like Thomas Browne's Pseudodoxia Epidemica. In 1665, Robert Hooke published the first drawings of a microorganism. In 1676, Antonie van Leeuwenhoek drew and described microorganisms, probably protozoa and bacteria. Van Leeuwenhoek disagreed with spontaneous generation, and by the 1680s convinced himself, using experiments ranging from sealed and open meat incubation and the close study of insect reproduction, that the theory was incorrect. In 1668 Francesco Redi showed that no maggots appeared in meat when flies were prevented from laying eggs. By the middle of the 19th century, spontaneous generation was considered disproven.
Panspermia
Another ancient idea dating back to Anaxagoras in the 5th century BC is panspermia, the idea that life exists throughout the universe, distributed by meteoroids, asteroids, comets and planetoids. It does not attempt to explain how life itself originated, but shifts the origin of life on Earth to another heavenly body. The advantage is that life is not required to have formed on each planet it occurs on, but rather in a more limited set of locations, or even a single location, and then to have spread about the galaxy to other star systems via cometary or meteorite impact. Panspermia did not get much scientific support because it was largely used to deflect the need for an answer instead of explaining observable phenomena. Although interest in panspermia grew when the study of meteorites found traces of organic materials in them, it is currently accepted that life started locally on Earth.
"A warm little pond": primordial soup
The idea that life originated from non-living matter in slow stages appeared in Herbert Spencer's 1864–1867 book Principles of Biology, and in William Turner Thiselton-Dyer's 1879 paper "On spontaneous generation and evolution". On 1 February 1871 Charles Darwin wrote about these publications to Joseph Hooker, and set out his own speculation, suggesting that the original spark of life may have begun in a "warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, &c., present, that a compound was chemically formed ready to undergo still more complex changes." Darwin went on to explain that "at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."
Alexander Oparin in 1924 and J. B. S. Haldane in 1929 proposed that the first molecules constituting the earliest cells slowly self-organized from a primordial soup, and this theory is called the Oparin–Haldane hypothesis. Haldane suggested that the Earth's prebiotic oceans consisted of a "hot dilute soup" in which organic compounds could have formed. J. D. Bernal showed that such mechanisms could form most of the necessary molecules for life from inorganic precursors. In 1967, he suggested three "stages": the origin of biological monomers; the origin of biological polymers; and the evolution from molecules to cells.
Miller–Urey experiment
In 1952, Stanley Miller and Harold Urey carried out a chemical experiment to demonstrate how organic molecules could have formed spontaneously from inorganic precursors under prebiotic conditions like those posited by the Oparin–Haldane hypothesis. It used a highly reducing (lacking oxygen) mixture of gases—methane, ammonia, and hydrogen, as well as water vapor—to form simple organic monomers such as amino acids. Bernal said of the Miller–Urey experiment that "it is not enough to explain the formation of such molecules, what is necessary, is a physical-chemical explanation of the origins of these molecules that suggests the presence of suitable sources and sinks for free energy." However, current scientific consensus describes the primitive atmosphere as weakly reducing or neutral, diminishing the amount and variety of amino acids that could be produced. The addition of iron and carbonate minerals, present in early oceans, however, produces a diverse array of amino acids. Later work has focused on two other potential reducing environments: outer space and deep-sea hydrothermal vents.
Producing a habitable Earth
Evolutionary history
Early universe with first stars
Soon after the Big Bang, which occurred roughly 14 Gya, the only chemical elements present in the universe were hydrogen, helium, and lithium, the three lightest atoms in the periodic table. These elements gradually accreted and began orbiting in disks of gas and dust. Gravitational accretion of material at the hot and dense centers of these protoplanetary disks formed stars by the fusion of hydrogen. Early stars were massive and short-lived, producing all the heavier elements through stellar nucleosynthesis. Element formation through stellar nucleosynthesis proceeds up to the most stable element, iron-56. Heavier elements were formed during supernovae at the end of a star's lifecycle. Carbon, currently the fourth most abundant chemical element in the universe (after hydrogen, helium, and oxygen), was formed mainly in white dwarf stars, particularly those bigger than twice the mass of the sun. As these stars reached the end of their lifecycles, they ejected these heavier elements, among them carbon and oxygen, throughout the universe. These heavier elements allowed for the formation of new objects, including rocky planets and other bodies. According to the nebular hypothesis, the formation and evolution of the Solar System began 4.6 Gya with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
Emergence of Earth
The age of the Earth is 4.54 Gya as found by radiometric dating of calcium-aluminium-rich inclusions in carbonaceous chondrite meteorites, the oldest material in the Solar System. The Hadean Earth (from its formation until 4 Gya) was at first inhospitable to any living organisms. During its formation, the Earth lost a significant part of its initial mass, and consequently lacked the gravity to hold molecular hydrogen and the bulk of the original inert gases. Soon after initial accretion of Earth at 4.48 Ga, its collision with Theia, a hypothesised impactor, is thought to have created the ejected debris that would eventually form the Moon. This impact would have removed the Earth's primary atmosphere, leaving behind clouds of viscous silicates and carbon dioxide. This unstable atmosphere was short-lived and condensed shortly after to form the bulk silicate Earth, leaving behind an atmosphere largely consisting of water vapor, nitrogen, and carbon dioxide, with smaller amounts of carbon monoxide, hydrogen, and sulfur compounds. The solution of carbon dioxide in water is thought to have made the seas slightly acidic, with a pH of about 5.5.
Condensation to form liquid oceans is theorised to have occurred as early as the Moon-forming impact. This scenario has found support from the dating of 4.404 Gya zircon crystals with high δ18O values from metamorphosed quartzite of Mount Narryer in Western Australia. The Hadean atmosphere has been characterized as a "gigantic, productive outdoor chemical laboratory," similar to volcanic gases today which still support some abiotic chemistry. Despite the likely increased volcanism from early plate tectonics, the Earth may have been a predominantly water world between 4.4 and 4.3 Gya. It is debated whether or not crust was exposed above this ocean due to uncertainties of what early plate tectonics looked like. For early life to have developed, it is generally thought that a land setting is required, so this question is essential to determining when in Earth's history life evolved. The post-Moon-forming impact Earth likely existed with little if any continental crust, a turbulent atmosphere, and a hydrosphere subject to intense ultraviolet light from a T Tauri stage Sun, from cosmic radiation, and from continued asteroid and comet impacts. Despite all this, niche environments likely existed conducive to life on Earth in the Late-Hadean to Early-Archaean.
The Late Heavy Bombardment hypothesis posits that a period of intense impact occurred at ~3.9 Gya during the Hadean. A cataclysmic impact event would have had the potential to sterilise all life on Earth by volatilising liquid oceans and blocking the Sun needed for photosynthesising primary producers, pushing back the earliest possible emergence of life to after Late Heavy Bombardment. Recent research questions both the intensity of the Late Heavy Bombardment as well as its potential for sterilisation. Uncertainties as to whether Late Heavy Bombardment was one giant impact or a period of greater impact rates greatly changed the implication of its destructive power. The 3.9 Ga date arises from dating of Apollo mission sample returns collected mostly near the Imbrium Basin, biasing the age of recorded impacts. Impact modelling of the lunar surface reveals that rather than a cataclysmic event at 3.9 Ga, multiple small-scale, short-lived periods of bombardment likely occurred. Terrestrial data backs this idea by showing multiple periods of ejecta in the rock record both before and after the 3.9 Ga marker, suggesting that the early Earth was subject to continuous impacts that would not have had as great an impact on extinction as previously thought. If the Late Heavy Bombardment did not take place, this allows for the emergence of life to have taken place far before 3.9 Ga.
If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from late impacts and the then high levels of ultraviolet radiation from the sun. Geothermically heated oceanic crust could have yielded far more organic compounds through deep hydrothermal vents than the Miller–Urey experiments indicated. The available energy is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live.
Earliest evidence of life
The exact timing at which life emerged on Earth is unknown. Minimum age estimates are based on evidence from the geologic rock record. The earliest physical evidence of life so far found consists of microbialites in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in banded iron formation rocks at least 3.77 and possibly as old as 4.28 Gya. The micro-organisms lived within hydrothermal vent precipitates, soon after the 4.4 Gya formation of oceans during the Hadean. The microbes resembled modern hydrothermal vent bacteria, supporting the view that abiogenesis began in such an environment.
Biogenic graphite has been found in 3.7 Gya metasedimentary rocks from southwestern Greenland and in microbial mat fossils from 3.49 Gya cherts in the Pilbara region of Western Australia. Evidence of early life in rocks from Akilia Island, near the Isua supracrustal belt in southwestern Greenland, dating to 3.7 Gya, has shown biogenic carbon isotopes. In other parts of the Isua supracrustal belt, graphite inclusions trapped within garnet crystals are connected to the other elements of life: oxygen, nitrogen, and possibly phosphorus in the form of phosphate, providing further evidence for life 3.7 Gya. In the Pilbara region of Western Australia, compelling evidence of early life was found in pyrite-bearing sandstone in a fossilized beach, with rounded tubular cells that oxidized sulfur by photosynthesis in the absence of oxygen. Carbon isotope ratios on graphite inclusions from the Jack Hills zircons suggest that life could have existed on Earth from 4.1 Gya.
The Pilbara region of Western Australia contains the Dresser Formation with rocks 3.48 Gya, including layered structures called stromatolites. Their modern counterparts are created by photosynthetic micro-organisms including cyanobacteria. These lie within undeformed hydrothermal-sedimentary strata; their texture indicates a biogenic origin. Parts of the Dresser formation preserve hot springs on land, but other regions seem to have been shallow seas. A molecular clock analysis suggests the LUCA emerged prior to the Late Heavy Bombardment (3.9 Gya).
Producing molecules: prebiotic synthesis
All chemical elements except for hydrogen and helium derive from stellar nucleosynthesis. The basic chemical ingredients of life – the carbon-hydrogen molecule (CH), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+) – were produced by ultraviolet light from stars. Complex molecules, including organic molecules, form naturally both in space and on planets. Organic molecules on the early Earth could have had either terrestrial origins, with organic molecule synthesis driven by impact shocks or by other energy sources, such as ultraviolet light, redox coupling, or electrical discharges; or extraterrestrial origins (pseudo-panspermia), with organic molecules formed in interstellar dust clouds raining down on to the planet.
Observed extraterrestrial organic molecules
An organic compound is a chemical whose molecules contain carbon. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Organic compounds are relatively common in space, formed by "factories of complex molecular synthesis" which occur in molecular clouds and circumstellar envelopes, and chemically evolve after reactions are initiated mostly by ionizing radiation. Purine and pyrimidine nucleobases including guanine, adenine, cytosine, uracil, and thymine have been found in meteorites. These could have provided the materials for DNA and RNA to form on the early Earth. The amino acid glycine was found in material ejected from comet Wild 2; it had earlier been detected in meteorites. Comets are encrusted with dark material, thought to be a tar-like organic substance formed from simple carbon compounds under ionizing radiation. A rain of material from comets could have brought such complex organic molecules to Earth. It is estimated that during the Late Heavy Bombardment, meteorites may have delivered up to five million tons of organic prebiotic elements to Earth per year.
PAH world hypothesis
Polycyclic aromatic hydrocarbons (PAH) are the most common and abundant polyatomic molecules in the observable universe, and are a major store of carbon. They seem to have formed shortly after the Big Bang, and are associated with new stars and exoplanets. They are a likely constituent of Earth's primordial sea. PAHs have been detected in nebulae, and in the interstellar medium, in comets, and in meteorites.
The PAH world hypothesis posits PAHs as precursors to the RNA world. A star, HH 46-IR, resembling the sun early in its life, is surrounded by a disk of material which contains molecules including cyanide compounds, hydrocarbons, and carbon monoxide. PAHs in the interstellar medium can be transformed through hydrogenation, oxygenation, and hydroxylation to more complex organic compounds used in living cells.
Nucleobases and nucleotides
The majority of organic compounds introduced on Earth by interstellar dust particles have helped to form complex molecules, thanks to their peculiar surface-catalytic activities. Studies of the 12C/13C isotopic ratios of organic compounds in the Murchison meteorite suggest that the RNA component uracil and related molecules, including xanthine, were formed extraterrestrially. NASA studies of meteorites suggest that all four DNA nucleobases, including adenine and guanine, along with related organic molecules, have been formed in outer space. The cosmic dust permeating the universe contains complex organics ("amorphous organic solids with a mixed aromatic–aliphatic structure") that could be created rapidly by stars. Glycolaldehyde, a sugar molecule and RNA precursor, has been detected in regions of space including around protostars and on meteorites.
Laboratory synthesis
As early as the 1860s, experiments demonstrated that biologically relevant molecules can be produced from interaction of simple carbon sources with abundant inorganic catalysts. The spontaneous formation of complex polymers from abiotically generated monomers under the conditions posited by the "soup" theory is not straightforward. Besides the necessary basic organic monomers, compounds that would have prohibited the formation of polymers were also formed in high concentration during the Miller–Urey and Joan Oró experiments. Biology uses essentially 20 amino acids for its coded protein enzymes, representing a very small subset of the structurally possible products. Since life tends to use whatever is available, an explanation is needed for why the set used is so small. Formamide is attractive as a medium that potentially provided a source of amino acid derivatives from simple aldehyde and nitrile feedstocks.
Sugars
Alexander Butlerov showed in 1861 that the formose reaction created sugars including tetroses, pentoses, and hexoses when formaldehyde is heated under basic conditions with divalent metal ions like calcium. R. Breslow proposed that the reaction was autocatalytic in 1959.
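A simplified version of the autocatalytic cycle Breslow proposed can be sketched as follows; the real reaction network is far more tangled and also yields many branched and higher sugars, so this is only an illustration of the autocatalysis, not a full mechanism.

```latex
% Simplified Breslow cycle for the formose reaction (schematic only)
\begin{align*}
\text{glycolaldehyde} + \text{HCHO} &\longrightarrow \text{glyceraldehyde}\\
\text{glyceraldehyde} &\rightleftharpoons \text{dihydroxyacetone}\\
\text{dihydroxyacetone} + \text{HCHO} &\longrightarrow \text{ketotetrose}\\
\text{ketotetrose} &\rightleftharpoons \text{aldotetrose}\\
\text{aldotetrose} &\longrightarrow 2\ \text{glycolaldehyde}
\end{align*}
```

Each turn of the cycle consumes two formaldehyde molecules and turns one glycolaldehyde into two, so the product accelerates its own formation, which is what the autocatalysis proposal amounts to.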
Nucleobases
Nucleobases, such as guanine and adenine, can be synthesized from simple carbon and nitrogen sources, such as hydrogen cyanide (HCN) and ammonia. Formamide produces all four ribonucleotides when warmed with terrestrial minerals. Formamide is ubiquitous in the Universe, produced by the reaction of water and HCN. It can be concentrated by the evaporation of water. HCN is poisonous only to aerobic organisms (eukaryotes and aerobic bacteria), which did not yet exist. It can play roles in other chemical processes such as the synthesis of the amino acid glycine.
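The overall stoichiometry of the classic adenine synthesis from hydrogen cyanide, demonstrated by Joan Oró in the early 1960s, makes the point compactly; the real process runs through several intermediates, such as the HCN tetramer diaminomaleonitrile, rather than in a single step:

```latex
5\,\mathrm{HCN} \;\longrightarrow\; \mathrm{C_5H_5N_5}\ \text{(adenine)}
```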
DNA and RNA components including uracil, cytosine and thymine can be synthesized under outer space conditions, using starting chemicals such as pyrimidine found in meteorites. Pyrimidine may have been formed in red giant stars or in interstellar dust and gas clouds. All four RNA-bases may be synthesized from formamide in high-energy density events like extraterrestrial impacts.
Other pathways for synthesizing bases from inorganic materials have been reported. Freezing temperatures are advantageous for the synthesis of purines, due to the concentrating effect for key precursors such as hydrogen cyanide. However, while adenine and guanine require freezing conditions for synthesis, cytosine and uracil may require boiling temperatures. Seven amino acids and eleven types of nucleobases formed in ice when ammonia and cyanide were left in a freezer for 25 years. S-triazines (alternative nucleobases), pyrimidines including cytosine and uracil, and adenine can be synthesized by subjecting a urea solution to freeze-thaw cycles under a reductive atmosphere, with spark discharges as an energy source. The explanation given for the unusual speed of these reactions at such a low temperature is eutectic freezing, which crowds impurities in microscopic pockets of liquid within the ice, causing the molecules to collide more often.
Peptides
Prebiotic peptide synthesis is proposed to have occurred through a number of possible routes. Some center on high temperature/concentration conditions in which condensation becomes energetically favorable, while others focus on the availability of plausible prebiotic condensing agents.
Experimental evidence for the formation of peptides in uniquely concentrated environments is bolstered by work suggesting that wet-dry cycles and the presence of specific salts can greatly increase spontaneous condensation of glycine into poly-glycine chains. Other work suggests that while mineral surfaces, such as those of pyrite, calcite, and rutile catalyze peptide condensation, they also catalyze their hydrolysis. The authors suggest that additional chemical activation or coupling would be necessary to produce peptides at sufficient concentrations. Thus, mineral surface catalysis, while important, is not sufficient alone for peptide synthesis.
Many prebiotically plausible condensing/activating agents have been identified, including the following: cyanamide, dicyanamide, dicyandiamide, diaminomaleonitrile, urea, trimetaphosphate, NaCl, CuCl2, (Ni,Fe)S, CO, carbonyl sulfide (COS), carbon disulfide (CS2), SO2, and diammonium phosphate (DAP).
An experiment reported in 2024 used a sapphire substrate with a web of thin cracks under a heat flow, similar to the environment of deep-ocean vents, as a mechanism to separate and concentrate prebiotically relevant building blocks from a dilute mixture, increasing their concentration by up to three orders of magnitude. The authors propose this as a plausible model for the origin of complex biopolymers. This presents another physical process that allows for concentrated peptide precursors to combine in the right conditions. A similar role of increasing amino acid concentration has been suggested for clays as well.
While all of these scenarios involve the condensation of amino acids, the prebiotic synthesis of peptides from simpler molecules such as CO, NH3 and C, skipping the step of amino acid formation, is very efficient.
Producing suitable vesicles
The largest unanswered question in evolution is how simple protocells first arose and differed in reproductive contribution to the following generation, thus initiating the evolution of life. The lipid world theory postulates that the first self-replicating object was lipid-like. Phospholipids form lipid bilayers in water while under agitation—the same structure as in cell membranes. These molecules were not present on early Earth, but other amphiphilic long-chain molecules also form membranes. These bodies may expand by insertion of additional lipids, and may spontaneously split into two offspring of similar size and composition. Lipid bodies may have provided sheltering envelopes for information storage, allowing the evolution and preservation of polymers like RNA that store information. Only one or two types of amphiphiles have been studied which might have led to the development of vesicles. There is an enormous number of possible arrangements of lipid bilayer membranes, and those with the best reproductive characteristics would have converged toward a hypercycle reaction, a positive feedback composed of two mutual catalysts represented by a membrane site and a specific compound trapped in the vesicle. Such site/compound pairs are transmissible to the daughter vesicles leading to the emergence of distinct lineages of vesicles, which would have allowed natural selection.
A protocell is a self-organized, self-ordered, spherical collection of lipids proposed as a stepping-stone to the origin of life. A functional protocell has (as of 2014) not yet been achieved in a laboratory setting. Self-assembled vesicles are essential components of primitive cells. The theory of classical irreversible thermodynamics treats self-assembly under a generalized chemical potential within the framework of dissipative systems. The second law of thermodynamics requires that overall entropy increases, yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate ordered life processes from chaotic non-living matter.
Irene Chen and Jack W. Szostak suggest that elementary protocells can give rise to cellular behaviors including primitive forms of differential reproduction, competition, and energy storage. Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even the phospholipids of today. Such micro-encapsulation would allow for metabolism within the membrane and the exchange of small molecules, while retaining large biomolecules inside. Such a membrane is needed for a cell to create its own electrochemical gradient to store energy by pumping ions across the membrane. Fatty acid vesicles in conditions relevant to alkaline hydrothermal vents can be stabilized by isoprenoids which are synthesized by the formose reaction; the advantages and disadvantages of isoprenoids incorporated within the lipid bilayer in different microenvironments might have led to the divergence of the membranes of archaea and bacteria.
Laboratory experiments have shown that vesicles can undergo an evolutionary process under pressure cycling conditions. Simulating the systemic environment in tectonic fault zones within the Earth's crust, pressure cycling leads to the periodic formation of vesicles. Under the same conditions, random peptide chains form and are continuously selected for their ability to integrate into the vesicle membrane. A further selection of the vesicles for their stability potentially leads to the development of functional peptide structures, associated with an increase in the survival rate of the vesicles.
Producing biology
Energy and entropy
Life requires a loss of entropy, or disorder, as molecules organize themselves into living matter. At the same time, the emergence of life is associated with the formation of structures beyond a certain threshold of complexity. The emergence of life with increasing order and complexity does not contradict the second law of thermodynamics, which states that overall entropy never decreases, since a living organism creates order in some places (e.g. its living body) at the expense of an increase of entropy elsewhere (e.g. heat and waste production).
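The bookkeeping behind that statement can be written out. For a process at temperature T that releases heat to the surroundings (standard thermodynamics, nothing specific to origin-of-life chemistry):

```latex
\Delta S_{\mathrm{universe}}
  = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}}
  = \Delta S_{\mathrm{system}} - \frac{\Delta H_{\mathrm{system}}}{T} \;\ge\; 0
```

So the system's entropy change may be negative (local ordering) as long as enough heat and waste are exported that the surroundings gain at least as much entropy; equivalently, the overall Gibbs free energy change, ΔG = ΔH - TΔS(system), must be negative or zero for the process to proceed spontaneously.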
Multiple sources of energy were available for chemical reactions on the early Earth. Heat from geothermal processes is a standard energy source for chemistry. Other examples include sunlight, lightning, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. This has been confirmed by experiments and simulations.
Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important for carbon fixation. Carbon fixation by reaction of CO2 with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.
Chemiosmosis
In 1961, Peter Mitchell proposed chemiosmosis as a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in the mitochondria of eukaryotes, making it a likely candidate for early life. Mitochondria produce adenosine triphosphate (ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which the ATP synthase enzyme is embedded. The energy required to release strongly bound ATP has its origin in protons that move across the membrane. In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater, or perhaps by meteoritic quinones, which could have supported chemiosmotic energy conversion across lipid membranes in a terrestrial setting.
The RNA world
The RNA world hypothesis describes an early Earth with self-replicating and catalytic RNA but no DNA or proteins. Many researchers concur that an RNA world must have preceded the DNA-based life that now dominates. However, RNA-based life may not have been the first to exist. Another model echoes Darwin's "warm little pond" with cycles of wetting and drying.
RNA is central to the translation process. Small RNAs can catalyze all the chemical groups and information transfers required for life. RNA both expresses and maintains genetic information in modern organisms; and the chemical components of RNA are easily synthesized under the conditions that approximated the early Earth, which were very different from those that prevail today. The structure of the ribosome has been called the "smoking gun", with a central core of RNA and no amino acid side chains within 18 Å of the active site that catalyzes peptide bond formation.
The concept of the RNA world was proposed in 1962 by Alexander Rich, and the term was coined by Walter Gilbert in 1986. There were initial difficulties in the explanation of the abiotic synthesis of the nucleotides cytosine and uracil. Subsequent research has shown possible routes of synthesis; for example, formamide produces all four ribonucleotides and other biological molecules when warmed in the presence of various terrestrial minerals.
RNA replicase can function as both code and catalyst for further RNA replication, i.e. it can be autocatalytic. Jack Szostak has shown that certain catalytic RNAs can join smaller RNA sequences together, creating the potential for self-replication. The RNA replication systems, which include two ribozymes that catalyze each other's synthesis, showed a doubling time of the product of about one hour, and were subject to natural selection under the experimental conditions. If such conditions were present on early Earth, then natural selection would favor the proliferation of such autocatalytic sets, to which further functionalities could be added. Self-assembly of RNA may occur spontaneously in hydrothermal vents. A preliminary form of tRNA could have assembled into such a replicator molecule.
Possible precursors to protein synthesis include the synthesis of short peptide cofactors or the self-catalysing duplication of RNA. It is likely that the ancestral ribosome was composed entirely of RNA, although some roles have since been taken over by proteins. Major remaining questions on this topic include identifying the selective force for the evolution of the ribosome and determining how the genetic code arose.
Eugene Koonin has argued that "no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system."
From RNA to directed protein synthesis
In line with the RNA world hypothesis, much of modern biology's templated protein biosynthesis is done by RNA molecules, namely tRNAs and the ribosome (consisting of both protein and rRNA components). The most central reaction of peptide bond synthesis is understood to be carried out by base catalysis by the 23S rRNA domain V. Experimental evidence has demonstrated successful di- and tripeptide synthesis with a system consisting of only aminoacyl phosphate adaptors and RNA guides, which could be a possible stepping stone between an RNA world and modern protein synthesis. Aminoacylation ribozymes that can charge tRNAs with their cognate amino acids have also been selected in in vitro experimentation. The authors also extensively mapped fitness landscapes within their selection, finding that the chance emergence of active sequences was more important than sequence optimization.
Early functional peptides
The first proteins would have had to arise without a fully-fledged system of protein biosynthesis. As discussed above, numerous mechanisms for the prebiotic synthesis of polypeptides exist. However, these random sequence peptides would not have likely had biological function. Thus, significant study has gone into exploring how early functional proteins could have arisen from random sequences. First, some evidence on hydrolysis rates shows that abiotically plausible peptides likely contained significant "nearest-neighbor" biases. This could have had some effect on early protein sequence diversity. In other work by Anthony Keefe and Jack Szostak, mRNA display selection on a library of 6×10^12 80-mers was used to search for sequences with ATP binding activity. They concluded that approximately 1 in 10^11 random sequences had ATP binding function. While this is a single example of functional frequency in the random sequence space, the methodology can serve as a powerful simulation tool for understanding early protein evolution.
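A quick back-of-the-envelope check on those figures (a sketch using only the numbers quoted above, not a reanalysis of the original experiment):

```python
# Expected number of ATP-binding sequences in the Keefe & Szostak-style screen,
# using the figures quoted in the text above (treated here as given).
library_size = 6e12       # random-sequence 80-mers screened
hit_frequency = 1e-11     # fraction of random sequences with ATP-binding activity

expected_hits = library_size * hit_frequency
print(f"expected functional sequences: {expected_hits:.0f}")   # about 60

# For scale: the fraction of all possible 80-mers (20 amino acids per position)
# that such a library can ever sample.
sampled_fraction = library_size / float(20 ** 80)
print(f"fraction of sequence space sampled: {sampled_fraction:.1e}")
```

On those assumptions the screen should recover on the order of sixty independent binders while sampling only a vanishing fraction (about 5×10^-92) of the possible 80-mer sequences, which is why the result is read as evidence that function is rare but findable in random sequence space.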
Phylogeny and LUCA
Starting with the work of Carl Woese from 1977, genomics studies have placed the last universal common ancestor (LUCA) of all modern life-forms between Bacteria and a clade formed by Archaea and Eukaryota in the phylogenetic tree of life. It lived over 4 Gya. A minority of studies have placed the LUCA in Bacteria, proposing that Archaea and Eukaryota are evolutionarily derived from within Eubacteria; Thomas Cavalier-Smith suggested in 2006 that the phenotypically diverse bacterial phylum Chloroflexota contained the LUCA.
In 2016, a set of 355 genes likely present in the LUCA was identified. A total of 6.1 million prokaryotic genes from Bacteria and Archaea were sequenced, identifying 355 protein clusters from among 286,514 protein clusters that were probably common to the LUCA. The results suggest that the LUCA was anaerobic with a Wood–Ljungdahl (reductive Acetyl-CoA) pathway, nitrogen- and carbon-fixing, thermophilic. Its cofactors suggest dependence upon an environment rich in hydrogen, carbon dioxide, iron, and transition metals. Its genetic material was probably DNA, requiring the 4-nucleotide genetic code, messenger RNA, transfer RNA, and ribosomes to translate the code into proteins such as enzymes. LUCA likely inhabited an anaerobic hydrothermal vent setting in a geochemically active environment. It was evidently already a complex organism, and must have had precursors; it was not the first living thing. The physiology of LUCA has been in dispute.
Leslie Orgel argued that early translation machinery for the genetic code would be susceptible to error catastrophe. Geoffrey Hoffmann however showed that such machinery can be stable in function against "Orgel's paradox". Metabolic reactions that have also been inferred in LUCA are the incomplete reverse Krebs cycle, gluconeogenesis, the pentose phosphate pathway, glycolysis, reductive amination, and transamination.
Suitable geological environments
A variety of geologic and environmental settings have been proposed for an origin of life. These theories are often in competition with one another as there are many differing views of prebiotic compound availability, geophysical setting, and early life characteristics. The first organism on Earth likely looked different from LUCA. Between the first appearance of life and where all modern phylogenies began branching, an unknown amount of time passed, with unknown gene transfers, extinctions, and evolutionary adaptation to various environmental niches. One major shift is believed to be from the RNA world to an RNA-DNA-protein world. Modern phylogenies provide more pertinent genetic evidence about LUCA than about its precursors.
The most popular hypotheses for settings for the origin of life are deep sea hydrothermal vents and surface bodies of water. Surface waters can be classified into hot springs, moderate temperature lakes and ponds, and cold settings.
Deep sea hydrothermal vents
Hot fluids
Early micro-fossils may have come from a hot world of gases such as methane, ammonia, carbon dioxide, and hydrogen sulfide, toxic to much current life. Analysis of the tree of life places thermophilic and hyperthermophilic bacteria and archaea closest to the root, suggesting that life may have evolved in a hot environment. The deep sea or alkaline hydrothermal vent theory posits that life began at submarine hydrothermal vents. William Martin and Michael Russell have suggested "that life evolved in structured iron monosulphide precipitates in a seepage site hydrothermal mound at a redox, pH, and temperature gradient between sulphide-rich hydrothermal fluid and iron(II)-containing waters of the Hadean ocean floor. The naturally arising, three-dimensional compartmentation observed within fossilized seepage-site metal sulphide precipitates indicates that these inorganic compartments were the precursors of cell walls and membranes found in free-living prokaryotes. The known capability of FeS and NiS to catalyze the synthesis of the acetyl-methylsulphide from carbon monoxide and methylsulphide, constituents of hydrothermal fluid, indicates that pre-biotic syntheses occurred at the inner surfaces of these metal-sulphide-walled compartments".
These form where hydrogen-rich fluids emerge from below the sea floor, as a result of serpentinization of ultra-mafic olivine with seawater and a pH interface with carbon dioxide-rich ocean water. The vents form a sustained chemical energy source derived from redox reactions, in which electron donors (molecular hydrogen) react with electron acceptors (carbon dioxide); see iron–sulfur world theory. These are exothermic reactions.
Chemiosmotic gradient
Russell demonstrated that alkaline vents created an abiogenic proton motive force, a chemiosmotic gradient ideal for abiogenesis. Their microscopic compartments, composed of iron-sulfur minerals such as mackinawite, "provide a natural means of concentrating organic molecules" and endowed these mineral cells with the catalytic properties envisaged by Günter Wächtershäuser. This movement of ions across the membrane depends on a combination of two factors:
Diffusion force caused by concentration gradient—all particles including ions tend to diffuse from higher concentration to lower.
Electrostatic force caused by electrical potential gradient—cations like protons H+ tend to diffuse down the electrical potential, anions in the opposite direction.
These two gradients taken together can be expressed as an electrochemical gradient, providing energy for abiogenic synthesis. The proton motive force can be described as the measure of the potential energy stored as a combination of proton and voltage gradients across a membrane (differences in proton concentration and electrical potential).
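In standard bioenergetics notation (nothing here is specific to the origin-of-life literature), the proton motive force combines the two contributions just described:

```latex
\Delta p \;=\; \Delta\psi \;-\; \frac{2.303\,R\,T}{F}\,\Delta \mathrm{pH}
\;\approx\; \Delta\psi \;-\; 59\ \mathrm{mV}\times\Delta \mathrm{pH}
\quad \left(\text{at } 25\,^{\circ}\mathrm{C}\right)
```

Here Δψ is the electrical potential difference across the membrane, ΔpH the pH difference, R the gas constant, T the absolute temperature, and F the Faraday constant; in an alkaline-vent scenario the pH contrast between vent fluid and seawater supplies the ΔpH term.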
The surfaces of mineral particles inside deep-ocean hydrothermal vents have catalytic properties similar to those of enzymes and can create simple organic molecules, such as methanol (CH3OH) and formic, acetic, and pyruvic acids out of the dissolved CO2 in the water, if driven by an applied voltage or by reaction with H2 or H2S.
The research reported by Martin in 2016 supports the thesis that life arose at hydrothermal vents, that spontaneous chemistry in the Earth's crust, driven by rock–water interactions at thermodynamic disequilibrium, underpinned life's origin, and that the founding lineages of the archaea and bacteria were H2-dependent autotrophs that used CO2 as their terminal acceptor in energy metabolism. Martin suggests, based upon this evidence, that the LUCA "may have depended heavily on the geothermal energy of the vent to survive". Pores at deep sea hydrothermal vents are suggested to have been occupied by membrane-bound compartments which promoted biochemical reactions. Metabolic intermediates of the Krebs cycle, gluconeogenesis, amino acid biosynthetic pathways, glycolysis, and the pentose phosphate pathway, as well as sugars like ribose and lipid precursors, can form non-enzymatically under conditions relevant to deep-sea alkaline hydrothermal vents.
If the deep marine hydrothermal setting was the site for the origin of life, then abiogenesis could have happened as early as 4.0-4.2 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from impacts and the then high levels of ultraviolet radiation from the sun. The available energy in hydrothermal vents is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live. Arguments against a hydrothermal origin of life state that hyperthermophily was a result of convergent evolution in bacteria and archaea, and that a mesophilic environment would have been more likely. This hypothesis, suggested in 1999 by Galtier, was proposed one year before the discovery of the Lost City Hydrothermal Field, where white-smoker hydrothermal vents average ~45-90 °C. Moderate temperatures and alkaline seawater at Lost City are now the favoured hydrothermal vent setting in contrast to acidic, high temperature (~350 °C) black-smokers.
Arguments against a vent setting
Production of prebiotic organic compounds at hydrothermal vents is estimated to be 1×10^8 kg per year. While a large amount of key prebiotic compounds, such as methane, are found at vents, they are in far lower concentrations than estimates of a Miller-Urey Experiment environment. In the case of methane, the production rate at vents is around 2-4 orders of magnitude lower than predicted amounts in a Miller-Urey Experiment surface atmosphere.
Other arguments against an oceanic vent setting for the origin of life include the inability to concentrate prebiotic materials due to strong dilution from seawater. This open system cycles compounds through the minerals that make up vents, leaving them little residence time in which to accumulate. All modern cells rely on phosphates and potassium for nucleotide backbone and protein formation respectively, making it likely that the first life forms also shared these functions. These elements were not available in high quantities in the Archaean oceans, as both primarily come from the weathering of continental rocks on land, far from vent settings. Submarine hydrothermal vents are not conducive to the condensation reactions needed for polymerisation to form macromolecules.
An older argument was that key polymers were encapsulated in vesicles after condensation, which supposedly would not happen in saltwater because of the high concentrations of ions. However, while it is true that salinity inhibits vesicle formation from low-diversity mixtures of fatty acids, vesicle formation from a broader, more realistic mix of fatty-acid and 1-alkanol species is more resilient.
Surface bodies of water
Surface bodies of water provide environments able to dry out and be rewetted. Continued wet-dry cycles allow the concentration of prebiotic compounds and condensation reactions to polymerise macromolecules. Moreover, lakes and ponds on land allow for detrital input from the weathering of continental rocks which contain apatite, the most common source of phosphates needed for nucleotide backbones. The amount of exposed continental crust in the Hadean is unknown, but models of early ocean depths and rates of ocean island and continental crust growth make it plausible that there was exposed land. Another line of evidence for a surface start to life is the requirement for UV for organism function. UV is necessary for the formation of the U+C nucleotide base pair by partial hydrolysis and nucleobase loss. Simultaneously, UV can be harmful and sterilising to life, especially for simple early lifeforms with little ability to repair radiation damage. Radiation levels from a young Sun were likely greater, and, with no ozone layer, harmful shortwave UV rays would reach the surface of Earth. For life to begin, a shielded environment with influx from UV-exposed sources is necessary, so that early life could both benefit from and be protected from UV. Shielding under ice, liquid water, mineral surfaces (e.g. clay) or regolith is possible in a range of surface water settings. While deep sea vents may have input from raining down of surface exposed materials, the likelihood of concentration is lessened by the ocean's open system.
Hot springs
Most deep-branching lineages in modern phylogenies are thermophilic or hyperthermophilic, making it possible that the last universal common ancestor (LUCA) and preceding lifeforms were similarly thermophilic. Hot springs are formed from the heating of groundwater by geothermal activity. This intersection allows for influxes of material from deep penetrating waters and from surface runoff that transports eroded continental sediments. Interconnected groundwater systems create a mechanism for the distribution of life over a wider area.
Mulkidjanian and co-authors argue that marine environments did not provide the ionic balance and composition universally found in cells, or the ions required by essential proteins and ribozymes, especially with respect to the high K+/Na+ ratio and the Mn2+, Zn2+ and phosphate concentrations. They argue that the only environments on Earth that mimic the needed conditions are hot springs similar to those at Kamchatka. Under an anoxic atmosphere, mineral deposits in these environments would have a suitable pH (while current pools under an oxygenated atmosphere would not), would contain precipitates of photocatalytic sulfide minerals that absorb harmful ultraviolet radiation, and would undergo wet-dry cycles that concentrate substrate solutions to levels amenable to the spontaneous formation of biopolymers, created both by chemical reactions in the hydrothermal environment and by exposure to UV light during transport from vents to adjacent pools. The hypothesized pre-biotic environments are similar to hydrothermal vents, with additional components that help explain peculiarities of the LUCA.
A phylogenomic and geochemical analysis of proteins plausibly traced to the LUCA shows that the ionic composition of its intracellular fluid is identical to that of hot springs. The LUCA likely was dependent upon synthesized organic matter for its growth. Experiments show that RNA-like polymers can be synthesized in wet-dry cycling and UV light exposure. These polymers were encapsulated in vesicles after condensation. Potential sources of organics at hot springs might have been transport by interplanetary dust particles, extraterrestrial projectiles, or atmospheric or geochemical synthesis. Hot springs could have been abundant in volcanic landmasses during the Hadean.
Temperate surface bodies of water
The hypothesis of a mesophilic start in surface bodies of water has evolved from Darwin's concept of a 'warm little pond' and the Oparin-Haldane hypothesis. Freshwater bodies under temperate climates can accumulate prebiotic materials while providing suitable environmental conditions conducive to simple life forms. The climate during the Archaean is still a highly debated topic, as there is uncertainty about what continents, oceans, and the atmosphere looked like then. Atmospheric reconstructions of the Archaean from geochemical proxies and models state that sufficient greenhouse gases were present to maintain surface temperatures between 0 and 40 °C. Under this assumption, there is a greater abundance of moderate temperature niches in which life could begin.
Strong lines of evidence for mesophily from biomolecular studies include Galtier's G+C nucleotide thermometer. G and C are more abundant in thermophiles due to the added stability of the extra hydrogen bond in a G–C pair that is not present between A and T nucleotides. rRNA sequencing on a diverse range of modern lifeforms shows that LUCA's reconstructed G+C content was likely representative of moderate temperatures.
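The quantity behind this "nucleotide thermometer" is simply the fraction of G and C bases in a sequence. A minimal sketch is given below; real analyses such as Galtier's work on rRNA involve far more careful modelling of stem regions and phylogeny, so this is only meant to make the measured quantity concrete.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA or RNA sequence."""
    seq = seq.upper()
    return sum(base in "GC" for base in seq) / len(seq) if seq else 0.0

# Hypothetical rRNA-like fragment, for illustration only
print(round(gc_content("GCCGUAAGGCGAAUCCGGU"), 2))   # 0.63
```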
Although many deep-branching modern lineages are thermophilic or hyperthermophilic, it is possible that their widespread diversity today is a product of convergent evolution and horizontal gene transfer rather than an inherited trait from LUCA. The reverse gyrase topoisomerase is found exclusively in thermophiles and hyperthermophiles, as it introduces positive supercoils that stabilize DNA at high temperatures. The reverse gyrase enzyme requires ATP to function, and both are complex biomolecules. If an origin of life is hypothesised to involve a simple organism that had not yet evolved a membrane, let alone ATP, this would make the existence of reverse gyrase improbable. Moreover, phylogenetic studies show that reverse gyrase had an archaeal origin, and that it was transferred to bacteria by horizontal gene transfer. This implies that reverse gyrase was not present in the LUCA.
Icy surface bodies of water
Cold-start origin of life theories stem from the idea that there may have been regions on the early Earth cold enough for large ice cover to be found. Stellar evolution models predict that the Sun's luminosity was ~25% weaker than it is today. Feulner states that although this significant decrease in solar energy would have formed an icy planet, there is strong evidence that liquid water was present, possibly sustained by a greenhouse effect. This would create an early Earth with both liquid oceans and icy poles.
Meltwater from ice sheets and glaciers creates freshwater pools, another niche capable of experiencing wet-dry cycles. While pools on the surface would be exposed to intense UV radiation, bodies of water within and under ice are sufficiently shielded while remaining connected to UV-exposed areas through ice cracks. Impact melting of ice could also pair freshwater with meteoritic input, a popular vehicle for prebiotic components. Near-seawater levels of sodium chloride are found to destabilize fatty acid membrane self-assembly, making freshwater settings appealing for early membranous life.
Icy environments would trade the faster reaction rates that occur in warm environments for increased stability and accumulation of larger polymers. Experiments simulating Europa-like conditions of ~−20 °C have synthesised amino acids and adenine, showing that Miller-Urey type syntheses can still occur at cold temperatures. In an RNA world, the ribozyme would have had even more functions than in a later DNA-RNA-protein world. For RNA to function, it must be able to fold, a process that is hindered by temperatures above 30 °C. While RNA folding in psychrophilic organisms is slower, the process is more successful as hydrolysis is also slower. Shorter nucleotides would not suffer from higher temperatures.
Inside the continental crust
An alternative geological environment has been proposed by the geologist Ulrich Schreiber and the physical chemist Christian Mayer: the continental crust. Tectonic fault zones could present a stable and well-protected environment for long-term prebiotic evolution. Inside these systems of cracks and cavities, water and carbon dioxide constitute the bulk solvents. Their phase state would depend on the local temperature and pressure conditions and could vary between liquid, gaseous and supercritical. When forming two separate phases (e.g., liquid water and supercritical carbon dioxide at depths of little more than 1 km), the system provides optimal conditions for phase transfer reactions. Concurrently, the contents of the tectonic fault zones are supplied with a multitude of inorganic reactants (e.g., carbon monoxide, hydrogen, ammonia, hydrogen cyanide, nitrogen, and even phosphate from dissolved apatite) and simple organic molecules formed by hydrothermal chemistry (e.g. amino acids, long-chain amines, fatty acids, long-chain aldehydes). Finally, the abundant mineral surfaces provide a rich range of catalytic activity.
An especially interesting section of the tectonic fault zones is located at a depth of approximately 1000 m. For the carbon dioxide part of the bulk solvent, it provides temperature and pressure conditions near the phase transition point between the supercritical and the gaseous state. This leads to a natural accumulation zone for lipophilic organic molecules that dissolve well in supercritical CO2, but not in its gaseous state, leading to their local precipitation. Periodic pressure variations, such as those caused by geyser activity or tidal influences, result in periodic phase transitions, keeping the local reaction environment in a constant non-equilibrium state. In the presence of amphiphilic compounds (such as the long-chain amines and fatty acids mentioned above), successive generations of vesicles form and are continuously and efficiently selected for their stability. The resulting structures could provide hydrothermal vents as well as hot springs with raw material for further development.
Homochirality
Homochirality is the geometric uniformity of materials composed of chiral (non-mirror-symmetric) units. Living organisms use molecules that have the same chirality (handedness): with almost no exceptions, amino acids are left-handed while nucleotides and sugars are right-handed. Chiral molecules can be synthesized, but in the absence of a chiral source or a chiral catalyst, they are formed in a 50/50 (racemic) mixture of both forms. Known mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction; asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; statistical fluctuations during racemic synthesis; and spontaneous symmetry breaking.
Once established, chirality would be selected for. A small bias (enantiomeric excess) in the population can be amplified into a large one by asymmetric autocatalysis, such as in the Soai reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalyzing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.
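A toy numerical sketch of that amplification idea is shown below. It is not a model of the actual Soai chemistry; the "fitness" rule (each enantiomer is amplified in proportion to its own current abundance) and the gain parameter are invented purely to show how a tiny initial excess can grow toward homochirality.

```python
def amplify(ee0: float, gain: float = 2.0, rounds: int = 40) -> list[float]:
    """Track the enantiomeric excess under a toy autocatalytic replicator rule:
    each enantiomer's per-round amplification grows with its own abundance."""
    p = 0.5 * (1 + ee0)                      # fraction of the L form
    trace = [2 * p - 1]
    for _ in range(rounds):
        w_l = 1 + gain * p                   # amplification factor for L
        w_d = 1 + gain * (1 - p)             # amplification factor for D
        p = p * w_l / (p * w_l + (1 - p) * w_d)
        trace.append(2 * p - 1)
    return trace

trace = amplify(ee0=0.001)                   # start from a 0.1% excess
print(f"{trace[0]:.3%} -> {trace[15]:.2%} -> {trace[-1]:.2%}")
```

With these invented parameters the excess grows from 0.1% to essentially 100% within a few dozen rounds; the point is only that autocatalysis can turn a negligible initial bias into complete dominance of one hand.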
Homochirality may have started in outer space, as on the Murchison meteorite the amino acid L-alanine (left-handed) is more than twice as frequent as its D (right-handed) form, and L-glutamic acid is more than three times as abundant as its D counterpart. Amino acids from meteorites show a left-handed bias, whereas sugars show a predominantly right-handed bias: this is the same preference found in living organisms, suggesting an abiogenic origin of these compounds.
In a 2010 experiment by Robert Root-Bernstein, "two D-RNA-oligonucleotides having inverse base sequences (D-CGUA and D-AUGC) and their corresponding L-RNA-oligonucleotides (L-CGUA and L-AUGC) were synthesized and their affinity determined for Gly and eleven pairs of L- and D-amino acids". The results suggest that homochirality, including codon directionality, might have "emerged as a function of the origin of the genetic code".
See also
Autopoiesis
Manganese metallic nodules
Notes
References
Sources
International Symposium on the Origin of Life on the Earth (held at Moscow, 19–24 August 1957)
Proceedings of the SPIE held at San Jose, California, 22–24 January 2001
Proceedings of the SPIE held at San Diego, California, 31 July–2 August 2005
External links
Making headway with the mysteries of life's origins – Adam Mann (PNAS; 14 April 2021)
Exploring Life's Origins a virtual exhibit at the Museum of Science (Boston)
How life began on Earth – Marcia Malory (Earth Facts; 2015)
The Origins of Life – Richard Dawkins et al. (BBC Radio; 2004)
Life in the Universe – Essay by Stephen Hawking (1996)
Astrobiology
Evolutionarily significant biological phenomena
Evolutionary biology
Global events
Natural events
Prebiotic chemistry
Vasudhaiva Kutumbakam
Vasudhaiva Kutumbakam is a Sanskrit phrase found in Hindu texts such as the Maha Upanishad, which means "The World Is One Family". The idea of the phrase remains relevant today as it emphasizes a global perspective, prioritizing collective well-being over individual or family interests. It encourages thinking about the welfare of others, fostering global solidarity and responsibility, especially in addressing crucial issues like climate change, sustainable development, peace, and tolerance of differences.
Translation
The phrase consists of three words: vasudhā ("the earth"); eva ("is thus"); and kutumbakam ("family").
History
अयं निजः परो वेति गणना लघुचेतसाम्। (Ayam Nijah Paro Veti Ganana Laghucetasam)
उदारचरितानां तु वसुधैव कुटुम्बकम्॥ (Udaracaritanam Tu Vasudhaiva Kutumbakam)
The original verse appears in Chapter 6 of the Maha Upanishad (VI.71-73). It is considered among the most important moral values in Indian society, and it is engraved in the entrance hall of the Parliament of India.
Subsequent shlokas say that those who have no attachments go on to find the Brahman (the one supreme, universal spirit that is the origin and support of the phenomenal universe). The context of this verse is to describe this outlook as one of the attributes of an individual who has attained the highest level of spiritual progress, one who is capable of performing his worldly duties without attachment to material possessions.
Influences
The text has been influential in the major Hindu literature that followed it. For example, the popular Bhagavad Gita, the most translated of the Itihasa genre of literature in Hinduism, calls the Vasudhaiva Kutumbakam adage of the Maha Upanishad the "loftiest Vedantic thought".
Dr N. Radhakrishnan, former director of the Gandhi Smriti and Darshan Samiti, believes that the Gandhian vision of holistic development and respect for all forms of life, and of nonviolent conflict resolution embedded in the acceptance of nonviolence both as a creed and as a strategy, was an extension of the ancient Indian concept of Vasudhaiva Kutumbakam.
References in the modern world
India's Prime Minister Narendra Modi used this phrase in a speech at the World Culture Festival, organized by the Art of Living, adding that "Indian culture is very rich and has inculcated in each one of us with great values, we are the people who have come from Aham Brahmasmi to Vasudhaiva Kutumbakam, we are the people who have come from Upanishads to Upgraha (satellite)".
It was used in the logo of the 7th International Earth Science Olympiad, held in Mysore, India, in 2013. The logo, designed by R. Shankar and Shwetha B. Shetty of Mangalore University, was intended to emphasize the integration of the Earth's subsystems in the school curriculum.
The theme and logo for India's G20 presidency, from December 1, 2022 to November 30, 2023, featured "Vasudhaiva Kutumbakam", rendered as "One Earth, One Family, One Future". The logo was selected after scrutiny of 2,400 pan-India submissions invited through a logo design contest. However, due to opposition from China, which claimed that Sanskrit is not one of the six official languages of the United Nations, the phrase failed to appear in most official G20 documents.
See also
Unity in diversity
Religious syncretism
Hinduism
We Are the World
Yaadhum Oore Yaavarum Kelir
References
Bibliography
Further reading
Sanskrit words and phrases
Hindu philosophical concepts
Anna Karenina principle
The Anna Karenina principle states that a deficiency in any one of a number of factors dooms an endeavor to failure. Consequently, a successful endeavor (subject to this principle) is one for which every possible deficiency has been avoided.
The name of the principle derives from Leo Tolstoy's 1877 novel Anna Karenina, which begins:
"Happy families are all alike; every unhappy family is unhappy in its own way."
In other words: happy families share a common set of attributes which lead to happiness, while any of a variety of attributes can cause an unhappy family. This concept has been generalized to apply to several fields of study.
In statistics, the term Anna Karenina principle is used to describe significance tests: there are any number of ways in which a dataset may violate the null hypothesis and only one in which all the assumptions are satisfied.
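A small sketch of that asymmetry in code (the particular checks and the 0.05 threshold are choices made here for illustration, not a standard protocol): a classical two-sample t-test is "happy" only when every assumption check passes, while any single failure is enough to call it into question.

```python
import numpy as np
from scipy import stats

def t_test_assumptions_ok(a, b, alpha=0.05):
    """Run simple checks on the assumptions of a classical two-sample t-test.
    Returns (all_passed, per-check results): one failed check is enough to
    make the overall verdict False, in Anna Karenina fashion."""
    checks = {
        "sample_a_normal": stats.shapiro(a).pvalue > alpha,
        "sample_b_normal": stats.shapiro(b).pvalue > alpha,
        "equal_variances": stats.levene(a, b).pvalue > alpha,
    }
    return all(checks.values()), checks

rng = np.random.default_rng(0)
ok, detail = t_test_assumptions_ok(rng.normal(0, 1, 50), rng.exponential(1, 50))
print(ok, detail)   # almost certainly False: the exponential sample fails normality
```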
Examples
Failed domestication
The Anna Karenina principle was popularized by Jared Diamond in his 1997 book Guns, Germs and Steel. Diamond uses this principle to illustrate why so few wild animals have been successfully domesticated throughout history, as a deficiency in any one of a great number of factors can render a species undomesticable. Therefore, all successfully domesticated species are not so because of a particular positive trait, but because of a lack of any number of possible negative traits. In chapter 9, six groups of reasons for failed domestication of animals are defined:
Diet – To be a candidate for domestication, a species must be easy to feed. Finicky eaters make poor candidates. Non-finicky omnivores make the best candidates.
Growth rate – The animal must grow fast enough to be economically feasible. Elephant farmers, for example, would have to wait perhaps twelve years for their herd to reach adult size.
Captive breeding – The species must breed well in captivity. Species having mating rituals prohibiting breeding in a farm-like environment make poor candidates for domestication. These rituals could include the need for privacy or long, protracted mating chases.
Disposition – Some species are too ill-tempered to be good candidates for domestication. Farmers must not be at risk of life or injury every time they enter the animal pen. The zebra is of special note in the book, as it was recognized by local cultures and Europeans alike as extremely valuable and useful to domesticate, but it proved impossible to tame. Horses in Africa proved to be susceptible to disease and attack by a wide variety of animals, while the very characteristics that made the zebra hardy and survivable in the harsh environment of Africa also made it fiercely independent.
Tendency to panic – Species are genetically predisposed to react to danger in different ways. A species that immediately takes flight is a poor candidate for domestication. A species that freezes, or mingles with the herd for cover in the face of danger, is a good candidate. Deer in North America have proven almost impossible to domesticate and have difficulty breeding in captivity. In contrast, horses have thrived since they were re-introduced to North America in the sixteenth century.
Social structure – Species of lone, independent animals make poor candidates. A species that has a strong, well-defined social hierarchy is more likely to be domesticated. A species that can imprint on a human as the head of the hierarchy is best. Different social groups must also be tolerant of one another.
Ecological risk assessment
Ecologist Dwayne Moore describes applications of the Anna Karenina principle in ecology:
Aristotle's version
Much earlier, Aristotle states the same principle in the Nicomachean Ethics (Book 2), observing that there are many ways to fail but only one way to succeed, which is why missing the mark is easy and hitting it difficult.
Order in chaos of maladaptation
Many experiments and observations of groups of humans, animals, trees, grassy plants, stock market prices, and changes in the banking sector support the modified Anna Karenina principle.
This effect has been demonstrated for many systems: from the adaptation of healthy people to a change in climate conditions to the analysis of fatal outcomes in oncological and cardiological clinics. The same effect is found in the stock market. The applicability of these two statistical indicators of stress, a simultaneous increase of variance and correlations, for the diagnosis of social stress in large groups was examined in the prolonged stress period preceding the 2014 Ukrainian economic and political crisis. There was a simultaneous increase in the total correlation between the 19 major public fears in Ukrainian society (by about 64%) and also in their statistical dispersion (by 29%) during the pre-crisis years.
General mathematical backgrounds
Vladimir Arnold, in his book Catastrophe Theory, describes "The Principle of Fragility of Good Things", which in a sense supplements the Anna Karenina principle: good systems must simultaneously meet a number of requirements; therefore, they are more fragile.
See also
O-ring theory of economic development
References
Principles
Statistical hypothesis testing
Leo Tolstoy
Survey methodology
Survey methodology is "the study of survey methods".
As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.
Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology.
Overview
A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked their answers may represent themselves as individuals, their households, employers, or other organization they represent.
Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost. Cost constraints are sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on survey errors empirically and others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.
The most important methodological challenges of a survey methodologist include making decisions on how to:
Identify and select potential sample members.
Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond)
Evaluate and test questions.
Select the mode for posing questions and collecting responses.
Train and supervise interviewers (if they are involved).
Check data files for accuracy and internal consistency.
Adjust survey estimates to correct for identified errors.
Complement survey data with new data sources (if appropriate)
Selecting samples
The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest. The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties one encounters while choosing a representative sample. One common error that results is selection bias. Selection bias results when the procedures used to select a sample result in overrepresentation or underrepresentation of some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, and the sample consists of 40% females and 60% males, females are underrepresented while males are overrepresented. In order to minimize selection biases, stratified random sampling is often used. This is when the population is divided into sub-populations called strata, and random samples are drawn from each of the strata, or elements are drawn for the sample on a proportional basis.
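A minimal sketch of proportional stratified sampling, using the 75%/25% example above (the frame and field names are invented for illustration):

```python
import random

def stratified_sample(frame, key, n, seed=42):
    """Draw roughly n units whose strata proportions mirror the frame's."""
    random.seed(seed)
    strata = {}
    for unit in frame:                                # group the frame into strata
        strata.setdefault(key(unit), []).append(unit)
    sample = []
    for members in strata.values():
        take = round(n * len(members) / len(frame))   # proportional allocation
        sample.extend(random.sample(members, take))
    return sample

# Hypothetical frame: 750 females and 250 males
frame = [{"id": i, "sex": "F" if i % 4 else "M"} for i in range(1000)]
sample = stratified_sample(frame, key=lambda u: u["sex"], n=100)
print(sum(u["sex"] == "F" for u in sample), "females out of", len(sample))  # 75 of 100
```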
Modes of data collection
There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including
costs,
coverage of the target population,
flexibility of asking questions,
respondents' willingness to participate and
response accuracy.
Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as:
Telephone
Mail (post)
Online surveys
Mobile surveys
Personal in-home surveys
Personal mall or street intercept survey
Mixed modes
Research designs
There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.
Cross-sectional studies
In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design.
Successive independent samples studies
A successive independent samples design draws multiple random samples from a population at one or more times. This design can study changes within a population, but not changes within individuals because the same individuals are not surveyed more than once. Such studies therefore cannot necessarily identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population, and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.
Longitudinal studies
Longitudinal studies take measure of the same random sample at multiple time points. Unlike with a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce that cannot be tested experimentally.
However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than to a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC). These codes usually are created from elements like 'month of birth' and 'first letter of the mother's middle name.' Some recent anonymous SGIC approaches have also attempted to minimize the use of personalized data even further, instead using questions like 'name of your first pet.' Depending on the approach used, the ability to match some portion of the sample can be lost.
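As a rough illustration, a self-generated identification code can be assembled by concatenating a few answers the respondent can reproduce at every wave. The sketch below is hypothetical; the particular questions and formatting are not a standard SGIC scheme.

    def self_generated_code(birth_month, mother_middle_initial, first_pet):
        """Build a self-generated identification code (SGIC) from answers a
        respondent can repeat at each wave, without storing identifying data."""
        return "-".join([
            "%02d" % int(birth_month),                  # e.g. '03' for March
            mother_middle_initial.strip().upper()[:1],  # first letter of mother's middle name
            first_pet.strip().upper()[:3],              # start of the first pet's name
        ])

    # The same answers given at wave 1 and wave 2 yield the same code, so
    # responses can be linked over time without names or contact details.
    print(self_generated_code(3, "a", "Rex"))  # '03-A-REX'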
In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those that did not, to see if they are statistically different populations. Respondents may also try to be self-consistent in spite of changes to survey answers.
Questionnaires
Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately. Questionnaires should produce valid and reliable measures of demographic variables and should yield valid and reliable measures of the individual differences that self-report scales are designed to capture.
Questionnaires as tools
One category of variables often measured in survey research is demographic variables, which are used to describe the characteristics of the people surveyed in the sample. Demographic variables include such measures as ethnicity, socioeconomic status, race, and age. Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale. Self-report scales are also used to examine the disparities among people on scale items. These self-report scales, which are usually presented in questionnaire form, are among the most used instruments in psychology, and thus it is important that the measures be constructed carefully, while also being reliable and valid.
Reliability and validity of self-report measures
Reliable measures of self-report are defined by their consistency. Thus, a reliable self-report measure produces consistent results every time it is administered. A test's reliability can be measured in a few ways. First, one can calculate test-retest reliability, which entails administering the same questionnaire to a large sample at two different times. For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest. Self-report measures will generally be more reliable when they have many items measuring a construct. Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample being tested. Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment. By contrast, a questionnaire is valid if what it measures is what it originally set out to measure. Construct validity of a measure is the degree to which it measures the theoretical construct it was originally supposed to measure.
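One common way to quantify this consistency is to correlate scores from the two administrations: a high correlation means respondents keep roughly the same position in the score distribution between test and retest. The sketch below computes a Pearson correlation; the scores are made up for illustration.

    from math import sqrt

    def pearson(x, y):
        """Pearson correlation between test and retest scores."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical scores for five respondents at time 1 and time 2.
    test = [10, 14, 18, 22, 30]
    retest = [11, 13, 19, 24, 28]
    print(round(pearson(test, retest), 2))  # about 0.98: high test-retest reliability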
Composing a questionnaire
Six steps can be employed to construct a questionnaire that will produce reliable and valid results. First, one must decide what kind of information should be collected. Second, one must decide how to conduct the questionnaire. Thirdly, one must construct a first draft of the questionnaire. Fourth, the questionnaire should be revised. Next, the questionnaire should be pretested. Finally, the questionnaire should be edited and the procedures for its use should be specified.
Guidelines for the effective wording of questions
The way that a question is phrased can have a large impact on how a research participant will answer the question. Thus, survey researchers must be conscious of their wording when writing survey questions. It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions. Free-response questions are open-ended, whereas closed questions are usually multiple choice. Free-response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding. By contrast, closed questions can be scored and coded more easily, but they diminish the expressivity and spontaneity of the responder. In general, the vocabulary of the questions should be very simple and direct, and most questions should be less than twenty words. Each question should be edited for "readability" and should avoid leading or loaded questions. Finally, if multiple items are being used to measure one construct, some of the items should be worded in the opposite direction to evade response bias.
A respondent's answer to an open-ended question can be coded into a response scale afterwards, or analysed using more qualitative methods.
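Closed items that were worded in the opposite direction must likewise be reverse-coded before being combined into a scale score. A minimal sketch, assuming a hypothetical 5-point Likert scale and made-up responses:

    def reverse_code(score, scale_min=1, scale_max=5):
        """Reverse-code a Likert response so all items point the same way
        (on a 1-5 scale, 5 becomes 1, 4 becomes 2, and so on)."""
        return scale_max + scale_min - score

    # Hypothetical responses to five items; items 2 and 4 were negatively worded.
    responses = [4, 2, 5, 1, 4]
    negatively_worded = {1, 3}  # zero-based positions of the reverse-scored items
    scored = [reverse_code(r) if i in negatively_worded else r
              for i, r in enumerate(responses)]
    print(sum(scored) / len(scored))  # scale score after reverse-coding: 4.4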
Order of questions
Survey researchers should carefully construct the order of questions in a questionnaire. For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end. By contrast, if a survey is being administered over the telephone or in person, demographic questions should be asked at the beginning of the interview to boost the respondent's confidence. Another reason to be mindful of question order is that one question may affect how people respond to subsequent questions as a result of priming, a form of survey response effect.
Translating a questionnaire
Translation is crucial to collecting comparable survey data. Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. A team approach is recommended in the translation process to include translators, subject-matter experts and persons helpful to the process.
Survey translation best practice includes parallel translation, team discussions, and pretesting with real-life people; it is not a mechanical word-replacement process. The model TRAPD - Translation, Review, Adjudication, Pretest, and Documentation - originally developed for the European Social Surveys, is now "widely used in the global survey research community, although not always labeled as such or implemented in its complete form". For example, sociolinguistics provides a theoretical framework for questionnaire translation and complements TRAPD. This approach holds that for the translated questionnaire to achieve a communicative effect equivalent to that of the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language.
Nonresponse reduction
The following ways have been recommended for reducing nonresponse in telephone and face-to-face surveys:
Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic will be described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions on the survey.
Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and making schedules for callbacks to respondents who were not reached.
Short introduction. The interviewer should always start with a short introduction about themselves: their name, the institute they work for, the length of the interview, and the goal of the interview. It can also be useful to make clear that the call is not a sales call, as this has been shown to lead to a slightly higher response rate.
Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.
Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.
A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions).
Other studies showed that quality of response degraded toward the end of long surveys.
Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail.
Interviewer effects
Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race, gender, and relative body weight (BMI). These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes, the interviewer's sex to affect responses to questions involving gender issues, and the interviewer's BMI to affect answers to eating- and dieting-related questions.
While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions. Interviewer effects are one example of survey response effects.
The role of big data
Since 2018, survey methodologists have started to examine how big data can complement survey methodology to allow researchers and practitioners to improve the production of survey statistics and their quality. Big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. There have been three Big Data Meets Survey Science (BigSurv) conferences, in 2018, 2020 and 2023, with another forthcoming in 2025, as well as special issues of the Social Science Computer Review, the Journal of the Royal Statistical Society and EPJ Data Science, and a book called Big Data Meets Social Sciences edited by Craig A. Hill and five other Fellows of the American Statistical Association.
See also
Survey data collection
Data Documentation Initiative
Enterprise feedback management (EFM)
Likert scale
Official statistics
Paid survey
Quantitative marketing research
Questionnaire construction
Ratio estimator
Social research
Total survey error
References
Further reading
Abramson, J. J. and Abramson, Z. H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences
Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley.
Engel. U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge.
Groves, R.M. (1989). Survey Errors and Survey Costs Wiley.
Griffith, James. (2014) "Survey Research in Military Settings." in Routledge Handbook of Research Methods in Military Studies, edited by Joseph Soeters, Patricia Shields and Sebastiaan Rietjens, pp. 179–193. New York: Routledge.
Leung, Wai-Ching (2001) "Conducting a Survey", in Student BMJ, (British Medical Journal, Student Edition), May 2001
Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
Prince, S. A., Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review. International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (7th ed.). McGraw–Hill Higher Education. (pp. 143–192)
Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael Selected Amy. Kluwer Academic Publishers, The Netherlands.
Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan.(2014). Routledge Handbook of Research Methods in Military Studies New York: Routledge.
Shackman, G. What is Program Evaluation? A Beginners Guide 2018
Psychometrics
Quantitative research
Product testing
Human biology
Human biology is an interdisciplinary area of academic study that examines humans through the influences and interplay of many diverse fields such as genetics, evolution, physiology, anatomy, epidemiology, anthropology, ecology, nutrition, population genetics, and sociocultural influences. It is closely related to the biomedical sciences, biological anthropology and other biological fields tying in various aspects of human functionality. It was not until the 20th century that the biogerontologist Raymond Pearl, founder of the journal Human Biology, used the term "human biology" to describe a distinct subfield within biology.
It is also a portmanteau term that describes all biological aspects of the human body, typically using the human body as a type organism for Mammalia, and in that context it is the basis for many undergraduate University degrees and modules.
Most aspects of human biology are identical or very similar to general mammalian biology. In particular, and as examples, humans:
maintain their body temperature
have an internal skeleton
have a circulatory system
have a nervous system to provide sensory information and operate and coordinate muscular activity.
have a reproductive system in which they bear live young and produce milk.
have an endocrine system and produce and eliminate hormones and other bio-chemical signalling agents
have a respiratory system where air is inhaled into lungs and oxygen is used to produce energy.
have an immune system to protect against disease
excrete waste as urine and feces.
History
The study of integrated human biology began in the 1920s, sparked by Charles Darwin's theories as re-conceptualized by many scientists. Human attributes, such as child growth and genetics, came under renewed scrutiny, and the field of human biology emerged.
Typical human attributes
The key aspects of human biology are those ways in which humans are substantially different from other mammals.
Humans have a very large brain in a head that is very large for the size of the animal. This large brain has enabled a range of unique attributes including the development of complex languages and the ability to make and use a complex range of tools.
The upright stance and bipedal locomotion is not unique to humans but humans are the only species to rely almost exclusively on this mode of locomotion. This has resulted in significant changes in the structure of the skeleton including the articulation of the pelvis and the femur and in the articulation of the head.
In comparison with most other mammals, humans are very long lived, with an average age at death in the developed world of nearly 80 years. Humans also have the longest childhood of any mammal, with sexual maturity taking 12 to 16 years on average to be reached.
Humans lack fur. Although there is a residual covering of fine hair, which may be more developed in some people, and localised hair covering on the head, axillary and pubic regions, in terms of protection from cold, humans are almost naked. The reason for this development is still much debated.
The human eye can see objects in colour but is not well adapted to low light conditions. The sense of smell and of taste are present but are relatively inferior to a wide range of other mammals. Human hearing is efficient but lacks the acuity of some other mammals. Similarly human sense of touch is well developed especially in the hands where dextrous tasks are performed but the sensitivity is still significantly less than in other animals, particularly those equipped with sensory bristles such as cats.
Scientific investigation
As a scientific discipline, human biology seeks to understand and to promote research on humans as living beings. It makes use of various scientific methods, such as experiments and observations, to detail the biochemical and biophysical foundations of human life, and to describe and formulate the underlying processes using models. As a basic science, it provides the knowledge base for medicine. Its sub-disciplines include anatomy, cytology, histology and morphology.
Medicine
The capabilities of the human brain and human dexterity in making and using tools have enabled humans to understand their own biology through scientific experiment, including dissection, autopsy and prophylactic medicine, which has in turn enabled humans to extend their life-span by understanding and mitigating the effects of diseases.
Understanding human biology has enabled and fostered a wider understanding of mammalian biology and by extension, the biology of all living organisms.
Nutrition
Human nutrition is typical of mammalian omnivorous nutrition, requiring a balanced input of carbohydrates, fats, proteins, vitamins, and minerals. However, the human diet has a few very specific requirements. These include two specific essential fatty acids, alpha-linolenic acid and linoleic acid, without which life is not sustainable in the medium to long term. All other fatty acids can be synthesized from dietary fats. Similarly, human life requires a range of vitamins to be present in food, and if these are missing or are supplied at unacceptably low levels, metabolic disorders result which can end in death. Human metabolism is similar to that of most other mammals, except for the need for an intake of vitamin C to prevent scurvy and other deficiency diseases. Unusually amongst mammals, a human can synthesize vitamin D3 using natural UV light from the sun on the skin. This capability may be widespread in the mammalian world, but few other mammals share the almost naked skin of humans. The darker a human's skin, the less vitamin D3 it can manufacture.
Other organisms
Human biology also encompasses all those organisms that live on or in the human body. Such organisms range from parasitic insects such as fleas and ticks, and parasitic helminths such as liver flukes, through to bacterial and viral pathogens. Many of the organisms associated with human biology are the specialised microbiome of the large intestine and the biotic flora of the skin and of the pharyngeal and nasal regions. Many of these biotic assemblages help protect humans from harm and assist in digestion, and are now known to have complex effects on mood and well-being.
Social behaviour
Humans in all civilizations are social animals and use their language skills and tool making skills to communicate.
These communication skills enable civilizations to grow and allow for the production of art, literature and music, and for the development of technology. All of these are wholly dependent on the human biological specialisms.
The deployment of these skills has allowed the human race to dominate the terrestrial biome to the detriment of most of the other species.
References
External links
Human Biology Association
Biology Dictionary
Humans
Developmental systems theory
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters or that adaptation consists of evolution ‘shaping’ the more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches.
Developmental systems theory: Topics
Six Themes of DST
Joint Determination by Multiple Causes: Development is a product of multiple interacting sources.
Context Sensitivity and Contingency: Development depends on the current state of the organism.
Extended Inheritance: An organism inherits resources from the environment in addition to genes.
Development as a process of construction: The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.
Distributed Control: Idea that no single source of influence has central control over an organism's development.
Evolution As Construction: The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population.
A computing metaphor
To adopt a computing metaphor, the reductionists (whom developmental systems theory opposes) assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data.
Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either.
Fundamental asymmetry
For reductionists there is a fundamental asymmetry between different causal factors, whereas for DST such asymmetries can only be justified by specific purposes, and argue that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes or genetic factors of determination and environmental factors of realisation) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism.
DST approach
One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilised zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termite must adapt.
This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position and maternal effects on gene expression to epigenetic inheritance to the active construction and intergenerational transmission of enduring niches, developmental systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioural and symbolic – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. DST regards every level of biological structure as susceptible to influence from all the structures by which they are surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of Mendelian genetics, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’.
Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation.
Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied.
Criticism
Philosopher Neven Sesardić, while not dismissive of developmental systems theory, argues that its proponents forget that the role of interaction between levels is ultimately an empirical issue, which cannot be settled by a priori speculation; Sesardić observes that while the emergence of lung cancer is a highly complicated process involving the combined action of many factors and interactions, it is not unreasonable to believe that smoking has an effect on developing lung cancer. Therefore, though developmental processes are highly interactive, context dependent, and extremely complex, it is incorrect to conclude that main effects of heredity and environment are unlikely to be found in the "messiness". Sesardić argues that the idea that the effect of changing one factor always depends on what is happening in other factors is an empirical claim, and a false one; for example, the bacterium Bacillus thuringiensis produces a protein that is toxic to caterpillars. Genes from this bacterium have been placed into plants vulnerable to caterpillars, and the insects die when they eat part of the plant, as they consume the toxic protein. Thus, developmental approaches must be assessed on a case-by-case basis, and in Sesardić's view DST does not offer much if posed only in general terms. The hereditarian psychologist Linda Gottfredson differentiates the "fallacy of so-called 'interactionism'" from the technical use of gene-environment interaction to denote a non-additive environmental effect conditioned on genotype. "Interactionism's" over-generalization cannot render attempts to identify genetic and environmental contributions meaningless. Where behavioural genetics attempts to determine the portions of variation accounted for by genetics, environmental-developmentalist approaches such as DST attempt to determine the typical course of human development and, in Gottfredson's view, erroneously conclude that the common theme is readily changed.
Another of Sesardić's arguments counters the DST claim that it is impossible to determine the contribution of genetic versus environmental influence to a trait. If genes and environment are truly inseparable, it necessarily follows that a trait cannot be causally attributed to the environment either; yet DST, while critical of genetic heritability, advocates developmentalist research into environmental effects, a logical inconsistency. Barnes et al. made similar criticisms, observing that the innate human capacity for language (deeply genetic) does not determine the specific language spoken (a contextually environmental effect). It is then, in principle, possible to separate the effects of genes and environment. Similarly, Steven Pinker argues that if genes and environment could not actually be separated, then speakers would have a deterministic genetic disposition to learn a specific native language upon exposure. Though seemingly consistent with the idea of gene-environment interaction, Pinker argues it is nonetheless an absurd position, since empirical evidence shows ancestry has no effect on language acquisition: environmental effects are often separable from genetic ones.
Related theories
Developmental systems theory is not a narrowly defined collection of ideas, and the boundaries with neighbouring models are porous. Notable related ideas (with key texts) include:
The Baldwin effect
Evolutionary developmental biology
Neural Darwinism
Probabilistic epigenesis
Relational developmental systems
See also
Systems theory
Complex adaptive system
Developmental psychobiology
The Dialectical Biologist - a 1985 book by Richard Levins and Richard Lewontin which describes a related approach.
Living systems
References
Bibliography
Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
Dawkins, R. (1982). The Extended Phenotype. Oxford: Oxford University Press.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Durham, N.C.: Duke University Press.
Edelman, G.M. (1987). Neural Darwinism: Theory of Neuronal Group Selection. New York: Basic Books.
Edelman, G.M. and Tononi, G. (2001). Consciousness. How Mind Becomes Imagination. London: Penguin.
Goodwin, B.C. (1995). How the Leopard Changed its Spots. London: Orion.
Goodwin, B.C. and Saunders, P. (1992). Theoretical Biology. Epigenetic and Evolutionary Order from Complex Systems. Baltimore: Johns Hopkins University Press.
Jablonka, E., and Lamb, M.J. (1995). Epigenetic Inheritance and Evolution. The Lamarckian Dimension. London: Oxford University Press.
Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
Levins, R. and Lewontin, R. (1985). The Dialectical Biologist. London: Harvard University Press.
Neumann-Held, E.M. (1999). The gene is dead- long live the gene. Conceptualizing genes the constructionist way. In P. Koslowski (ed.). Sociobiology and Bioeconomics: The Theory of Evolution in Economic and Biological Thinking, pp. 105–137. Berlin: Springer.
Waddington, C.H. (1957). The Strategy of the Genes. London: Allen and Unwin.
Further reading
Depew, D.J. and Weber, B.H. (1995). Darwinism Evolving. System Dynamics and the Genealogy of Natural Selection. Cambridge, Massachusetts: MIT Press.
Eigen, M. (1992). Steps Towards Life. Oxford: Oxford University Press.
Gray, R.D. (2000). Selfish genes or developmental systems? In Singh, R.S., Krimbas, C.B., Paul, D.B., and Beatty, J. (2000). Thinking about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge University Press: Cambridge. (184-207).
Koestler, A., and Smythies, J.R. (1969). Beyond Reductionism. London: Hutchinson.
Lehrman, D.S. (1953). A critique of Konrad Lorenz’s theory of instinctive behaviour. Quarterly Review of Biology 28: 337-363.
Thelen, E. and Smith, L.B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, Massachusetts: MIT Press.
External links
William Bechtel, Developmental Systems Theory and Beyond presentation, winter 2006.
Biological systems
Systems theory
Evolutionary biology | 0.8023 | 0.963026 | 0.772635 |
Biocapacity
The biocapacity or biological capacity of an ecosystem is an estimate of its production of certain biological materials such as natural resources, and its absorption and filtering of other materials such as carbon dioxide from the atmosphere.
Biocapacity is used together with ecological footprint as a method of measuring human impact on the environment. Biocapacity and ecological footprint are tools created by the Global Footprint Network, used in sustainability studies around the world.
Biocapacity is expressed in terms of global hectares per person and is thus dependent on human population. A global hectare is an adjusted unit that represents the average biological productivity of all productive hectares on Earth in a given year (because not all hectares produce the same amount of ecosystem services). Biocapacity is calculated from United Nations population and land use data, and may be reported at various regional levels, such as a city, a country, or the world as a whole.
For example, there were roughly 12.2 billion hectares of biologically productive land and water areas on this planet in 2016. Dividing by the number of people alive in that year, 7.4 billion, gives a biocapacity for the Earth of 1.6 global hectares per person. These 1.6 global hectares include the areas for wild species that compete with people for space.
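The per-person figure above is a simple division of productive area by population; the sketch below reproduces it with the 2016 numbers quoted in the text.

    # Global biocapacity per person, using the 2016 figures quoted above.
    productive_area_gha = 12.2e9  # biologically productive land and water, in global hectares
    population = 7.4e9            # people alive in 2016
    print(round(productive_area_gha / population, 1))  # 1.6 global hectares per person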
Applications of biocapacity
An increase in global population can result in a decrease in biocapacity per person. This is usually because the Earth's resources have to be shared; as the population grows, there is less available to supply its increasing demand. At present this issue can be partly offset by outsourcing, that is, drawing on resources from elsewhere, but as demands keep increasing, resources run out and the collapse of an ecosystem can be the consequence. When the ecological footprint becomes greater than the biocapacity of the population, a biocapacity deficit occurs.
'Global biocapacity' is a term sometimes used to describe the total capacity of an ecosystem to support various continuous activity and changes. When the ecological footprint of a population exceeds the biocapacity of the environment it lives in, this is called a 'biocapacity deficit'. Such a deficit comes from three sources: overusing one's own ecosystems ("overshoot"), net imports, or use of the global commons. The latest data from the Global Footprint Network suggest that humanity was using the equivalent of 1.7 Earths in 2016. The dominant factor in global ecological overshoot is carbon dioxide emissions from fossil fuel burning. Additional stresses from greenhouse gases, climate change, and ocean acidification can also aggravate the problem.
In terms of the definition of biocapacity, 1.7 Earths means that renewable resources are being liquidated because they are consumed faster than they can regenerate: it takes one year and eight months for the resources humanity uses in one year to regenerate, including absorbing all the waste we generate. So instead of using one year's worth of resources per year, we are each year consuming resources that should last us one year and eight months.
In addition, if this matter becomes severe, ecological reserves may be established in areas to preserve their ecosystems. Resources about which awareness of depletion has grown include agricultural land, forest resources and rangeland. Biocapacity used in correlation with ecological footprint can therefore suggest whether a specific population, region, country or part of the world is living within the means of its capital. Accordingly, the combined study of biocapacity and ecological footprint is known as Ecological Footprint Analysis (EFA).
Biocapacity is also affected by the technology used during the year. With new technologies emerging, it is not clear whether the technology in that year is good or bad but the technology does impact resource supply and demand, which in turn affects biocapacity. Hence what is considered “useful” can change from year to year (e.g. use of corn (maize) stover for cellulosic ethanol production would result in corn stover becoming a useful material, and thus increase the biocapacity of maize cropland).
Moreover, environmentalists have created ecological footprint calculators that allow individuals to determine whether they are consuming more than what is available to them within their population. Biocapacity results are then applied to an individual's ecological footprint to determine how much they contribute to, or take away from, sustainable development.
In general, biocapacity is the amount of resources available to a specific population at a specific moment in time (supply), as distinguished from the ecological footprint, which is the environmental demand placed on a regional ecosystem. Biocapacity can be used to gauge human impacts on Earth: by determining the productivity of land (i.e. the resources available for human consumption), it allows the effects of human consumption on ecosystems to be predicted and examined on the basis of collected consumption data. The biocapacity of an area is calculated by multiplying the actual physical area by the yield factor and the appropriate equivalence factor. Biocapacity is usually expressed in global hectares (gha). Since global hectares convert human consumption of things such as food and water into a common measurement, biocapacity can be applied to determine the carrying capacity of the Earth. Likewise, because an economy is tied to various production factors such as natural resources, biocapacity can also be applied to determine human capital.
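A minimal sketch of the conversion just described: a physical area is scaled by a yield factor (how productive the local land is relative to the world average for that land type) and an equivalence factor (how productive that land type is relative to the average bioproductive hectare). The factor values below are hypothetical and chosen only for illustration.

    def biocapacity_gha(area_ha, yield_factor, equivalence_factor):
        """Convert a physical area in hectares into global hectares (gha)
        by scaling with the land type's yield and equivalence factors."""
        return area_ha * yield_factor * equivalence_factor

    # Hypothetical example: 1,000 ha of cropland in a country whose cropland
    # yields 1.3 times the world average for cropland, where cropland counts
    # as 2.5 times as productive as the average bioproductive hectare.
    print(biocapacity_gha(1000, yield_factor=1.3, equivalence_factor=2.5))  # 3250.0 gha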
See also
List of countries by ecological footprint
Global Footprint Network
Global hectare
Human population
Carrying capacity
Ecological reserve
Sustainable development
Ecological footprint
World energy consumption
References
Other resources
Videos
Finding Australia’s biocapacity: Dr Mathis Wackernagel explains biocapacity and how it is calculated.
Ecological Balance Sheets for 180+ Countries (Global Footprint Network)
Peer-reviewed Articles
The importance of resource security for poverty eradication
Defying the Footprint Oracle: Implications of Country Resource Trends
Data
Results from the National Footprint and Biocapacity Accounts
Human overpopulation
Sustainability
Population ecology
Economic sector
One classical breakdown of economic activity distinguishes three sectors:
Primary: involves the retrieval and production of raw-material commodities, such as corn, coal, wood or iron. Miners, farmers and fishermen are all workers in the primary sector.
Secondary: involves the transformation of raw or intermediate materials into goods, as in steel into cars, or textiles into clothing. Builders and dressmakers work in the secondary sector.
Tertiary: involves the supplying of services to consumers and businesses, such as babysitting, cinemas or banking. Shopkeepers and accountants work in the tertiary sector.
In the 20th century, economists began to suggest that traditional tertiary services could be further distinguished from "quaternary" and quinary service sectors. Economic activity in the hypothetical quaternary sector comprises information- and knowledge-based services, while quinary services include industries related to human services and hospitality.
Economic theories divide economic sectors further into economic industries.
Historic evolution
An economy may include several sectors that evolved in successive phases:
The ancient economy built mainly on the basis of subsistence farming.
The Industrial Revolution lessened the role of subsistence farming, converting land-use to more extensive and monocultural forms of agriculture over the last three centuries. Economic growth took place mostly in the mining, construction and manufacturing industries.
In the economies of modern consumer societies, services, finance, and technology—the knowledge economy—play an increasingly significant role.
Even in modern times, developing countries tend to rely more on the first two sectors, in contrast to developed countries.
By ownership
An economy can also be divided along different lines:
Public sector or state sector
Private sector or privately run businesses
Voluntary sector
See also
Three-sector theory
Jean Fourastié
Industry classification
International Standard Industrial Classification
Industry Classification Benchmark
North American Industry Classification System – a sample application of sector-oriented analysis
Division of labour
Economic development
References
Business analysis
Business management
Convention (norm)
A convention is a set of agreed, stipulated, or generally accepted standards, social norms, or other criteria, often taking the form of a custom.
In physical sciences, numerical values (such as constants, quantities, or scales of measurement) are called conventional if they do not represent a measured property of nature, but originate in a convention, for example an average of many measurements, agreed between the scientists working with these values.
General
A convention is a selection from among two or more alternatives, where the rule or alternative is agreed upon among participants. Often the word refers to unwritten customs shared throughout a community. For instance, it is conventional in many societies that strangers being introduced shake hands. Some conventions are explicitly legislated; for example, it is conventional in the United States and in Germany that motorists drive on the right side of the road, whereas in Australia, New Zealand, Japan, Nepal, India and the United Kingdom motorists drive on the left. The standardization of time is a human convention based on the solar cycle or calendar. The extent to which justice is conventional (as opposed to natural or objective) is historically an important debate among philosophers.
The nature of conventions has raised long-lasting philosophical discussion. Quine, Davidson, and David Lewis published influential writings on the subject. Lewis's account of convention received an extended critique in Margaret Gilbert's On Social Facts (1989), where an alternative account is offered. Another view of convention comes from Ruth Millikan's Language: A Biological Model (2005), once more against Lewis.
According to David Kalupahana, The Buddha described conventions—whether linguistic, social, political, moral, ethical, or even religious—as arising dependent on specific conditions. According to his paradigm, when conventions are considered absolute realities, they contribute to dogmatism, which in turn leads to conflict. This does not mean that conventions should be absolutely ignored as unreal and therefore useless. Instead, according to Buddhist thought, a wise person adopts a Middle Way without holding conventions to be ultimate or ignoring them when they are fruitful.
Customary or social conventions
Social
In sociology, a social rule refers to any social convention commonly adhered to in a society. These rules are not written in law or otherwise formalized. In social constructionism, there is a great focus on social rules. It is argued that these rules are socially constructed, that these rules act upon every member of a society, but at the same time, are re-produced by the individuals.
Sociologists representing symbolic interactionism argue that social rules are created through the interaction between the members of a society. The focus on active interaction highlights the fluid, shifting character of social rules. These are specific to the social context, a context that varies through time and place. That means a social rule changes over time within the same society. What was acceptable in the past may no longer be the case. Similarly, rules differ across space: what is acceptable in one society may not be so in another.
Social rules reflect what is acceptable or normal behaviour in any situation. Michel Foucault's concept of discourse is closely related to social rules as it offers a possible explanation how these rules are shaped and change. It is the social rules that tell people what is normal behaviour for any specific category. Thus, social rules tell a woman how to behave in a womanly manner, and a man, how to be manly. Other such rules are as follows:
Strangers being introduced shake hands, as in Western societies, but:
Bow toward each other, in Korea, Japan and China
Wai each other in Thailand
Do not bow at each other, in the Jewish tradition
In the United States, eye contact, a nod of the head toward each other, and a smile, with no bowing; the palm of the hand faces sideways, neither upward nor downward, in a business handshake.
Present business cards to each other, in business meetings (both-handed in Japan)
Click heels together, while saluting in some military contexts
In most places it is polite to ask before kissing or hugging someone; kissing or hugging in public is called a public display of affection.
A property norm is to place things back where we found them.
A property norm is used to identify which commodities are accepted as money.
A sexual norm can refer to a personal or a social norm. Most cultures have social norms regarding sexuality, and define normal sexuality to consist only of certain sex acts between individuals who meet specific criteria of age, consanguinity, race/ethnicity, and/or social role and socioeconomic status. In the West, outside the traditional norm of acts between consenting adults, what is considered not normal is what is regarded as paraphilia or sexual perversion.
Whether a form of marriage such as polygyny or polyandry is right or wrong depends on the given society, and homosexual marriage is considered wrong in many societies. An example of a religious more is that a woman or man must not cohabit with a romantic partner until they have married. Adultery, that is, violating sexual fidelity within a marriage, is considered wrong.
A men's and women's dress code.
Avoid rude hand gestures such as pointing at people, as well as swear words and other offensive language.
A woman's curtsey in some societies
In the Middle East, never displaying the sole of the foot toward another, as this would be seen as a grave insult.
In many schools, though seats for students are not assigned they are still "claimed" by certain students, and sitting in someone else's seat is considered an insult.
To reciprocate when something is done for us.
Etiquette norms, like asking to be excused from the gathering's table, be ready to pay for your bill particularly in the case you asked people to dinner, it is a faux pas to refuse an offer of food as a guest.
Contraception norms: some cultures limit access to contraception, while in others the norm is not to limit access for women who require it.
Recreational drug use norms: some drugs, such as alcohol, nicotine, cannabis and hashish, are popularly accepted in the cultures where they are used, while for controlled substances such as MDMA and other party drugs there is a disincentive or outright prohibition, with use and sale forbidden.
The belief that certain forms of discrimination are unethical because they take something away from the person through restrictions and ostracism, and can "restrict women's and girls' rights, access to empowerment opportunities and resources".
A person has a duty of care for aged members of the family; this is particularly true in Asian countries. Much of aged care falls under unpaid labor.
Refuse to favor known persons, as this would be an abuse of a power relationship.
Do not make a promise if you know that you can not keep it.
Do not ask for money if you know that you can not pay it back to that person or place.
"Practice honesty and not deceive the innocent with false promises to obtain economic benefits or gratuities."
In the United States, it is customary to recite the Pledge of Allegiance when prompted to in some social contexts.
A gentlemen's agreement, or gentleman's agreement, is an informal and legally non-binding agreement between two or more parties. The norm is to follow through on business dealings: when we say we will do something, we do it and do not falter.
Do not divulge the privacy of others.
Treat friends and family nonviolently; be faithful and honest in a relationship; treat with respect the beliefs, activities and aims of one's parents; and show respect for the beliefs and the religious and cultural symbols of others.
Tolerate and respect people with functional diversity, particularly when they wish to join in a game or on a sports team. Also tolerate points of view different from your own, even if contrary, and do not try to change others' beliefs by force.
Give up one's seat to people with children, pregnant women or the elderly on public and private transportation.
Face the front, do not go elevator surfing, and do not push extra buttons in an elevator or stand too close to someone if there are few people.
In a library, it is polite to talk at the same volume as one would in a classroom.
In a cinema, it is correct not to talk during a movie, because people are there to watch the film; it is also correct to keep phones off, as the light and sound will distract other patrons.
If you are not going to be punctual, notify friends or acquaintances that you will be late.
If you cannot show up to an outing, restaurant, theater, cinema, etc., it is proper to give the reason by phone or in person sometime beforehand.
It is a norm to speak one at a time.
A religious vow is a special promise made in a religious context or in ceremonies such as marriages, where a couple exchange "marriage vows", promising one another to be faithful and to take care of their children.
Helping somebody in need, for social responsibility or to prevent harm. See the parable of the Good Samaritan.
Do not go to a non-fast food restaurant or bar unless you have enough to make a good tip, depending on the place.
Examples of US social norms or customs turned into laws include the following:
People under 21 cannot buy alcohol.
You must be 16 to drive.
Firearms are legal and relatively accessible to anyone who wants one.
In a city you cannot cross the street wherever you like, you must use a zebra crossing. You can be fined if the police catch you breaking this rule.
It is a social norm to provide tips in the US to waitresses and waiters.
There are numerous gender-specific norms that influence society:
Girls should wear pink; boys should wear blue.
Men should be strong and not show any emotion.
Women should be caring and nurturing.
Men should do repairs at the house and be the ones to work and make money, while women are expected to take care of the housework and children.
A man should pay for the woman's meal when going out to dinner.
Men should open doors for women at bars, clubs, workplace, and should clear the way for the exit.
Government
In government, convention is a set of unwritten rules that participants in the government must follow. These rules can be ignored only if justification is clear, or can be provided. Otherwise, consequences follow. Consequences may include ignoring some other convention that has until now been followed. According to the traditional doctrine (Dicey), conventions cannot be enforced in courts, because they are non-legal sets of rules. Convention is particularly important in the Westminster System of government, where many of the rules are unwritten.
See also
A Dictionary of Slang and Unconventional English
Conventional electrical unit
Conventional insulin therapy
Conventional landing gear
Conventional pollutant
Conventional sex
Conventional superconductor
Conventional treatment
Conventional tillage
Conventional wastewater treatment
Conventional wisdom
Conventionalism
Conventionally grown
De facto standard
Non-conventional trademark
Standard (disambiguation)
Trope (literature)
Unconventional computing
Unconventional superconductor
Unconventional wind turbines
References
External links
Rescorla, Michael (2007) Convention – Stanford Encyclopedia of Philosophy
Law-Ref.org – an index of important international conventions
Concepts in ethics
Consensus reality
Social agreement
Social concepts
Natalism
Natalism (also called pronatalism or the pro-birth position) is a policy paradigm or personal value that promotes the reproduction of human life as an important objective of humanity and therefore advocates a high birthrate.
According to the Merriam-Webster dictionary, the term, as it relates to the belief itself, dates from 1971 and is formed from a word meaning birthrate.
Just as an almost universal population decline appears to be associated with cultural modernization, attempts at a political response are also growing. According to the UN, the share of countries with pronatalist policies grew from 20% in 2005 to 28% in 2019.
Motives
Generally, natalism promotes child-bearing and parenthood as desirable for social reasons and to ensure the continuance of humanity. Some philosophers have noted that if humans fail to have children, humanity would become extinct.
Religion
Many religions encourage procreation, and religiousness in members can sometimes correlate to higher rates of fertility. Judaism, Islam, and major branches of Christianity, including the Church of Jesus Christ of Latter-day Saints and the Catholic Church, encourage procreation. In 1979 one research paper indicated that Amish people had an average of 6.8 children per family. Among some conservative Protestants, the Quiverfull movement advocates for large families and views children as blessings from God.
Those who adhere to a more traditionalist framing may therefore also seek to limit access to abortion and contraception. The 1968 encyclical Humanae Vitae, for example, criticized artificial contraception and advocated a natalist position.
Politics
Beginning around the early 2020s, the threat of "global demographic collapse" began to become a cause célèbre among wealthy tech and venture-capitalist circles as well as the political right. In Europe, Hungarian prime minister Viktor Orbán has made natalism a key plank of his political platform. In the United States, key figures include Kevin Dolan, organizer of the Natal Conference, Simone and Malcolm Collins, founders of Pronatalist.org, and Elon Musk, who has repeatedly used his public platform to discuss global birth rates.
The right-wing proponents of pronatalism argue that falling birthrates could lead to economic stagnation, diminished innovation, and an unsustainable burden on social systems due to an aging population. The movement suggests that without a significant increase in birth rates, the sustainability of civilizations could be in danger; Elon Musk has called it a "much bigger risk" than global warming.
Intention to have children
An intention to have children is a substantial predictor of actually doing so, but childless individuals who intend to have children immediately or within two or three years are generally more likely to succeed than those who intend to have children over the longer term.
There are many determinants of the intention to have children, including:
parents' preferred family size, which in turn influences their children's preferences through early adulthood. Likewise, the extended family influences fertility intentions, with larger numbers of nephews and nieces increasing the preferred number of children. These effects may be observed, for example, in Mormon or modern Israeli demographics.
social pressure from kin and friends to have another child, as well as broader cultural norms.
social support. However, a study from West Germany came to the conclusion that both men receiving no support at all and men receiving support from many different people have a lower probability of intending to have another child, with the latter probably related to coordination problems.
happiness, with happier people tending to want more children. However, other research has shown that the social acceptability of the choice to have or not have children plays a significant role in reproductive decisions. The social stigma, marginalization, and even domestic violence that can accompany those without children, by choice or chance, are significant factors in their feelings of happiness and belonging within their communities.
a secure housing situation and, more generally, a feeling of overall economic stability.
Concrete policies
Natalism in public policy typically seeks to create financial and social incentives for populations to reproduce, such as providing tax incentives that reward having and supporting children.
Some countries experiencing population decline offer incentives to have large families as part of national efforts to reverse the trend. Incentives may include a one-time baby bonus, ongoing child benefit payments, or tax reductions. Some impose penalties or taxes on those with fewer children. Some nations, such as Japan, Singapore, and South Korea, have implemented, or tried to implement, interventionist natalist policies that create incentives for larger families among the native-born population.
Paid maternity and paternity leave policies can also be used as an incentive. For example, Sweden has generous parental leave in which parents are entitled to share 16 months' paid leave per child, with the cost divided between employer and state. However, this policy appears not to have raised birthrates as intended.
Postcommunist
Russia
Natalist thinking was common during Soviet times. After a brief adherence to strict Communist doctrine in the 1920s, including attempts to raise children communally alongside government-provided healthcare, the Soviet government switched to neo-traditionalism: promoting family values and sobriety, banning abortion, making divorce harder to obtain, and advancing natalist ideals that mocked irresponsible parents. Expanded opportunities for female employment contributed to a population crisis in the 1930s, to which the government responded by expanding access to child care from the age of two. After the Great Patriotic War, the skewed ratio of men to women prompted additional financial assistance to women who had children or were pregnant. Despite this promotion and long maternity leave with continued employment and salary, modernization still caused birthrates to keep sliding into the 1970s.
The end of the USSR in 1991 was accompanied by a large drop in fertility. In 2006, Vladimir Putin made demographics an important issue, instituting a two-pronged approach of direct financial rewards and socio-cultural policies. A notable example of the former is the maternal-capital program, in which mothers are provided with subsidies that can be spent only on improved housing or a child's education (and can also be saved toward retirement).
Hungary
In 2019 the Hungarian government of Viktor Orbán announced financial incentives, including eliminating income tax for mothers with more than three children, reduced credit payments, and easier access to loans, as well as expanded day care and kindergarten access.
Critics
Natalism has been criticized on human-rights and environmental grounds. Most antinatalists, Malthusians, reproductive-rights advocates, and environmentalists see natalism as a driver of reproductive injustice, population growth, and ecological overshoot. In politics, journalists have linked the pronatalist movement with far-right eugenics.
See also
Anti-abortion movements
Child tax credit
Tax on childlessness (Roman jus trium liberorum, Romanian Decree 770), such as the bachelor tax
Population decline
Gender role
References
Sources
Further reading
Calder, Vanessa Brown, and Chelsea Follett (August 10, 2023). Freeing American Families: Reforms to Make Family Life Easier and More Affordable, Policy Analysis no. 955, Cato Institute, Washington, DC.
Caplan, Bryan. Selfish Reasons To Have More Kids (Basic Books, 2012).
Last, Jonathan V. What to Expect When No One's Expecting (Encounter Books, 2013).
Lovett, Laura L. Conceiving the Future: Pronatalism, Reproduction, and the Family in the United States, 1890–1938 (University of North Carolina Press, 2007). Online: http://www.jstor.org/stable/10.5149/9780807868102_lovett.1
McKeown, John. God's babies: Natalism and Bible interpretation in modern America (Open Book Publishers, 2014) online.
Human population planning
Philosophy of biology
In situ
In situ is a Latin phrase meaning "in place" or "on site", derived from in ("in") and situ (the ablative of situs, "place"). It denotes an object's existence or a process's occurrence within its original environment. This concept, widely applied across disciplines, enhances analytical accuracy by preserving contextual factors critical to the subject under investigation. In contrast, ex situ methods, which involve relocation, risk altering or disrupting inherent contexts.
In situ methodologies are frequently employed in the natural sciences. Geologists analyze soil composition and rock formations in the field, while environmental scientists monitor ecosystems on-site, ensuring observations reflect true environmental states. Biologists study organisms within their natural habitats, uncovering behaviors and ecological relationships that may not manifest in artificial settings. Chemistry and experimental physics employ in situ techniques to examine substances and reactions in their original states, enabling real-time observation of dynamic processes.
Applied sciences use in situ methodologies to develop solutions to tangible problems. Aerospace engineering utilizes on-site inspection and monitoring technologies to evaluate systems within operational environments, avoiding service interruptions. Medicine, especially oncology, employs the term to describe early-stage cancers confined to their original location. Identifying a tumor as in situ indicates that it has not invaded neighboring tissues, a critical factor in determining prognosis and treatment strategies. In space science, in situ planetary exploration involves direct observation and data collection from celestial bodies, circumventing the logistical challenges of sample-return missions.
In the humanities, particularly archaeology, the concept of in situ is applied to preserve the contextual integrity of the subject under examination. Archaeologists study artifacts at their discovery sites to maintain the spatial relationships and environmental factors that contribute to accurate historical interpretations. The arts embrace the in situ concept when creating or displaying artwork within its intended context. Artists may design pieces specifically for certain locations, such as sculptures integrated into public parks or installations that interact with architectural spaces. Displaying art in situ strengthens the connection between the work and its surroundings by situating the piece within a broader environmental or cultural framework.
Aerospace engineering
In the aerospace industry, in situ refers to inspection and monitoring technologies used to assess the condition of systems or components within their operational environment, without requiring disassembly or removal from service. Various non-destructive and structural monitoring methods are available for detecting in situ damage during service, including infrared thermography, speckle shearing interferometry (also known as shearography), and ultrasonic testing, which are used to characterize damage from impacts on composite structures. Each method has its limitations—infrared thermography may be less effective on materials with low emissivity, shearography requires controlled environmental conditions, and ultrasonic testing can be time-consuming for large structures. However, their combined use has proven effective in damage assessment. A study demonstrated the use of live monitoring with AC and DC sensors to identify cracks, delaminations, and fiber fractures in composite laminates by detecting changes in electrical resistance and capacitance.
Archaeology
In archaeology, in situ refers to artifacts or other materials that remain at their original site, undisturbed since they were left by past peoples. Documenting the precise location, depth, and surrounding materials of in situ finds allows archaeologists to reconstruct detailed accounts of historical events and practices. While artifacts are often carefully extracted for analysis, features—such as hearths, postholes, and building foundations—typically must be documented in situ to preserve contextual information as excavation progresses to deeper layers. The documentation process includes not only written descriptions in site notebooks but also scaled drawings, mapping, and high-resolution photography. Advanced techniques such as 3D scanning and Geographic Information Systems (GIS) are employed to capture more complex details. Artifacts found out of context (ex situ) lack their original interpretive value; however, they can still offer insights into the types and locations of undiscovered in situ artifacts, thereby informing future excavations.
In the case of underwater shipwrecks, the Convention on the Protection of the Underwater Cultural Heritage articulates principles that signatory states are required to follow. Among these is the recommendation that in situ preservation be prioritized as the preferred approach. This preference partly arises from the unique preservation conditions underwater, where reduced oxygen levels and stable temperatures can keep artifacts intact for extended periods. Removing shipwrecks from their submerged context can lead to rapid deterioration upon exposure to air, such as the oxidization of iron components.
During the excavation of burial sites or surface deposits, in situ specifically refers to the detailed recording and cataloging of human remains as found in their original positions. The excavation of mass graves, in particular, shows the complexity of preserving remains in their in situ state, where they may be entangled with soil, clothing, and other artifacts. With dozens or even hundreds of bodies to recover, researchers need to document the remains in their original context before determining details such as identity, cause of death, and other forensic factors.
Art
In the arts, the term in situ was embraced by artists and critics in the late 1960s and 1970s to describe artworks created specifically for particular locations. These works are designed with careful consideration of the site's contextual attributes, making the relationship between the artwork and its environment central to their impact. Unlike pieces that are merely placed in a location, in situ artworks are conceived in dialogue with their settings, engaging with the location's history, geography, and social functions.
This approach is exemplified in the works of Christo and Jeanne-Claude, artists known for their site-specific environmental installations. Many of their projects involved wrapping large-scale landmarks and natural features in fabric, creating temporary transformations of familiar spaces that invite viewers to reconsider their surroundings in unexpected ways—The Pont Neuf Wrapped (1985) and Wrapped Reichstag (1995) are emblematic of the in situ approach. Similarly, American land artists, such as Robert Smithson and Michael Heizer, extended this concept into the natural landscape, where the art became inseparable from the earth itself. In a broader context, in situ has become an essential term in aesthetics and art criticism, signifying an artistic strategy that emphasizes the inseparability of a work from its site.
Astronomy
A fraction of the globular star clusters in the Milky Way Galaxy, as well as those in other massive galaxies, might have formed in situ. The rest might have been accreted from now-defunct dwarf galaxies.
In astronomy, in situ also refers to in situ planet formation, in which planets are hypothesized to have formed at the orbital distance at which they are currently observed, rather than to have migrated from a different orbit (referred to as ex situ formation).
Biology and biomedical engineering
In biology and biomedical engineering, in situ means to examine the phenomenon exactly in place where it occurs (i.e., without moving it to some special medium).
In the case of observations or photographs of living animals, it means that the organism was observed (and photographed) in the wild, exactly as and where it was found; it was not taken out of the area or moved to another (perhaps more convenient) location such as an aquarium.
The phrase in situ, when used in laboratory sciences such as cell science, can mean something intermediate between in vivo and in vitro. For example, examining a cell within a whole, intact organ under perfusion is an in situ investigation. This would not be in vivo, as the donor is sacrificed for the experiment, but it is not the same as working with the cell alone (a common scenario for in vitro experiments). In biomedical engineering, for instance, an in situ procedure may directly create an implant from a patient's own tissue within the confines of the operating room.
In vitro experimentation was among the first attempts to qualitatively and quantitatively analyze natural occurrences in the laboratory. Its limitation was that experiments were not conducted in natural environments. To compensate for this problem, in vivo experimentation allowed testing to occur in the original organism or environment. To bridge the benefits of both methodologies, in situ experimentation combines the controlled conditions of in vitro work with the natural environmental setting of in vivo experimentation.
In conservation of genetic resources, "in situ conservation" (also "on-site conservation") is the process of protecting an endangered plant or animal species in its natural habitat, as opposed to ex situ conservation (also "off-site conservation").
Chemistry and chemical engineering
In chemistry, in situ typically means "in the reaction mixture."
There are numerous situations in which chemical intermediates are synthesized in situ in various processes. This may be done because the species is unstable, and cannot be isolated, or simply out of convenience. Examples of the former include the Corey-Chaykovsky reagent and adrenochrome.
In biomedical engineering, protein nanogels made by the in situ polymerization method provide a versatile platform for the storage and release of therapeutic proteins, with potential applications in cancer treatment, vaccination, diagnostics, regenerative medicine, and therapies for loss-of-function genetic diseases.
In chemical engineering, in situ often refers to industrial plant "operations or procedures that are performed in place." For example, aged catalysts in industrial reactors may be regenerated in place (in situ) without being removed from the reactors.
Civil engineering
In architecture and building, in situ refers to construction which is carried out at the building site using raw materials - as opposed to prefabricated construction, in which building components are made in a factory and then transported to the building site for assembly. For example, concrete slabs may be cast in situ (also "cast-in-place") or prefabricated.
In situ techniques are often more labour-intensive, and take longer, but the materials are cheaper, and the work is versatile and adaptable. Prefabricated techniques are usually much quicker, therefore saving money on labour costs, but factory-made parts can be expensive. They are also inflexible, and must often be designed on a grid, with all details fully calculated in advance. Finished units may require special handling due to excessive dimensions.
The phrase may also refer to those assets which are present at or near a project site. In this case, it is used to designate the state of an unmodified sample taken from a given stockpile.
Site construction usually involves grading the existing soil surface so that material is "cut" out of one area and "filled" in another area creating a flat pad on an existing slope. The term "in situ" distinguishes soil still in its existing condition from soil modified (filled) during construction. The differences in the soil properties for supporting building loads, accepting underground utilities, and infiltrating water persist indefinitely.
Computer science
In computer science, an in situ operation is one performed on data or a system in place, without moving the data or taking the system out of service. For example, a file backup may be restored over a running system, without needing to take the system down to perform the restore. In the context of a database, an in situ restore allows the database system to remain available to users while the restore happens. An in situ upgrade allows an operating system, firmware, or application to be upgraded while the system is still running, perhaps without the need to reboot it, depending on the sophistication of the system.
Another use of the term in situ in computer science focuses primarily on the use of technology and user interfaces to provide continuous access to situationally relevant information in various locations and contexts. Examples include athletes viewing biometric data on smartwatches to improve their performance, a presenter viewing tips on smart glasses to reduce their speaking rate during a speech, or technicians receiving online, step-by-step instructions for repairing an engine.
An algorithm is said to be an in situ algorithm, or in-place algorithm, if the extra memory required to execute it is O(1); that is, the extra memory does not exceed a constant regardless of input size, excluding the space used for recursive calls on the call stack. Typically such an algorithm operates on data objects directly in place rather than making copies of them.
For example, heapsort is an in situ sorting algorithm, which sorts the elements of an array in place. Quicksort is an in situ sorting algorithm, but in the worst case it requires linear space on the call stack (this can be reduced to log space). Merge sort is generally not written as an in situ algorithm.
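To make the definition concrete, the following is a minimal Python sketch (an illustration written for this article rather than taken from any cited source) of an in situ algorithm: it reverses a list using only a constant amount of extra memory, modifying the input in place instead of building a copy.

    def reverse_in_place(items):
        """Reverse a list in situ: only a constant amount of extra memory is used."""
        left, right = 0, len(items) - 1
        while left < right:
            # Swap the two ends and move both indices toward the middle.
            items[left], items[right] = items[right], items[left]
            left += 1
            right -= 1

    data = [3, 1, 4, 1, 5, 9]
    reverse_in_place(data)   # the original list object is modified in place
    print(data)              # prints [9, 5, 1, 4, 1, 3]

By contrast, an out-of-place version would allocate a second list of the same size, so its extra memory would grow with the input.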
In designing user interfaces, for example, if a word processor displays an image and allows the image to be edited without launching a separate image editor, this is called in situ editing.
AJAX partial page data updates are another example of in situ in a Web UI/UX context. Web 2.0 introduced AJAX and the concept of asynchronous requests to servers that replace a portion of a web page with new data without reloading the entire page, as the early HTML model dictated. Arguably, any asynchronous data transfer or background task is in situ, since the normal state of the application is unaware of background tasks and is usually notified of their completion by a callback mechanism.
With big data, in situ processing means bringing the computation to where the data is located, rather than moving the data to the computation, as in traditional RDBMS architectures.
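As a rough sketch of this idea (the file path and column name below are illustrative assumptions, not taken from any particular system), the following Python function computes an aggregate directly over the file where it resides, streaming one record at a time rather than first copying the dataset into a separate computational store.

    import csv

    def total_amount_in_situ(path):
        """Sum a numeric column by streaming over the file in place,
        without copying the whole dataset into a separate database first."""
        total = 0.0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):      # reads one record at a time
                total += float(row["amount"])  # "amount" is an assumed column name
        return total

    # Hypothetical usage; only the running total is ever held in memory.
    # print(total_amount_in_situ("/data/warehouse/sales.csv"))

Only the aggregate crosses from the data's location to the consumer, which is the essence of bringing the computation to the data.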
Design and advertising
In design and advertising the term typically means the superimposing of theoretical design elements onto photographs of real world locations. This is a pre-visualization tool to aid in illustrating a proof of concept.
Earth, ocean and atmospheric sciences
In physical geography and the Earth sciences, in situ typically describes natural material or processes prior to transport. For example, in situ is used in relation to the distinction between weathering and erosion, the difference being that erosion requires a transport medium (such as wind, ice, or water), whereas weathering occurs in situ. Geochemical processes are also often described as occurring to material in situ.
In oceanography and ocean sciences, in situ generally refers to observational methods made by obtaining direct samples of the ocean state, such as that obtained by shipboard surveying using a lowered CTD rosette that directly measure ocean salinity, temperature, pressure and other biogeochemical quantities like dissolved oxygen. Historically a reversing thermometer would be used to record the ocean temperature at a particular depth and a Niskin or Nansen bottle used to capture and bring water samples back to the ocean surface for further analysis of the physical, chemical or biological composition.
In the atmospheric sciences, in situ refers to measurements obtained through direct contact with the respective subject, such as a radiosonde measuring a parcel of air or an anemometer measuring wind, as opposed to remote sensing such as weather radar or satellites.
Economics
In economics, in situ is used when referring to the in-place storage of a product, usually a natural resource. More generally, it refers to any situation in which there is no out-of-pocket cost to store the product, so that the only storage cost is the opportunity cost of waiting longer to receive payment when the product is eventually sold. Examples of in situ storage include oil and gas wells, all types of mineral and gem mines, stone quarries, timber that has reached harvestable age, and agricultural products that do not need a physical storage facility, such as hay.
Electrochemistry
In electrochemistry, the phrase in situ refers to performing electrochemical experiments under operating conditions of the electrochemical cell, i.e., under potential control. This is opposed to doing ex situ experiments that are performed under the absence of potential control. Potential control preserves the electrochemical environment essential to maintain the double layer structure intact and the electron transfer reactions occurring at that particular potential in the electrode/electrolyte interphasial region.
Environmental remediation
In situ can refer to the cleanup or remediation of a polluted site performed using and stimulating the natural processes in the soil, in contrast to ex situ remediation, in which contaminated soil is excavated and cleaned elsewhere, off site.
Experimental physics
In transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM), in situ refers to the observation of materials as they are exposed to external stimuli within the microscope, under conditions that mimic their natural environments. This enables real-time observation of material behavior at the nanoscale. External stimuli in in situ TEM/STEM experiments include mechanical loading and pressure, temperature changes, electrical currents (biasing), radiation, and environmental factors—such as exposure to gas, liquid, and magnetic field—or any combination of these. These conditions allow researchers to study atomic-level processes such as phase transformations, chemical reactions, or mechanical deformations, providing insights into material behavior and properties essential for advancements in materials science.
Experimental psychology
In psychology experiments, in situ typically refers to those experiments done in a field setting as opposed to a laboratory setting.
Gastronomy
In gastronomy, "in situ" refers to the art of cooking with the resources available at the site of an event: rather than the diner going to the restaurant, the restaurant comes to the diner's home.
Law
In legal contexts, in situ is often used for its literal meaning. For example, in Hong Kong, "in situ land exchange" involves the government exchanging the original or expired lease of a piece of land with a new grant or re-grant with the same piece of land or a portion of that.
In the field of recognition of governments under public international law the term in situ is used to distinguish between an exiled government and a government with effective control over the territory, i.e. the government in situ.
Linguistics
In linguistics, specifically syntax, an element may be said to be in situ if it is pronounced in the position where it is interpreted. For example, questions in languages such as Chinese have in situ wh-elements, with structures comparable to "John bought what?" with what in the same position in the sentence as the grammatical object would be in its affirmative counterpart (for example, "John bought bread"). An example of an English wh-element that is not in situ (see wh-movement): "What did John buy?"
Literature
In literature, in situ is used to describe the condition in which an object exists or is found. The Rosetta Stone, for example, was originally erected in a courtyard for public viewing. Most pictures of the famous stone are not in situ pictures of it erected as it originally would have been; the stone was rediscovered reused as building material within a wall. Its in situ condition today is that it stands vertically on public display at the British Museum in London, England.
Medicine
In cancer/oncology: in situ means that malignant cells are present as a tumor but have not metastasized, or invaded beyond the layer or tissue type where it arose. This can happen anywhere in the body, such as the skin, breast tissue, or lung. For example, a cancer of epithelial origin with such features is called carcinoma in situ, and is defined as not having invaded beyond the basement membrane.
This type of tumor can often, depending on where it is located, be removed by surgery.
In anatomy: in situ refers to viewing structures as they appear in normal healthy bodies. For example, one can open up a cadaver's abdominal cavity and view the liver in situ or one can look at an isolated liver that has been removed from the cadaver's body.
In nursing, "in situ" describes any devices or appliances on the patient's body that remain in their desired and optimal position.
In medical simulation, "in situ" refers to the practice of clinical professionals using high fidelity patient simulators to train for clinical practice in patient care environments, such as wards, operating rooms, and other settings, rather than in dedicated simulation training facilities.
Mining
In situ leaching or in situ recovery refers to the mining technique of injecting lixiviant underground to dissolve ore and bringing the pregnant leach solution to surface for extraction. Commonly used in uranium mining but has also been used for copper mining.
Petroleum production
In situ refers to recovery techniques which apply heat or solvents to heavy crude oil or bitumen reservoirs beneath the Earth's crust. There are several varieties of in situ techniques, but the ones which work best in the oil sands use heat (steam).
The most common type of in situ petroleum production is steam-assisted gravity drainage (SAGD), which has become very popular in the Alberta oil sands.
RF transmission
In radio frequency (RF) transmission systems, in situ is often used to describe the location of various components while the system is in its standard transmission mode, rather than operation in a test mode. For example, if an in situ wattmeter is used in a commercial broadcast transmission system, the wattmeter can accurately measure power while the station is "on air."
Space science
Future space exploration or terraforming may rely on obtaining supplies in situ, such as previous plans to power the Orion space vehicle with fuel minable on the Moon. The Mars Direct mission concept is based primarily on the in situ fuel production using the Sabatier reaction, which produces methane and water from a reaction of hydrogen and carbon dioxide.
In the space sciences, in situ refers to measurements of the particle and field environment that the satellite is embedded in, such as the detection of energetic particles in the solar wind, or magnetic field measurements from a magnetometer.
Urban planning
In urban planning, in-situ upgrading is an approach to and method of upgrading informal settlements.
Vacuum technology
In vacuum technology, in situ baking refers to heating parts of the vacuum system while they are under vacuum in order to drive off volatile substances that may be absorbed or adsorbed on the walls so they cannot cause outgassing.
Road assistance
The term in situ, as used in "repair in situ", means repairing a vehicle at the place where it has broken down.
See also
In situ conservation
Ex situ conservation
List of colossal sculptures in situ
List of Latin phrases
Notes
References
Latin words and phrases
Latin legal terminology
Latin biological phrases
Latin medical words and phrases
Animal test conditions
Scientific terminology
Global catastrophe scenarios
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic (caused by humans), such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
Anthropogenic
Experts at the Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge prioritize anthropogenic over natural risks due to their much greater estimated likelihood. They are especially concerned by, and consequently focus on, risks posed by advanced technology, such as artificial intelligence and biotechnology.
Artificial intelligence
The creators of a superintelligent entity could inadvertently give it goals that lead it to annihilate the human race. It has been suggested that if AI systems rapidly become super-intelligent, they may take unforeseen actions or out-compete humanity. According to philosopher Nick Bostrom, it is possible that the first super-intelligence to emerge would be able to bring about almost any possible outcome it valued, as well as to foil virtually any attempt to prevent it from achieving its objectives. Thus, even a super-intelligence indifferent to humanity could be dangerous if it perceived humans as an obstacle to unrelated goals. In Bostrom's book Superintelligence, he defines this as the control problem. Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have echoed these concerns, with Hawking theorizing that such an AI could "spell the end of the human race".
In 2009, the Association for the Advancement of Artificial Intelligence (AAAI) hosted a conference to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness, as depicted in science-fiction, is probably unlikely, but there are other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.
A survey of AI experts estimated that the chance of human-level machine learning having an "extremely bad (e.g., human extinction)" long-term effect on humanity is 5%. A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by super-intelligence by 2100. Eliezer Yudkowsky believes risks from artificial intelligence are harder to predict than any other known risks due to bias from anthropomorphism. Since people base their judgments of artificial intelligence on their own experience, he claims they underestimate the potential power of AI.
Biotechnology
Biotechnology can pose a global catastrophic risk in the form of bioengineered organisms (viruses, bacteria, fungi, plants, or animals). In many cases the organism will be a pathogen of humans, livestock, crops, or other organisms we depend upon (e.g. pollinators or gut bacteria). However, any organism able to catastrophically disrupt ecosystem functions, e.g. highly competitive weeds, outcompeting essential crops, poses a biotechnology risk.
A biotechnology catastrophe may be caused by accidentally releasing a genetically engineered organism from controlled environments, by the planned release of such an organism which then turns out to have unforeseen and catastrophic interactions with essential natural or agro-ecosystems, or by intentional usage of biological agents in biological warfare or bioterrorism attacks. Pathogens may be intentionally or unintentionally genetically modified to change virulence and other characteristics. For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents. The modified virus became highly lethal even in vaccinated and naturally resistant mice. The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated.
Biological weapons, whether used in war or terrorism, could result in human extinction. Terrorist applications of biotechnology have historically been infrequent. To what extent this is due to a lack of capabilities or motivation is not resolved. However, given current development, more risk from novel, engineered pathogens is to be expected in the future. Exponential growth has been observed in the biotechnology sector, and Nouri and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades. They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users). In 2008, a survey by the Future of Humanity Institute estimated a 2% probability of extinction from engineered pandemics by 2100.
Nouri and Chyba propose three categories of measures to reduce risks from biotechnology and natural pandemics: regulation or prevention of potentially dangerous research, improved recognition of outbreaks, and developing facilities to mitigate disease outbreaks (e.g. better and/or more widely distributed vaccines).
Chemical weapons
By contrast with nuclear and biological weapons, chemical warfare, while able to create multiple local catastrophes, is unlikely to create a global one.
Choice to have fewer children
Population decline could occur through a widespread preference for fewer children. If developing-world demographics are assumed to converge with developed-world demographics, and if the latter are extrapolated, some projections suggest extinction before the year 3000. John A. Leslie estimates that if the reproduction rate drops to the German or Japanese level, the extinction date will be 2400. However, some models suggest the demographic transition may reverse itself due to evolutionary biology.
Climate change
Human-caused climate change has been driven by technology since the 19th century or earlier. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, increased spread of known infectious diseases such as malaria, and rapid mutation of microorganisms.
A common belief is that the current climate crisis could spiral into human extinction. In November 2017, a statement by 15,364 scientists from 184 countries indicated that increasing levels of greenhouse gases from use of fossil fuels, human population growth, deforestation, and overuse of land for agricultural production, particularly by farming ruminants for meat consumption, are trending in ways that forecast an increase in human misery over coming decades. An October 2017 report published in The Lancet stated that toxic air, water, soils, and workplaces were collectively responsible for nine million deaths worldwide in 2015, particularly from air pollution which was linked to deaths by increasing susceptibility to non-infectious diseases, such as heart disease, stroke, and lung cancer. The report warned that the pollution crisis was exceeding "the envelope on the amount of pollution the Earth can carry" and "threatens the continuing survival of human societies". Carl Sagan and others have raised the prospect of extreme runaway global warming turning Earth into an uninhabitable Venus-like planet. Some scholars argue that much of the world would become uninhabitable under severe global warming, but even these scholars do not tend to argue that it would lead to complete human extinction, according to Kelsey Piper of Vox. All the IPCC scenarios, including the most pessimistic ones, predict temperatures compatible with human survival. The question of human extinction under "unlikely" outlier models is not generally addressed by the scientific literature. Factcheck.org judges that climate change fails to pose an established "existential risk", stating: "Scientists agree climate change does pose a threat to humans and ecosystems, but they do not envision that climate change will obliterate all people from the planet."
Cyberattack
Cyberattacks have the potential to destroy everything from personal data to electric grids. Christine Peterson, co-founder and past president of the Foresight Institute, believes a cyberattack on electric grids has the potential to be a catastrophic risk. She notes that little has been done to mitigate such risks, and that mitigation could take several decades of readjustment.
Environmental disaster
An environmental or ecological disaster, such as world crop failure and collapse of ecosystem services, could be induced by the present trends of overpopulation, economic development, and non-sustainable agriculture. Most environmental scenarios involve one or more of the following: Holocene extinction event, scarcity of water that could lead to approximately half the Earth's population being without safe drinking water, pollinator decline, overfishing, massive deforestation, desertification, climate change, or massive water pollution episodes. Detected in the early 21st century, a threat in this direction is colony collapse disorder, a phenomenon that might foreshadow the imminent extinction of the Western honeybee. As the bee plays a vital role in pollination, its extinction would severely disrupt the food chain.
A May 2020 analysis published in Scientific Reports found that if deforestation and resource consumption continue at current rates they could culminate in a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" within the next several decades. The study says humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest." The authors also note that "while violent events, such as global war or natural catastrophic events, are of immediate concern to everyone, a relatively slow consumption of the planetary resources may be not perceived as strongly as a mortal danger for the human civilization."
Evolution
Some scenarios envision that humans could use genetic engineering or technological modifications to split into normal humans and a new species – posthumans. Such a species could be fundamentally different from any previous life form on Earth, e.g. by merging humans with technological systems. Such scenarios assess the risk that the "old" human species will be outcompeted and driven to extinction by the new, posthuman entity.
Experimental accident
Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System. Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. All of these worries have so far proven unfounded.
For example, scientists worried that the first nuclear test might ignite the atmosphere. Early in the development of thermonuclear weapons there were some concerns that a fusion reaction could "ignite" the atmosphere in a chain reaction that would engulf Earth. Calculations showed the energy would dissipate far too quickly to sustain a reaction.
Others worried that the RHIC or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. It has been pointed out that much more energetic collisions take place currently in Earth's atmosphere.
Though these particular concerns have been challenged, the general concern about new experiments remains.
Mineral resource exhaustion
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and the paradigm founder of ecological economics, has argued that the carrying capacity of Earth—that is, Earth's capacity to sustain human populations and consumption levels—is bound to decrease sometime in the future as Earth's finite stock of mineral resources is presently being extracted and put to use; and consequently, that the world economy as a whole is heading towards an inevitable future collapse, leading to the demise of human civilization itself. Ecological economist and steady-state theorist Herman Daly, a student of Georgescu-Roegen, has propounded the same argument by asserting that "all we can do is to avoid wasting the limited capacity of creation to support present and future life [on Earth]."
Ever since Georgescu-Roegen and Daly published these views, various scholars in the field have been discussing the existential impossibility of allocating Earth's finite stock of mineral resources evenly among an unknown number of present and future generations. This number of generations is likely to remain unknown to us, as there is little or no way of knowing in advance if or when mankind will ultimately face extinction. In effect, any conceivable intertemporal allocation of the stock will inevitably end up with universal economic decline at some future point.
Nanotechnology
Many nanoscale technologies are in development or currently in use. The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision. Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories of desktop proportions. When nanofactories gain the ability to produce other nanofactories, production may only be limited by relatively abundant factors such as input materials, energy and software.
Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons. Being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilities.
Chris Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories:
From augmenting the development of other technologies such as AI and biotechnology.
By enabling mass-production of potentially dangerous products that cause risk dynamics (such as arms races) depending on how they are used.
From uncontrolled self-perpetuating processes with destructive effects.
Several researchers say the bulk of risk from nanotechnology comes from the potential to lead to war, arms races, and destructive global government. Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races):
A large number of players may be tempted to enter the race since the threshold for doing so is low;
The ability to make weapons with molecular manufacturing will be cheap and easy to hide;
Therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;
Molecular manufacturing may reduce dependency on international trade, a potential peace-promoting factor;
Wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield.
Since self-regulation by all state and non-state actors seems hard to achieve, measures to mitigate war-related risks have mainly been proposed in the area of international cooperation. International infrastructure may be expanded giving more sovereignty to the international level. This could help coordinate efforts for arms control. International institutions dedicated specifically to nanotechnology (perhaps analogously to the International Atomic Energy Agency IAEA) or general arms control may also be designed. One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour. The Center for Responsible Nanotechnology also suggests some technical restrictions. Improved transparency regarding technological capabilities may be another important facilitator for arms-control.
Gray goo is another catastrophic scenario, which was proposed by Eric Drexler in his 1986 book Engines of Creation and has been a theme in mainstream media and fiction. This scenario involves tiny self-replicating robots that consume the entire biosphere (ecophagy) using it as a source of energy and building blocks. Nowadays, however, nanotech experts—including Drexler—discredit the scenario. According to Phoenix, a "so-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".
Nuclear war
Some fear a hypothetical World War III could cause the annihilation of humankind. Nuclear war could yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have immediate, short-term, and long-term effects on the climate, potentially causing cold weather known as a "nuclear winter", with reduced sunlight and photosynthesis that may generate significant upheaval in advanced civilizations. However, while popular perception sometimes takes nuclear war as "the end of the world", experts assign low probability to human extinction from nuclear war. In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million directly, mostly in the United States, Europe and Russia, and maybe several hundred million more through follow-up consequences in those same areas. In 2008, a survey by the Future of Humanity Institute estimated a 4% probability of extinction from warfare by 2100, with a 1% chance of extinction from nuclear warfare.
The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Mistakenly launching a nuclear attack in response to a false alarm is one possible scenario; this nearly happened during the 1983 Soviet nuclear false alarm incident. Although the probability of a nuclear war per year is slim, Professor Martin Hellman has described it as inevitable in the long run; unless the probability approaches zero, inevitably there will come a day when civilization's luck runs out. During the Cuban Missile Crisis, U.S. president John F. Kennedy estimated the odds of nuclear war at "somewhere between one out of three and even". The United States and Russia have a combined arsenal of 14,700 nuclear weapons, and there is an estimated total of 15,700 nuclear weapons in existence worldwide.
World population and agricultural crisis
The Global Footprint Network estimates that current activity uses resources twice as fast as they can be naturally replenished, and that growing human population and increased consumption pose the risk of resource depletion and a concomitant population crash. Evidence suggests birth rates may be rising in the 21st century in the developed world. Projections vary; researcher Hans Rosling has projected population growth to start to plateau around 11 billion, and then to slowly grow or possibly even shrink thereafter. A 2014 study published in Science asserts that the human population will grow to around 11 billion by 2100 and that growth will continue into the next century.
The 20th century saw a rapid increase in human population due to medical developments and massive increases in agricultural productivity such as the Green Revolution. Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution in agriculture helped food production to keep pace with worldwide population growth or actually enabled population growth. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place in their 1994 study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds, says the study.
The authors of this study believe the mentioned agricultural crisis will begin to have an effect on the world after 2020 and will become critical after 2050. Geologist Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
Since supplies of petroleum and natural gas are essential to modern agriculture techniques, a fall in global oil supplies (see peak oil for global concerns) could cause spiking food prices and unprecedented famine in the coming decades.
Wheat is humanity's third-most-produced cereal. Extant fungal infections such as Ug99 (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible and the infection spreads on the wind. Should the world's large grain-producing areas become infected, the ensuing crisis in wheat availability would lead to price spikes and shortages in other food products.
Human activity has triggered an extinction event often referred to as the sixth "mass extinction", which scientists consider a major threat to the continued existence of human civilization. The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, asserts that roughly one million species of plants and animals face extinction from human impacts such as expanding land use for industrial agriculture and livestock rearing, along with overfishing. A 1997 assessment states that over a third of Earth's land has been modified by humans, that atmospheric carbon dioxide has increased around 30 percent, that humans are the dominant source of nitrogen fixation, that humans control most of the Earth's accessible surface fresh water, and that species extinction rates may be over a hundred times faster than normal. Ecological destruction which impacts food production could produce a human population crash.
Non-anthropogenic
Of all species that have ever lived, 99% have gone extinct. Earth has experienced numerous mass extinction events, in which up to 96% of all species present at the time were eliminated. A notable example is the K-T extinction event, which killed the dinosaurs. The types of threats posed by nature have been argued to be relatively constant, though this has been disputed. A number of other astronomical threats have also been identified.
Asteroid impact
An impact event involving a near-Earth object (NEO) could result in localized or widespread destruction, including mass extinction and possibly human extinction.
Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, was about ten kilometers (six miles) in diameter and is theorized to have caused the extinction of non-avian dinosaurs at the end of the Cretaceous. No sufficiently large asteroid currently exists in an Earth-crossing orbit; however, a comet of sufficient size to cause human extinction could impact the Earth, though the annual probability may be less than 10⁻⁸. Geoscientist Brian Toon estimates that while a few people, such as "some fishermen in Costa Rica", could plausibly survive a ten-kilometer (six-mile) meteorite, a hundred-kilometer (sixty-mile) meteorite would be large enough to "incinerate everybody". Asteroids with around a 1 km diameter have impacted the Earth on average once every 500,000 years; these are probably too small to pose an extinction risk, but might kill billions of people. Larger asteroids are less common. Small near-Earth asteroids are regularly observed and can impact anywhere on the Earth, injuring local populations. As of 2013, Spaceguard estimates it has identified 95% of all NEOs over 1 km in size. None of the large "dinosaur-killer" asteroids known to Spaceguard pose a near-term threat of collision with Earth.
In April 2018, the B612 Foundation reported "It's 100 per cent certain we'll be hit [by a devastating asteroid], but we're not 100 per cent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
Planetary or interstellar collision
In April 2008, it was announced that two simulations of long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether).
Collision with or a near miss by a large object from outside the Solar System could also be catastrophic to life on Earth. Interstellar objects, including asteroids, comets, and rogue planets, are difficult to detect with current technology until they enter the Solar System, and could potentially do so at high speed.
If Mercury or a rogue planet of similar size were to collide with Earth, all life on Earth could be obliterated entirely: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 4,879 km in diameter. The destabilization of Mercury's orbit is unlikely in the foreseeable future.
A close pass by a large object could cause massive tidal forces, triggering anything from minor earthquakes to liquefaction of the Earth's crust to the Earth being torn apart and becoming a disrupted planet.
Stars and black holes are easier to detect from a longer distance, but are much more difficult to deflect. The passage of such an object through the Solar System could result in the destruction of the Earth or the Sun, for instance by direct consumption. Astronomers expect the Milky Way Galaxy to collide with the Andromeda Galaxy in about four billion years, but due to the large amount of empty space between individual stars, most stars are not expected to collide directly.
The passage of another star system into or close to the outer reaches of the Solar System could trigger a swarm of asteroid impacts as the orbit of objects in the Oort Cloud is disturbed, or objects orbiting the two stars collide. It also increases the risk of catastrophic irradiation of the Earth. Astronomers have identified fourteen stars with a 90% chance of coming within 3.26 light years of the Sun in the next few million years, and four within 1.6 light years, including HIP 85605 and Gliese 710. Observational data on nearby stars was too incomplete for a full catalog of near misses, but more data is being collected by the Gaia spacecraft.
Physics hazards
Strangelets, if they exist, might naturally be produced by strange stars and, in the case of a collision between such stars, might escape and hit the Earth. Likewise, a false vacuum collapse could be triggered elsewhere in the universe.
Gamma-ray burst
Another interstellar threat is a gamma-ray burst, typically produced by a supernova when a star collapses inward on itself and then "bounces" outward in a massive explosion. Under certain circumstances, these events are thought to produce massive bursts of gamma radiation emanating outward from the axis of rotation of the star. If such an event were to occur oriented towards the Earth, the massive amounts of gamma radiation could significantly affect the Earth's atmosphere and pose an existential threat to all life. Such a gamma-ray burst may have been the cause of the Ordovician–Silurian extinction events. This scenario is unlikely in the foreseeable future. Astroengineering projects proposed to mitigate the risk of gamma-ray bursts include shielding the Earth with ionised smartdust and star lifting of nearby high mass stars likely to explode in a supernova. A gamma-ray burst would be able to vaporize anything in its beams out to around 200 light-years.
The Sun
A powerful solar flare, solar superstorm, or solar micronova (a drastic and unusual decrease or increase in the Sun's power output) could have severe consequences for life on Earth.
The Earth will naturally become uninhabitable due to the Sun's stellar evolution within about a billion years. Around 1 billion years from now, the Sun's brightness may increase as a result of a shortage of hydrogen, and the heating of its outer layers may cause the Earth's oceans to evaporate, leaving only minor forms of life. Well before this time, the level of carbon dioxide in the atmosphere will be too low to support plant life, destroying the foundation of the food chains. See Future of the Earth.
About 7–8 billion years from now, if and when the Sun becomes a red giant, the Earth will probably be engulfed by the expanding Sun and destroyed.
Uninhabitable universe
The ultimate fate of the universe is uncertain, but the universe is likely to eventually become uninhabitable, either suddenly or gradually. If it does not collapse into the Big Crunch, then over very long time scales the heat death of the universe may render life impossible. Alternatively, the expansion of spacetime could cause the destruction of all matter in a Big Rip scenario.
If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that is known without forewarning. Such an occurrence is called vacuum decay, or the "Big Slurp".
Extraterrestrial invasion
Intelligent extraterrestrial life, if it exists, could invade Earth, either to exterminate and supplant human life, enslave it under a colonial system, exploit the planet's resources, or destroy it altogether.
Although the existence of sentient alien life has never been conclusively proven, scientists such as Carl Sagan have posited it to be very likely. Scientists consider such a scenario technically possible, but unlikely.
An article in The New York Times Magazine discussed the possible threats for humanity of intentionally sending messages aimed at extraterrestrial life into the cosmos in the context of the SETI efforts. Several public figures such as Stephen Hawking and Elon Musk have argued against sending such messages, on the grounds that extraterrestrial civilizations with technology are probably far more advanced than, and could therefore pose an existential threat to, humanity.
Invasion by microscopic life is also a possibility. In 1969, the "Extra-Terrestrial Exposure Law" was added to the United States Code of Federal Regulations (Title 14, Section 1211) in response to the possibility of biological contamination resulting from the U.S. Apollo Space Program. It was removed in 1991.
Natural pandemic
A natural pandemic could involve one or more viruses, prions, or antibiotic-resistant bacteria. Epidemic diseases that have killed millions of people include smallpox, bubonic plague, influenza, HIV/AIDS, COVID-19, cocoliztli, typhus, and cholera. Endemic tuberculosis and malaria kill over a million people each year. The sudden introduction of various European viruses decimated indigenous American populations. A deadly pandemic restricted to humans alone would be self-limiting, as its mortality would reduce the density of its target population. A pathogen with a broad host range in multiple species, however, could eventually reach even isolated human populations. U.S. officials assess that an engineered pathogen capable of "wiping out all of humanity", if left unchecked, is technically feasible and that the technical obstacles are "trivial". However, they are confident that in practice, countries would be able to "recognize and intervene effectively" to halt the spread of such a microbe and prevent human extinction.
There are numerous historical examples of pandemics that have had a devastating effect on a large number of people. The present, unprecedented scale and speed of human movement make it more difficult than ever to contain an epidemic through local quarantines, and other sources of uncertainty and the evolving nature of the risk mean natural pandemics may pose a realistic threat to human civilization.
There are several classes of argument about the likelihood of pandemics. One stems from history, where the limited size of historical pandemics is evidence that larger pandemics are unlikely. This argument has been disputed on grounds including the changing risk due to changing population and behavioral patterns among humans, the limited historical record, and the existence of an anthropic bias.
Another argument is based on an evolutionary model that predicts that naturally evolving pathogens will ultimately develop an upper limit to their virulence. This is because pathogens with high enough virulence quickly kill their hosts and reduce their chances of spreading the infection to new hosts or carriers. This model has limits, however, because the fitness advantage of limited virulence is primarily a function of a limited number of hosts. Any pathogen with high virulence, a high transmission rate, and a long incubation time may already have caused a catastrophic pandemic before its virulence is ultimately limited through natural selection. Additionally, a pathogen that infects humans as a secondary host and primarily infects another species (a zoonosis) has no constraints on its virulence in people, since the accidental secondary infections do not affect its evolution. Lastly, in models where virulence level and rate of transmission are related, high levels of virulence can evolve. Virulence is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated. The size of the host population and competition between different strains of pathogens can also alter virulence.
Neither of these arguments applies to bioengineered pathogens, which pose entirely different pandemic risks. Experts have concluded that "Developments in science and technology could significantly ease the development and use of high consequence biological weapons", and that these "highly virulent and highly transmissible [bio-engineered pathogens] represent new potential pandemic threats".
Natural climate change
Climate change refers to a lasting change in the Earth's climate. The climate has ranged from ice ages to warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth" when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, before the climate became more stable near the end of the last major ice age. However, abrupt climate change on the decade time scale has occurred regionally. A natural variation into a new climate regime (colder or hotter) could pose a threat to civilization.
In the history of the Earth, many ice ages are known to have occurred. An ice age would have a serious impact on civilization because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. Currently, the world is in an interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved later than this. Scientists do not predict that a natural ice age will occur anytime soon. The amount of heat-trapping gases emitted into Earth's oceans and atmosphere is expected to prevent the next ice age, which would otherwise begin in around 50,000 years, and likely further glacial cycles after that.
On a long time scale, natural shifts such as Milankovitch cycles (hypothesized quaternary climatic oscillations) could create unknown climate variability and change.
Volcanism
A geological event such as massive flood basalt, volcanism, or the eruption of a supervolcano could lead to a so-called volcanic winter, similar to a nuclear winter. Human extinction is a possibility. One such event, the Toba eruption, occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory, the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years.
A massive volcanic eruption would eject extraordinary volumes of volcanic dust and toxic and greenhouse gases into the atmosphere, with serious effects on global climate (towards extreme global cooling: volcanic winter if short-term, and ice age if long-term) or global warming (if greenhouse gases were to prevail).
When the supervolcano at Yellowstone last erupted 640,000 years ago, the thinnest layers of the ash ejected from the caldera spread over most of the United States west of the Mississippi River and part of northeastern Mexico. The magma covered much of what is now Yellowstone National Park and extended beyond, covering much of the ground from the Yellowstone River in the east to Idaho Falls in the west, with some of the flows extending north beyond Mammoth Springs.
According to a recent study, if the Yellowstone caldera erupted again as a supervolcano, an ash layer one to three millimeters thick could be deposited as far away as New York, enough to "reduce traction on roads and runways, short out electrical transformers and cause respiratory problems". There would be centimeters of thickness over much of the U.S. Midwest, enough to disrupt crops and livestock, especially if it happened at a critical time in the growing season. The worst-affected city would likely be Billings, Montana, population 109,000, which the model predicted would be covered with ash estimated as 1.03 to 1.8 meters thick.
The main long-term effect is through global climate change, which reduces the temperature globally by about 5–15 °C for a decade, together with the direct effects of ash deposits on crops. A large supervolcano like Toba would deposit one or two meters of ash over an area of several million square kilometers (1,000 cubic kilometers of ash is equivalent to a one-meter thickness spread over a million square kilometers). If that happened in a densely populated agricultural area, such as India, it could destroy one or two seasons of crops for two billion people.
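The parenthetical volume-to-thickness conversion can be checked with a simple unit calculation (shown here only as a check of the figure quoted above, not as an additional source estimate):

\text{thickness} = \frac{\text{volume}}{\text{area}} = \frac{1000\ \text{km}^3}{10^{6}\ \text{km}^2} = 10^{-3}\ \text{km} = 1\ \text{m}

Scaling the same calculation to Toba-sized deposits of one to two meters over several million square kilometers implies an erupted ash volume of several thousand cubic kilometers.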
However, Yellowstone shows no signs of a supereruption at present, and it is not certain that a future supereruption will occur.
Research published in 2011 finds evidence that massive volcanic eruptions caused large-scale coal combustion, supporting models for the significant generation of greenhouse gases. Researchers have suggested that massive volcanic eruptions through coal beds in Siberia would generate significant greenhouse gases and cause a runaway greenhouse effect. Massive eruptions can also throw enough pyroclastic debris and other material into the atmosphere to partially block out the sun and cause a volcanic winter, as happened on a smaller scale in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred kilometers (or miles) from the eruption, and perhaps billions of deaths worldwide, due to the failure of the monsoons resulting in major crop failures and starvation on a profound scale.
A much more speculative concept is the verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.
See also
Great Filter
Notes
References
Works cited
Existential risk
Man-made disasters
International responses to disasters
Doomsday scenarios
Apocalyptic fiction | 0.777087 | 0.994014 | 0.772435 |
Definitions of education | Definitions of education aim to describe the essential features of education. A great variety of definitions has been proposed. There is wide agreement that education involves, among other things, the transmission of knowledge. But there are deep disagreements about its exact nature and characteristics. Some definitions see education as a process exemplified in events like schooling, teaching, and learning. Others understand it not as a process but as the product of such processes, i.e. as what characterizes educated persons. Various attempts have been made to give precise definitions listing its necessary and sufficient conditions. The failure of such attempts, often in the form of being unable to account for various counterexamples, has led many theorists to adopt less precise conceptions based on family resemblance. On this view, different forms of education are similar by having overlapping features but there is no set of features shared by all forms. Clarity about the nature of education is central for various issues, for example, to coherently talk about the subject and to determine how to achieve and measure it.
An important discussion in the academic literature is about whether evaluative aspects are already part of the definition of education and, if so, what roles they play. Thin definitions are value-neutral while thick definitions include evaluative and normative components, for example, by holding that education implies that the person educated has changed for the better. Descriptive conceptions try to capture how the term "education" is used by competent speakers. Prescriptive conceptions, on the other hand, stipulate what education should be like or what constitutes good education.
Thick and prescriptive conceptions often characterize education in relation to the goals it aims to realize. These goals are sometimes divided into epistemic goods, like knowledge and understanding, skills, like rationality and critical thinking, and character traits, like kindness and honesty. Some theorists define education in relation to an overarching purpose, like socialization or helping the learner lead a good life. The more specific aims can then be understood as means to achieve this overarching purpose. Various researchers emphasize the role of critical thinking to distinguish education from indoctrination.
Traditional accounts of education characterize it mainly from the teacher's perspective, usually by describing it as a process in which they transmit knowledge and skills to their students. Student-centered definitions, on the other hand, emphasize the student's experience, for example, based on how education transforms and enriches their subsequent experience. Some conceptions take both the teacher's and the student's point of view into account by focusing on their shared experience of a common world.
General characteristics, disagreements, and importance
Definitions of education try to determine the essential features of education. Many general characteristics have been ascribed to education. However, there are several disagreements concerning its exact definition and a great variety of definitions have been proposed by theorists belonging to diverse fields. There is wide agreement that education is a purposeful activity directed at achieving certain aims. In this sense, education involves the transmission of knowledge. But it is often pointed out that this factor alone is not sufficient and needs to be accompanied by other factors, such as the acquisition of practical skills or instilling moral character traits.
Many definitions see education as a task or a process. In this regard, the conception of education is based on what happens during events like schooling, training, instructing, teaching, and learning. This process may in turn be understood either from the perspective of the teacher or with a focus on the student's experience instead. However, other theorists focus mainly on education as an achievement, a state, or a product that results as a consequence of the process of being educated. Such approaches are usually based on the features, mental states, and character traits exemplified by educated persons. In this regard, being educated implies having an encompassing familiarity with various topics. So one does not become an educated person just by undergoing specialized training in one specific field. Besides these two meanings, the term "education" may also refer to the academic field studying the methods and processes involved in teaching and learning or to social institutions employing these processes.
Education is usually understood as a very general term that has a wide family of diverse instances. Nonetheless, some attempts have been made to give a precise definition of the essential features shared by all forms of education. An influential early attempt was made by R. S. Peters in his book "Ethics and Education", where he suggests three criteria that constitute the necessary and sufficient conditions of education: (1) it is concerned with the transmission of knowledge and understanding; (2) this transmission is worthwhile and (3) done in a morally appropriate manner in tune with the student's interests. This definition has received a lot of criticism in the academic literature. While there is wide agreement that many forms of education fall under these three criteria, opponents have rejected that they are true for all of them by providing various counterexamples. For example, in regard to the third criterion, it may be sometimes necessary to educate children about certain facts even though they are not interested in learning about these facts. And regarding the second criterion, not everyone agrees that education is always desirable. Because of the various difficulties and counterexamples with this and other precise definitions, some theorists have argued that there is no one true definition of education. In this regard, the different forms of education may be seen as a group of loosely connected topics and "different groups within a society may have differing legitimate conceptions of education".
Some theorists have responded to this by defining education in terms of family resemblance. This is to say that there is no one precise set of features shared by all forms of education and only by them. Instead, there is a group of many features characteristic of education. Some of these features apply to one form of education while slightly different ones are exemplified by another form of education. In this sense, any two forms of education are similar and their characteristic features overlap without being identical. This is closely related to the idea that words are like tools used in language games. On this view, there may be various language games or contexts in which the term "education" is used, in each one with a slightly different meaning. Following this line of thought, it has been suggested that definitions of education should limit themselves to a specific context without claiming to be true for all possible uses of the term. The most paradigmatic form of education takes place in schools. Many researchers have this type of education specifically in mind, and some define it explicitly as the discipline investigating the methods of teaching and learning in a formal setting, like schools. But in its widest sense, it encompasses many other forms as well, including informal and non-formal education.
Clarity about the nature of education is important for various concerns. In a general sense, it is needed to identify and coherently talk about education. In this regard, all the subsequent academic discourse on topics like the aims of education, the psychology of education, or the role of education in society, depends on this issue. For example, when trying to determine what good education is like, one has to already assume some idea of what education is to decide what constitutes a good instance. It is also central for questions about how to achieve and measure the results of educational processes. The importance of providing an explicit definition is further increased by the fact that education initially seems to be a straightforward and common-sense concept that people usually use outside the academic discourse without much controversy. This impression hides various conceptual confusions and disagreements that only come to light in the attempt to make explicit the common pre-understanding associated with the term.
Many concrete definitions of education have been proposed. According to John Dewey, education involves the transmission of habits, ideals, hopes, expectations, standards, and opinions from one generation to the next. R. S. Peters revised his earlier definitions and understands education in his later philosophy as a form of initiation in which teachers share the experience of a common world with their students and convey worthwhile forms of thought and awareness to them. For Lawrence Cremin, "[e]ducation is the deliberate, systematic, and sustained effort to transmit, provoke or acquire knowledge, values, attitudes, skills or sensibilities as well as any learning that results from the effort". Another definition sees education as "a serious and sustained programme of learning, for the benefit of people qua people rather than only qua role-fillers or functionaries, above the level of what people might pick up for themselves in their daily lives". The English word "education" has its etymological root in the Latin word "educare", which means "to train", "to mold", or "to lead out".
Role of values
There are various disagreements about whether evaluative and normative aspects should already be included in the definition of education and, if so, what roles they play. An important distinction in this regard is between thin and thick definitions. Thin definitions aim to provide a value-neutral description of what education is, independent of whether and to whom it is useful. Thick definitions, on the other hand, include various evaluative and normative components in their characterization, for example, the claim that education implies that the person educated has changed for the better. Otherwise, the process would not deserve the label "education". However, different thick definitions of education may still disagree with each other on what kind of values are involved and in which sense the change in question is an improvement. A closely related distinction is that between descriptive and prescriptive or programmatic conceptions. Descriptive definitions aim to provide a description of how the term "education" is actually used. They contrast with prescriptive definitions, which stipulate what education should be like or what constitutes good education. Some theorists also include an additional category for stipulative definitions, which are sometimes used by individual researchers as shortcuts for what they mean when they use the term without claiming that these are the essential features commonly associated with all forms of education. Thick and prescriptive conceptions are closely related to the aims of education in the sense that they understand education as a process aimed at a certain valuable goal that constitutes an improvement of the learner. Such improvements are often understood in terms of mental states fostered by the educational process.
Role of aims
Many conceptions of education, in particular thick and prescriptive accounts, base their characterizations on the aims of education, i.e. in relation to the purpose that the process of education tries to realize. The transmission of knowledge has a central role in this regard, but most accounts include other aims as well, such as fostering the student's values, attitudes, skills, and sensibilities. However, it has been argued that picking up certain skills and know-how without the corresponding knowledge and conceptual scheme does not constitute education, strictly speaking. But the same limitation may also be true for pure knowledge that is not accompanied by positive practical effects on the individual's life. The various specific aims are sometimes divided into epistemic goods, skills, and character traits. Examples of epistemic goods are truth, knowledge, and understanding. Skill-based accounts, on the other hand, hold that the goal of education is to develop skills like rationality and critical thinking. For character-based accounts, its main purpose is to foster certain character traits or virtues, like kindness, justice, and honesty. Some theorists try to provide a wide overarching framework. The various specific goals are then seen as aims of education to the extent that they serve this overarching purpose. When this purpose is understood in relation to society, education may be defined as the process of transmitting, from one generation to the next, the accumulated knowledge and skills needed to function as a regular citizen in a specific society. In this regard, education is equivalent to socialization or enculturation. More liberal or person-centered definitions, on the other hand, see the overarching purpose in relation to the individual learner instead: education is to help them develop their potential in order to lead a good life or the life they wish to lead, independently of the social ramifications of this process.
Various conceptions emphasize the aim of critical thinking in order to differentiate education from indoctrination. Critical thinking is a form of thinking that is reasonable, reflective, careful, and focused on determining what to believe or how to act. It includes the metacognitive component of monitoring and assessing its achievements in regard to the standards of rationality and clarity. Many theorists hold that fostering this disposition distinguishes education from indoctrination, which only tries to instill beliefs in the student's mind without being interested in their evidential status or fostering the ability to question those beliefs. But not all researchers accept this hard distinction. A few hold that, at least in the early stages of education, some forms of indoctrination are necessary until the child's mind has developed sufficiently to assess and evaluate reasons for and against particular claims and thus employ critical thinking. In this regard, critical thinking may still be an important aim of education but not an essential feature characterizing all forms of education.
Teacher- or student-centered
Most conceptions of education either explicitly or implicitly hold that education involves the relation between teacher and student. Some theorists give their characterization mainly from the teacher's perspective, usually emphasizing the act of transmitting knowledge or other skills, while others focus more on the learning experience of the student. The teacher-centered perspective on education is often seen as the traditional position. An influential example is found in the early philosophy of R. S. Peters. In it, he considers education to be the transmission of knowledge and skills while emphasizing that teachers should achieve this in a morally appropriate manner that reflects the student's interests. A student-centered definition is given by John Dewey, who sees education as the "reconstruction or reorganization of experience which adds to the meaning of experience, and which increases the ability to direct the course of subsequent experience". This way, the student's future experience is enriched and the student thereby undergoes a form of growth. Opponents of this conception have criticized its lack of a normative component. For example, the increase of undesirable abilities, like learning how to become an expert burglar, should not be understood as a form of education even though it is a reorganization of experience that directs the course of subsequent experience.
Other theories aim to provide a more encompassing perspective that takes both the teacher's and the student's point of view into account. Peters, in response to the criticism of his initially proposed definition, has changed his conception of education by giving a wider and less precise definition, seeing it as a type of initiation in which worthwhile forms of thought and awareness are conveyed from teachers to their students. This is based on the idea that both teachers and students participate in the shared experience of a common world. The teachers are more familiar with this world and try to guide the students by passing on their knowledge and understanding. Ideally, this process is motivated by curiosity and excitement on the part of the students to discover what there is and what it is like so that they may one day themselves become authorities on the subject. This conception can be used for answering questions about the contents of the curriculum or what should be taught: whatever the students need most for discovering and participating in the common world.
The shared perspective of both teachers and students is also emphasized by Paulo Freire. In his influential Pedagogy of the Oppressed, he rejects teacher-centered definitions, many of which characterize education using what he refers to as the banking model of education. According to the banking model, students are seen as empty vessels in analogy to piggy banks. It is the role of the teacher to deposit knowledge into the passive students, thereby shaping their character and outlook on the world. Instead, Freire favors a libertarian conception of education. On this view, teachers and students work together in a common activity of posing and solving problems. The goal of this process is to discover a shared and interactive reality, not by consuming ideas created by others but by producing and acting upon one's own ideas. Students and teachers are co-investigators of reality and the role of the teacher is to guide this process by representing the universe instead of merely lecturing about it.
References
Definitions
Education
Education studies
Philosophy of education | 0.781396 | 0.988518 | 0.772424 |
Extractivism | Extractivism is the removal of natural resources, particularly for export with minimal processing. This economic model is common throughout the Global South and the Arctic region, but also occurs in some sacrifice zones in the Global North, as in European extractivism. The concept was coined in Portuguese as "extractivismo" in 1996 to describe the for-profit exploitation of forest resources in Brazil.
Many actors are involved in the process of extractivism. Transnational corporations (TNCs) are the main players, but they are not the only ones: governments and some (chiefly economic) community members are also involved. Trends have demonstrated that countries often do not extract their own resources; extraction is often led from abroad. These interactions have contributed to extractivism being rooted in the hegemonic order of global capitalism. Extractivism is controversial because it exists at the intersection where economic growth and environmental protection meet. This intersection is known as the green economy. Extractivism has evolved in the wake of neo-liberal economic transitions to become a potential avenue for development, chiefly through stabilizing growth rates and increasing direct foreign investment.
However, while these short-term economic benefits can be substantial, extractivism as a development model is often critiqued for failing to deliver the improved living conditions it promises and for failing to work collaboratively with existing programs, thereby inflicting environmental, social, and political consequences.
Environmental concerns of extractivism include climate change, soil depletion, deforestation, loss of food sovereignty, declining biodiversity, and contamination of freshwater. Social and political implications include violation of human rights, unsafe labour conditions, unequal wealth distribution, and conflict. As a result, extractivism remains a prominent debate in policy-related discourse: while it sometimes delivers high economic gains in the short term, it also poses social and environmental dangers. Case studies in Latin America demonstrate these policy gaps.
Background
Definition
Extractivism is the removal of large quantities of raw or natural materials, particularly for export with minimal processing. The concept emerged in the 1990s (as extractivismo) to describe resource appropriation for export in Latin America. Scholarly work on extractivism has since applied the concept to other geographical areas and also to more abstract forms of extraction such as the digital and intellectual realms or to finance. Regardless of its range of application, the concept of extractivism may be essentially conceived as "a particular way of thinking and the properties and practices organized towards the goal of maximizing benefit through extraction, which brings in its wake violence and destruction". Guido Pascual Galafassi and Lorena Natalia Riffo see the concept as a continuation of Galeano's Open Veins of Latin America (1971).
Neo-extractivism
Extractivism has been promoted as a potential development path in which raw materials are exported and revenues are used to improve people's living conditions. This approach is called “neo-extractivism”. This transition to neo-liberal economies is rooted in a nation’s subordination to an emphasis on free trade. In contrast to older forms of extractivism, neo-extractivism regulates the allotment of resources and their revenue, pushes state-ownership of companies and raw materials, revises contracts, and raises export duties and taxes. The success of neo-extractivism is debatable as the communities at the sites of extraction rarely experience improved living conditions. More commonly, the people at these sites experience worsened living conditions, such as in the cases of extraction from Indigenous communities in Canada’s boreal forest. Neo-extractivism has similarities to older forms of extractivism and exists in the realm of neo-colonialism.
Criticism
The term and its negative connotations have drawn comments from some economists and high-ranking officials in South America. Álvaro García Linera, Vicepresident of Bolivia from 2005 to 2019 wrote:
All societies and modes of production have these different levels of processing of "raw materials" in their own way. If we conceptualize "extractivism" as the activity that only extracts raw materials (renewable or non-renewable), without introducing further transformation in labor activity, then all societies in the world, capitalist and non-capitalist, are also extractivist to a greater or lesser extent. The agrarian non-capitalist societies that processed iron, copper, gold or bronze on a greater or lesser scale, had some type of specialized extractive activity, complemented in some cases with the simple or complex processing of that raw material. Even the societies that lived or live from the extraction of wood and chestnut along with hunting and fishing, maintain a type of extractive activity of renewable natural resources.
The concept of extractivism has been criticized by Nicolás Eyzaguirre, Chilean Minister of Finance between 2000 and 2006, who cites the mining sector of Australia as a successful example of a "deep and sophisticated value chain", with high human capital, self-produced machinery, and associated top-tier scientific research. For the case of Chile, Eyzaguirre argues that rentierism, and not extractivism, should be the concept of concern.
History
Extractivism has been occurring for over 500 years. During colonization, large quantities of natural resources were exported from colonies in Africa, Asia and the Americas to meet the demands of metropolitan centres.
According to Rafael Domínguez, the Chilean government coalition Concertación, which ruled Chile from 1990 to 2010, pioneered "neo-extractivism".
Philosophy
Extractivism is a result of colonial thought which places humans above other life forms. It is rooted in the belief that taking from the earth will create abundance. Many Indigenous scholars argue that extractivism opposes their philosophy of living in balance with the earth and other life forms in order to create abundance. Leanne Betasamosake Simpson, a Michi Saagiig Nishnaabeg scholar and writer, compares these ideas of destruction versus regeneration in her book A Short History of the Blockade. She references the Trent–Severn Waterway, a dam in Canada that caused major loss of fish, a major source of food for her people. She quotes Freda Huson in saying, “Our people’s belief is that we are part of the land. The land is not separate from us. The land sustains us. And if we don’t take care of her, she won’t be able to sustain us, and we as a generation of people will die.” She also defines extractivism in another work, stating it is “stealing. It’s taking something, whether it’s a process, an object, a gift, or a person, out of the relationships that give it meaning, and placing it in a nonrelational context for the purposes of accumulation.” The colonial action of theft goes beyond only extracting from the earth. This philosophy of entitlement is the cause behind colonization itself, and we are watching the continuation of theft in real-time through practices such as extractivism. Naomi Klein also touches on this in her book This Changes Everything: Capitalism vs. The Climate. She writes, "Extractivism ran rampant under colonialism because relating to the world as a frontier of conquest- rather than a home- fosters this particular brand of irresponsibility. The colonial mind nurtures the belief that there is always somewhere else to go to and exploit once the current site of extraction has been exhausted."
Actors
Transnational corporations (TNCs) are a primary actor in neo-extractivism. Originally, as TNCs began to explore raw material extraction in developing countries they were applauded for taking a risk to extract high-demand resources. TNCs were able to navigate their way into a position where they maintained large amounts of control over various extraction-based industries. This success is credited to the oftentimes weak governance structure of the resource dependent economies where extraction is taking place. Through complex arrangements and agreements, resources have slowly become denationalized. As a result of this, the government has taken a “hands-off” approach, awarding most of the control over resource enclaves and the social responsibility that accompanies them to TNCs. However, the government still plays an important role in leading development by determining which TNCs they allow to extract their resources and how thorough they are when it comes to enforcing certain standards of social responsibility.
Resources and techniques
Some resources that are obtained through extraction include but are not limited to gold, diamonds, oil, lumber, water and food. This occurs through techniques such as mining, drilling and deforestation. Resources are typically extracted from developing countries as a raw material. This means that it has not been processed or has been processed only slightly. These materials then travel elsewhere to be turned into goods that are for sale on the world market. An example of this would be gold that is mined as a raw mineral and later in the supply chain manufactured into jewellery.
Impacts of extractivism
Economic benefits
Neo-extractivism is seen as an opportunity for successful development in many areas of the developing world. Demand for extracted resources on the global market has allowed this industry to expand. Since the year 2000, there has been a substantial rise in global demand and value for raw materials, which has contributed to steadily high prices. Neo-extractivism has therefore been seen as a tool for economically advancing developing countries that are rich in natural resources by participating in this market.
It is argued that the emergence of this industry in the neo-liberal context has allowed extractivism to contribute to stabilizing growth rates, increasing direct foreign investment, diversifying local economies, expanding the middle class and reducing poverty. This is done by using surplus revenue to invest in development projects such as expanding social programs and infrastructure. Overall, extraction based economies are seen as long-term development projects that guarantee a robust economic foundation. It has created a new hegemonic order that closely intertwines with the dominant capitalist system of the world. The green economy has emerged as an economic model in response to the arising tensions between the economy and the environment. Extractivism is one of the many issues that exist at this intersection between the economy and the environment.
Increasingly, policy tools such as corporate social responsibility mechanisms and increased government involvement are being used to mitigate the negative implications of neo-extractivism and make it a more effective development model.
Environmental consequences
One of the main consequences of extractivism is the toll that it takes on the natural environment. Due to the scale on which extraction takes place, several renewable resources are becoming non-renewable. This means that the environment is incapable of renewing its resources as quickly as the rate at which they are extracted. It is often falsely assumed that technological advancements will enable resources to renew more effectively and as a result make raw material extraction more sustainable. The environment often must compensate for overproduction driven by high demand. Global climate change, soil depletion, loss of biodiversity and contamination of fresh water are some of the environmental issues that extractivism contributes to. As well, extraction produces large amounts of waste, such as toxic chemicals and heavy metals, that are difficult to dispose of properly. To what degree humans have a right to take from the environment for developmental purposes is a topic that continues to be debated.
Social impacts
In addition to the environmental consequences of extractivism, social impacts arise as well. Local communities are often opposed to extractivism occurring. This is because it often uproots the communities or causes environmental impacts that will affect their quality of life. Indigenous communities tend to be particularly susceptible to the social impacts of extractivism. Indigenous peoples rely on their environment to sustain their lifestyles as well as connect with the land in spiritual ways. Extractivist policies and practices heavily degrade the land, as explained above. This changes game populations and migration patterns for animals, pollutes rivers, and much more. Doing so does not allow Indigenous populations to practice their culture and ways of life, because the environment they depend on to hunt, fish, and so on is drastically changed. In addition, this destruction hinders the practice of Indigenous culture and the creation of knowledge, making it more difficult for Indigenous individuals to pass down their traditions to future generations.
While employment opportunities are brought to local communities as a pillar of neo-extractivism projects, the conditions are often unsafe for workers. TNCs can take advantage of more lenient health and safety standards in developing countries and pay inadequate wages in order to maximize their profits. As well, foreigners usually fill the highest-paying managerial positions, leaving local community members to do the most labour-intensive jobs. Frequently, the enclaves where extractivism occurs are distanced from government involvement, therefore allowing them to avoid being subjected to the enforcement of national laws to protect citizens. This can result in widespread human rights violations. It is argued that prolonged social transformation cannot thrive on export-dependent extractivism alone, therefore making neo-extractivism a potentially flawed development method on its own.
Political implications
Because the state is a prominent actor in the extractivism process, extractivism has several political implications. It pushes the state into a position where it is one of the central actors involved in development, even though recent decades have seen a shift to civil society organizations. As well, the relationship between the state providing the natural resources and the TNCs extracting them can be politically complex, sometimes leading to corruption. Likewise, as a result of government involvement, this process as a development project becomes politicized. The increasing demand for raw materials also increases the likelihood of conflict breaking out over natural resources.
Extractivism near or on Indigenous land without the permission of Indigenous peoples begins to threaten the land based self-determination of Indigenous groups. Conflicts between Indigenous peoples, corporations and governments are occurring around the world. Because many of the extractivist practices take place where Indigenous communities are located, the conflicts are making these landscapes politicized and contested. The conflicts are driven because Indigenous lives are put in jeopardy when they are dispossessed, when they lose their livelihoods, when their water and land is polluted and the environment is commodified.
Anti-extractivist activism
Because extractivism so often has negative implications for the Indigenous communities it affects, there is much resistance and activism on their end. For example, from the 1980s through today we can see examples of "extrACTIVISM", a term coined by author Anna J. Willow. In protest of a logging project on their land, the Penan of Bornean Malaysia used civil disobedience as a means to end it, and succeeded. In 1989, Kayapó peoples stood up against the building of dams on their land in Pará, Brazil, causing the funding to be stopped and successfully ending the project. The U'wa people of Colombia ended oil extraction on their land through blockade activism from the 1990s through 2000. More recently, the Keystone XL pipeline extension, which would have run through Canada and the U.S., was halted due in part to Indigenous activism; its construction officially ended in June 2021. Despite the difficulties they face in protesting these projects, their resilience continues to flourish, and they often succeed in ending extractivism on their land. Another example of this activism is the Ponca tribe planting corn in the path of the Keystone XL pipeline as an act of resistance. Aside from active protesting, Tribal sovereignty is essential in their goal of protecting their own land.
Case studies
Yanacocha gold mine
The Yanacocha gold mine in Cajamarca, Peru, is an extractivist project. In 1993, a joint venture between Newmont Corp and Compañia de Minas Buenaventura began the project. The government favoured this project and saw it as an opportunity for development therefore giving large amounts of control to the mining companies. Local communities expressed concerns about water contamination. The corporations promised the creation of 7,000 jobs and development projects that would be beneficial for the community. The TNC said they would abandon the project if they could not do so on socially and economically responsible terms. However, this guarantee failed to be actualized and violent conflict broke out as a result of chemical spills and environmental degradation. Regional and national governments had opposing opinions on the project and protests broke out injuring more than 20 people and killing five. The regional government sided with the community protestors, rejecting the Cajamarca mining project, but in the end, the national government overrode the concerns of the community and pushed the mine forward, leaving the task of social responsibility to the corporations.
Ecuador: oil exploitation in Yasuni National Park
Many Amazonian communities in Ecuador are opposed to the national government's endorsement of oil extraction in Yasuni National Park. The Spanish corporation Repsol S.A. and the American corporation Chevron-Texaco have both attempted to extract oil from the reserves in Yasuni. Various civil society organizations fought against the implementation of this project because of the park's valuable biodiversity. In 2007, under President Correa, Ecuador launched the Yasuní-ITT Initiative, which proposed that the international community compensate Ecuador $3.5 billion for the lost income that the oil reserve would have generated, in exchange for protecting the forest. The initiative raised only $13 million and was cancelled in 2013. Drilling began in 2016, and by 2023 several oil platforms had been developed, with over 100 oil wells in production.
See also
Agroextractivism
Dispossession of land
Eutrophication
Exploitation of natural resources
Indigenous land rights
Power politics
Slavery
Toxic colonialism
No Cav
Notes
References
Bibliography
Acosta, Alberto. “Extractivism and neo-extractivism: two sides of the same curse.” Beyond Development: Alternative Visions From Latin America (2013): 61–87.
Natural resources | 0.78257 | 0.987028 | 0.772419 |
Nagoya Protocol | The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity, also known as the Nagoya Protocol on Access and Benefit Sharing (ABS), is a 2010 supplementary agreement to the 1992 Convention on Biological Diversity (CBD). Its aim is the implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources, thereby contributing to the conservation and sustainable use of biodiversity. It sets out obligations for its contracting parties to take measures in relation to access to genetic resources, benefit-sharing and compliance.
The protocol was adopted on 29 October 2010 in Nagoya, Japan, and entered into force on 12 October 2014. It has been ratified by 137 parties, which includes 136 UN member states and the European Union.
Concerns have been expressed that the added bureaucracy and legislation could be damaging to the monitoring and collection of biodiversity, to conservation, to the international response to infectious diseases, and to research.
Aims and scope
The Nagoya Protocol applies to genetic resources that are covered by the CBD, and to the benefits arising from their utilization. The protocol also covers traditional knowledge associated with genetic resources that are covered by the CBD and the benefits arising from its utilization.
Its aim is the implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources, thereby contributing to the conservation and sustainable use of biodiversity.
Adoption and ratification
The protocol was adopted on 29 October 2010 in Nagoya, Japan, at the tenth meeting of the Conference of the Parties, held from 18 to 29 October 2010 and entered into force on 12 October 2014.
It has been ratified by 137 parties, which includes 136 UN member states and the European Union.
Obligations
The Nagoya Protocol sets out obligations for its contracting parties to take measures in relation to access to genetic resources, benefit-sharing and compliance.
Access obligations
Domestic-level access measures aim to:
Create legal certainty, clarity, and transparency
Provide fair and non-arbitrary rules and procedures
Establish clear rules and procedures for prior informed consent and mutually agreed terms
Provide for issuance of a permit or equivalent when access is granted
Create conditions to promote and encourage research contributing to biodiversity conservation and sustainable use
Pay due regard to cases of present or imminent emergencies that threaten human, animal, or plant health
Consider the importance of genetic resources for food and agriculture for food security
Benefit-sharing obligations
Domestic-level benefit-sharing measures aim to provide for the fair and equitable sharing of benefits arising from the utilization of genetic resources with the contracting party providing genetic resources. Utilization includes research and development on the genetic or biochemical composition of genetic resources, as well as subsequent applications and commercialization. Sharing is subject to mutually agreed terms. Benefits may be monetary or non-monetary, such as royalties and the sharing of research results.
Compliance obligations
Specific obligations to support compliance with the domestic legislation or regulatory requirements of the contracting party providing genetic resources, and contractual obligations reflected in mutually agreed terms, are a significant innovation of the Nagoya Protocol.
Contracting parties are to:
Take measures providing that genetic resources utilized within their jurisdiction have been accessed in accordance with prior informed consent, and that mutually agreed terms have been established, as required by another contracting party
Cooperate in cases of an alleged violation of another contracting party's requirements
Encourage contractual provisions on dispute resolution in mutually agreed terms
Ensure an opportunity is available to seek recourse under their legal systems when disputes arise from mutually agreed terms (MAT)
Take measures regarding access to justice
Monitor the use of genetic resources after they leave a country by designating effective checkpoints at every stage of the value chain: research, development, innovation, pre-commercialization, or commercialization
Implementation
The Nagoya Protocol's success will require effective implementation at the domestic level. A range of tools and mechanisms provided by the Nagoya Protocol will assist contracting parties including:
Establishing national focal points (NFPs) and competent national authorities (CNAs) to serve as contact points for information, to grant access, and to advise on compliance
An Access and Benefit-sharing Clearing-House to share information, such as domestic regulatory ABS requirements or information on NFPs and CNAs
Capacity-building to support key aspects of implementation.
Based on a country's self-assessment of national needs and priorities, capacity-building may help to:
Develop domestic ABS legislation to implement the Nagoya Protocol
Negotiate mutually-agreed terms
Develop in-country research capability and institutions
Raise awareness
Transfer technology
Target financial support for capacity-building and development initiatives through the GEF
Relationship to other international agreements
A growing number of Preferential Trade Agreements (PTAs) include provisions related to access to genetic resources or to the sharing of the benefits that arise out of their utilization. Indeed, some recent trade agreements, originating notably from Latin American countries, provide specific measures designed to facilitate the implementation of the ABS provisions contained in the Nagoya Protocol, including measures related to technical assistance, transparency and dispute settlement.
Criticism
There are concerns that the added bureaucracy and legislation will, overall, be damaging to the monitoring and collection of biodiversity, to conservation, to the international response to infectious diseases, and to research.
Many scientists have voiced concern over the protocol, fearing the increased red tape will hamper disease prevention and conservation efforts, and that the threat of possible imprisonment of scientists will have a chilling effect on research. Non-commercial biodiversity researchers and institutions such as natural history museums fear maintaining biological reference collections and exchanging material between institutions will become difficult.
A lack of implementation at the national level has frequently been cited as a major factor behind the failure of the Nagoya Protocol.
See also
Animal Genetic Resources for Food and Agriculture
Bermuda Principles
Cartagena Protocol on Biosafety, another supplementary protocol adopted by the CBD
High Seas Treaty (BBNJ Agreement)
WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (GRATK)
References
Further reading
External links
(CBD website)
2010 in Japan
Anti-biopiracy treaties
Biopiracy
Biodiversity
History of Nagoya
Traditional knowledge
Treaties concluded in 2010
Treaties entered into force in 2014
Treaties of Afghanistan
Treaties of Albania
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Belarus
Treaties of Belgium
Treaties of Benin
Treaties of Bhutan
Treaties of Bolivia
Treaties of Botswana
Treaties of Bulgaria
Treaties of Burkina Faso
Treaties of Myanmar
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of the Central African Republic
Treaties of the People's Republic of China
Treaties of the Comoros
Treaties of the Democratic Republic of the Congo
Treaties of the Republic of the Congo
Treaties of Croatia
Treaties of Cuba
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Djibouti
Treaties of the Dominican Republic
Treaties of Egypt
Treaties of Equatorial Guinea
Treaties of Eritrea
Treaties of Ethiopia
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of the Gambia
Treaties of Germany
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Guyana
Treaties of Honduras
Treaties of Hungary
Treaties of India
Treaties of Indonesia
Treaties of Ireland
Treaties of Ivory Coast
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Lesotho
Treaties of Liberia
Treaties of Luxembourg
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of Mali
Treaties of the Marshall Islands
Treaties of Mexico
Treaties of Moldova
Treaties of Mongolia
Treaties of Malta
Treaties of Mauritania
Treaties of Mauritius
Treaties of the Federated States of Micronesia
Treaties of Mozambique
Treaties of Namibia
Treaties of the Netherlands
Treaties of Niger
Treaties of Norway
Treaties of Pakistan
Treaties of Palau
Treaties of Panama
Treaties of Peru
Treaties of the Philippines
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Samoa
Treaties of Senegal
Treaties of Serbia
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Slovakia
Treaties of South Africa
Treaties of Spain
Treaties of Sudan
Treaties of Sweden
Treaties of Eswatini
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Tanzania
Treaties of Togo
Treaties of Tuvalu
Treaties of the United Arab Emirates
Treaties of the United Kingdom
Treaties of Uruguay
Treaties of Venezuela
Treaties of Vietnam
Treaties of Zambia
Treaties entered into by the European Union
Environmental treaties
United Nations treaties
Genetics
Convention on Biological Diversity
Phytogeography | Phytogeography (from Greek φυτόν, phytón = "plant" and γεωγραφία, geographía = "geography" meaning also distribution) or botanical geography is the branch of biogeography that is concerned with the geographic distribution of plant species and their influence on the earth's surface. Phytogeography is concerned with all aspects of plant distribution, from the controls on the distribution of individual species ranges (at both large and small scales, see species distribution) to the factors that govern the composition of entire communities and floras. Geobotany, by contrast, focuses on the geographic space's influence on plants.
Fields
Phytogeography is part of a more general science known as biogeography. Phytogeographers are concerned with patterns and process in plant distribution. Most of the major questions and kinds of approaches taken to answer such questions are held in common between phyto- and zoogeographers.
Phytogeography in the wider sense (or geobotany, in the German literature) encompasses four fields, corresponding to the aspect of focus, namely environment, flora (taxa), vegetation (plant community) and origin, respectively:
plant ecology (or mesology – however, the physiognomic-ecological approach to vegetation and biome study is also generally associated with this field);
plant geography (or phytogeography in strict sense, chorology, floristics);
plant sociology (or phytosociology, synecology – however, this field does not set aside the study of flora, as its approach to studying vegetation relies upon a fundamental unit, the plant association, which is defined on the basis of flora).
historical plant geography (or paleobotany, paleogeobotany)
Phytogeography is often divided into two main branches: ecological phytogeography and historical phytogeography. The former investigates the role of current day biotic and abiotic interactions in influencing plant distributions; the latter are concerned with historical reconstruction of the origin, dispersal, and extinction of taxa.
Overview
The basic data elements of phytogeography are occurrence records (presence or absence of a species) with operational geographic units such as political units or geographical coordinates. These data are often used to construct phytogeographic provinces (floristic provinces) and elements.
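As a rough illustration of how such occurrence records are organized, the following Python sketch (the species names, unit labels, and records are hypothetical) builds a presence/absence table of species across operational geographic units, the kind of table from which floristic provinces and elements are typically derived.

```python
# Minimal sketch: turning occurrence records (species, geographic unit)
# into a presence/absence matrix. Data and names are hypothetical.

records = [
    ("Quercus robur", "Unit_A"),
    ("Quercus robur", "Unit_B"),
    ("Betula pendula", "Unit_A"),
    ("Pinus sylvestris", "Unit_C"),
]

species = sorted({sp for sp, _ in records})
units = sorted({u for _, u in records})
present = {(sp, u) for sp, u in records}

# Print a simple presence (1) / absence (0) table: rows = species, columns = units.
print("species".ljust(18) + " ".join(u.rjust(7) for u in units))
for sp in species:
    row = [("1" if (sp, u) in present else "0").rjust(7) for u in units]
    print(sp.ljust(18) + " ".join(row))
```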
The questions and approaches in phytogeography are largely shared with zoogeography, except that zoogeography is concerned with animal rather than plant distribution. The term phytogeography itself suggests a broad meaning. How the term is actually applied by practicing scientists is apparent in the way periodicals use it. The American Journal of Botany, a monthly primary research journal, frequently publishes a section titled "Systematics, Phytogeography, and Evolution." Topics covered in this section include phylogeography, the distribution of genetic variation, historical biogeography, and general plant species distribution patterns. Biodiversity patterns are not heavily covered.
A flora is the group of all plant species in a specific area or period of time, each species considered independently of its abundance and its relationships to the other species. A flora can be subdivided into floral elements, which are grouped on the basis of common features. A genetic element comprises species that share similar genetic information, i.e. a common evolutionary origin; a migration element shares a common route of access into a habitat; a historical element is united by certain past events; and an ecological element is grouped on the basis of similar environmental factors. A population is the collection of all interacting individuals of a given species in an area.
An area is the entire location where a species, an element or an entire flora can occur. Aerography is concerned with describing that area, while chorology studies its development. The local distribution within the area as a whole, such as that of a swamp shrub, is the topography of that area. Areas are an important factor in forming a picture of how species interactions shape their geography. The nature of an area's margins, its continuity, and its general shape and size relative to other areas make the study of areas crucial in identifying this kind of information. For example, a relict area is one surviving from an earlier and more extensive occurrence. Mutually exclusive plants are called vicarious (areas containing such plants are also called vicarious). The earth's surface is divided into floristic regions, each region associated with a distinctive flora.
History
Phytogeography has a long history. One of the subject's earliest proponents was the Prussian naturalist Alexander von Humboldt, who is often referred to as the "father of phytogeography". Von Humboldt advocated a quantitative approach to phytogeography that has characterized modern plant geography.
Gross patterns of the distribution of plants became apparent early on in the study of plant geography. For example, Alfred Russel Wallace, co-discoverer of the principle of natural selection, discussed the latitudinal gradients in species diversity, a pattern observed in other organisms as well. Much research effort in plant geography has since then been devoted to understanding this pattern and describing it in more detail.
In 1890, the United States Congress passed an act that appropriated funds to send expeditions to discover the geographic distributions of plants (and animals) in the United States. The first of these was The Death Valley Expedition, including Frederick Vernon Coville, Frederick Funston, Clinton Hart Merriam, and others.
Research in plant geography has also been directed to understanding the patterns of adaptation of species to the environment. This is done chiefly by describing geographical patterns of trait/environment relationships. These patterns, termed ecogeographical rules when applied to plants, represent another area of phytogeography.
Floristic regions
Floristics is a study of the flora of some territory or area. Traditional phytogeography concerns itself largely with floristics and floristic classification.
China has been a focus for botanists because of its rich biota, as it holds the record for the earliest known angiosperm megafossil.
See also
Biogeography
Botany
Geobotanical prospecting
Indicator value
Species distribution
Zoogeography
Association (ecology)
References
Bibliography
External links
Biogeography
Advocacy | Advocacy is an activity by an individual or group that aims to influence decisions within political, economic, and social institutions. Advocacy includes activities and publications to influence public policy, laws and budgets by using facts, their relationships, the media, and messaging to educate government officials and the public. Advocacy can include many activities that a person or organization undertakes, including media campaigns, public speaking, commissioning and publishing research. Lobbying (often by lobby groups) is a form of advocacy where a direct approach is made to legislators on a specific issue or specific piece of legislation. Research has started to address how advocacy groups in the United States and Canada are using social media to facilitate civic engagement and collective action.
Forms
There are several forms of advocacy, each representing a different approach to initiating change in society. One of the most popular forms is social justice advocacy. Cohen, de la Vega, and Watson (2001) argue that a simple definition of advocacy does not encompass the notions of power relations, people's participation, and a vision of a just society as promoted by social justice advocates. For them, advocacy represents the series of actions taken and issues highlighted to change the "what is" into a "what should be", considering that this "what should be" is a more decent and a more just society. Those actions, which vary with the political, economic and social environment in which they are conducted, have several points in common. For instance, they:
Question the way policy is administered
Participate in the agenda-setting as they raise significant issues
Target political systems "because those systems are not responding to people's needs"
Are inclusive and engaging
Propose policy solutions
Open up space for public argumentation
Other forms of advocacy include:
Budget advocacy: another aspect of advocacy that ensures proactive engagement of Civil Society Organizations with the government budget to make the government more accountable to the people and promote transparency. Budget advocacy also enables citizens and social action groups to compel the government to be more alert to the needs and aspirations of people in general and the deprived sections of the community.
Bureaucratic advocacy: people considered "experts" have more chance to succeed at presenting their issues to decision-makers. They use bureaucratic advocacy to influence the agenda, although at a slower pace.
Express versus issue advocacy: These two types of advocacy, when grouped together, usually refer to a debate in the United States over whether a group is expressly advocating that voters cast ballots in a particular way, or whether a group is promoting a long-term issue that is not specific to a particular campaign or election season.
Health, environment and climate change negotiations advocacy: supports and promotes patients' health care rights as well as enhances community health and policy initiatives that focus on the availability, safety and quality of care.
Ideological advocacy: in this approach, groups fight, sometimes during protests, to advance their ideas in the decision-making circles.
Interest-group advocacy: lobbying is the main tool used by interest groups doing mass advocacy. It is a form of action that does not always succeed at influencing political decision-makers as it requires resources and organization to be effective.
Legislative advocacy: the "reliance on the state or federal legislative process" as part of a strategy to create change.
Mass advocacy: any type of action taken by large groups (petitions, demonstrations, etc.)
Media advocacy: "the strategic use of the mass media as a resource to advance a social or public policy initiative" (Jernigan and Wright, 1996). In Canada, for example, the Manitoba Public Insurance campaigns illustrate how media advocacy was used to fight alcohol and tobacco-related health issues. We can also consider the role of health advocacy and the media in "the enactment of municipal smoking bylaws in Canada between 1970 and 1995."
Special education advocacy: advocacy with a "specific focus on the educational rights of students with disabilities."
Different contexts in which advocacy is used:
In a legal/law context: An "advocate" is the title of a specific person who is authorized/appointed in some way to speak on behalf of a person in a legal process.
In a political context: An "advocacy group" is an organized collection of people who seek to influence political decisions and policy, without seeking election to public office.
In a social care context: Both terms (and more specific ones such as "independent advocacy") are used in the UK in the context of a network of interconnected organisations and projects which seek to benefit people who are in difficulty (primarily in the context of disability and mental health).
In the context of inclusion: Citizen Advocacy organisations (or programmes) seek to cause benefit by reconnecting people who have become isolated. Their practice was defined in two key documents: CAPE, and Learning from Citizen Advocacy Programs.
Tactics
Margaret E. Keck and Kathryn Sikkink have observed four types of advocacy tactics:
Information politics: quickly and credibly generating politically usable information and moving it to where it will have the most impact.
Symbolic politics: calling upon symbols, actions, or stories that make sense of a situation for an audience that is frequently far away.
Leverage politics: calling upon powerful actors to affect a situation where weaker members of a network are unlikely to have influence.
Accountability politics: efforts to hold powerful actors to their previously stated policies or principles.
These tactics have also been observed within advocacy organizations outside the USA.
Use of the Internet
Groups involved in advocacy work have been using the Internet to accomplish organizational goals. It has been argued that the Internet helps to increase the speed, reach and effectiveness of advocacy-related communication as well as mobilization efforts, suggesting that social media are beneficial to the advocacy community.
Other examples
Advocacy activities may include conducting an exit poll or the filing of an amicus brief.
Topics
People advocate for a large number and variety of topics. Some of these are clear-cut social issues that are universally agreed to be problematic and worth solving, such as human trafficking. Others, such as abortion, are much more divisive and inspire strongly held opinions on both sides. There may never be a consensus on this latter type of issue, but intense advocacy is likely to remain. In the United States, any issue of widespread debate and deeply divided opinion can be referred to as a social issue. The Library of Congress has assembled an extensive list of social issues in the United States, ranging from vast ones such as abortion and same-sex marriage to smaller ones such as hacking and academic cheating.
Topics that appear to involve advancing a certain positive ideal are often known as causes. A particular cause may be very expansive in nature, for instance increasing liberty or fixing a broken political system. In 2008, for example, U.S. presidential candidate Barack Obama used the term in this sense when he said, "this was the moment when we tore down barriers that have divided us for too long; when we rallied people of all parties and ages to a common cause." Change.org and Causes are two popular websites that allow people to organize around a common cause.
Topics upon which there is universal agreement that they need to be solved include, for example, human trafficking, poverty, water and sanitation as a human right.
"Social issues" as referred to in the United States also include topics (also known as "causes") intended by their advocates to advance certain ideals (such as equality) include: civil rights, LGBT rights, women's rights, environmentalism, and veganism.
Transnational advocacy
Advocates and advocacy groups represent a wide range of categories and support several issues as listed on worldadvocacy.com. The Advocacy Institute, a US-based global organization, is dedicated to strengthening the capacity of political, social, and economic justice advocates to influence and change public policy.
The phenomenon of globalization draws special attention to advocacy beyond countries' borders. The core existence of networks such as World Advocacy or the Advocacy Institute demonstrates the increasing importance of transnational advocacy and international advocacy. Transnational advocacy networks are more likely to emerge around issues where external influence is necessary to ease the communication between internal groups and their own government. Groups of advocates willing to further their mission also tend to promote networks and to meet with their internal counterparts to exchange ideas.
Transnational advocacy is increasingly playing a role in advocacy for migrants' rights, and migrant advocacy organizations have strategically called upon governments and international organizations for leverage.
Transnational advocates spend time with local interest groups in order to better understand their views and wishes.
See also
Advocacy group
Cause lawyer
Disability advocacy
Patient advocacy
References
External links
College Board Advocacy & Policy Center
Public Affairs World – news and information site on the subject of lobbying
Activism by type
Urban geography | Urban geography is the subdiscipline of geography that derives from a study of cities and urban processes. Urban geographers and urbanists examine various aspects of urban life and the built environment. Scholars, activists, and the public have participated in, studied, and critiqued flows of economic and natural resources, human and non-human bodies, patterns of development and infrastructure, political and institutional activities, governance, decay and renewal, and notions of socio-spatial inclusions, exclusions, and everyday life. Urban geography draws on other fields of geography, including physical, social, and economic geography. The physical geography of urban environments is essential for understanding why a town is located in a specific area and how environmental conditions influence whether the city develops successfully. Social geography examines societal and cultural values, diversity, and other conditions that relate to people in cities. Economic geography examines economic activity and job flows within the urban population. These various aspects of urban geography are necessary for a better understanding of the layout and planning involved in the development of urban environments worldwide.
Patterns of Urban Development and Infrastructure
The development pattern of a place, such as a city or neighborhood, concerns how buildings and human activities are arranged and organized on the landscape. Urban environments are composed of hard infrastructure, such as roads and bridges, and soft infrastructure, such as health and social services. The construction of urban areas is facilitated through urban planning and architecture. To combat the negative environmental effects of urban development, green infrastructure such as community gardens and parks, sewage and waste systems, and the use of solar energy have been implemented in many cities. The use of green infrastructure has been effective in responding to climate change and reducing flood risks. Green infrastructure, such as home and urban gardens, has been found to not only improve air quality but also promote mental well-being.
Flow of Economic and Natural Resources Within Urban Environments
Over the years, the development of urban environments has continued to increase due to globalization and urbanization. According to the UN, the share of the world's population living in urban areas is projected to increase from 55% to 68% by 2050. The increase in the development of urban environments leads to an increase in economic flows and in the utilization of natural resources. As the population in urban areas continues to grow, the use of direct energy and transport energy tends to increase and is projected to keep rising.
According to a study by Creutzig et al., urban energy use is projected to increase from 240 EJ in 2005 to 730 EJ in 2050 if worldwide urbanization continues. As more people move to cities in search of work, business tends to follow suit. Cities therefore develop a need for new infrastructure such as schools, hospitals, and various public facilities. The development of this kind of soft infrastructure can have a positive impact on residents; for instance, it can promote economic growth by allowing residents to specialize in different areas of expertise. The diversification of careers within the urban population can increase the economic flow within the urban area.
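As a rough back-of-envelope illustration of the figures quoted above (the calculation is ours, not part of the cited study), the implied average annual growth rate of urban energy use over the 45 years from 2005 to 2050 is

\[ r = \left(\frac{730\ \text{EJ}}{240\ \text{EJ}}\right)^{1/45} - 1 \approx 0.025, \]

that is, roughly 2.5% per year sustained over the whole period.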
Human Interactions Within Urban Environments
The development of soft infrastructure within urban areas provides people with ways to connect with one another as a community as well as ways to seek support services. Community infrastructure includes areas and services that allow human beings to interact with one another. Such interactions can be facilitated through health services, educational institutions, outreach centers, and community groups. Human interactions with their urban environments can lead to both positive and negative effects. Humans depend on their environment for essential resources such as clean air, food and shelter. This dependence can lead to the overexploitation of natural resources as the need for such resources increases. Humans can also modify their environment in order to meet their goals. For instance, humans can clear land, including agricultural land, in order to develop urban buildings such as commercial skyscrapers and public housing. The clearing of land to pave the way for urbanization can lead to negative environmental impacts such as deforestation, decreased air quality, and wildlife displacement.
Social and Political Flow Within Urban Environments
As populations within cities grew over the years, the need to create forms of local government emerged. To maintain order within developing cities, politicians are elected to address environmental and societal issues within the population. For instance, local and state political dynamics play an important role in how action is taken to combat climate change and housing issues.
Impact of Urban Geography
Environmental Impact
The environment of urban areas is shaped by urbanization, the transition from rural, town-structured communities to urban, city-structured communities. This transition occurs because humans are pulled to cities by jobs and welfare. In cities, problems such as environmental degradation arise. The increasing population can lead to poor air quality and to reduced water quality and availability. The growth of urbanization can lead to greater energy use, which causes air pollution and can affect human health. Flash flooding is another environmental hazard that can occur due to urban development. Urbanization plays an important role in the study of urban geography because it involves the formation of urban infrastructure such as sanitation, sewage systems, and the distribution of electricity and gas.
Societal Impact
The migration from rural to urban areas is fueled by the search for jobs, education, and social welfare. Trends in urbanization are influenced by push and pull factors. The push factors include rapid population growth in rural areas, which leads many people to migrate to cities in search of better livelihood opportunities, a good quality of life, and a higher standard of living. People are forced to leave their rural homes and move to cities because of factors such as low agricultural productivity, poverty, and food insecurity. In addition to the push factors, there are pull factors, which draw people to cities offering better opportunities, better education, proper public health facilities, and entertainment, all of which provide employment opportunities. The gentrification of urban environments leads to an increase in income gaps, racial inequality, and displacement within metropolitan areas. The negative environmental impacts of urbanization disproportionately affect low-income minority areas more than higher-income communities.
Climate Impact
The increasing demand for new building infrastructure within densely populated cities has resulted in greater air pollution due to high energy usage in these urban areas. Increasing energy use leads to greater heat emissions, which contributes to global warming. Cities are a key contributor to climate change because urban activities are a major source of greenhouse gas emissions. It has been estimated that cities are responsible for about 75% of global carbon dioxide emissions, with transportation and buildings being the largest contributors. To combat the negative environmental impacts of urbanization, many modern cities develop environmentally conscious infrastructure. For instance, the implementation of public transportation such as train and bus systems helps to lessen the use of cars within cities. Solar energy is also used in many commercial and residential buildings, which helps to lessen reliance on non-renewable energy resources.
Biodiversity Impact
Urbanization has a great impact on biodiversity. As cities develop, vital habitats are destroyed or fragmented into patches that are often too small to support complex ecological communities. In cities, species can become endangered or locally extinct. The growth of the human population is the main driver of the expansion of urban areas. As urban areas grow through increasing population and migration, the result can be deforestation, habitat loss, and the extraction of freshwater from the environment, which can decrease biodiversity and alter species ranges and interactions. Habitat loss further reduces species' populations, ranges, and interactions among organisms, although certain life cycles and traits can help species survive and reproduce in disturbed ecosystems. The paving of land with concrete can increase water runoff and erosion and degrade soil quality.
Research interest
Urban geographers are primarily concerned with the ways in which cities and towns are constructed, governed and experienced. Alongside neighboring disciplines such as urban anthropology, urban planning and urban sociology, urban geography mostly investigates the impact of urban processes on the earth's surface's social and physical structures. Urban geographical research can be part of both human geography and physical geography.
The two fundamental aspects of cities and towns, from the geographic perspective are:
Location ("systems of cities"): spatial distribution and the complex patterns of movement, flows and linkages that bind them in space; and
Urban structure ("cities as systems"): study of patterns of distribution and interaction within cities, from quantitative, qualitative, structural, and behavioral perspectives.
Research topics
Cities as centers of manufacturing and services
Cities differ in their economic makeup, their social and demographic characteristics, and the roles they play within the city system. One can trace these differences back to regional variations in the local resources on which growth was based during the early development of the urban pattern and in part to the subsequent shifts in the competitive advantage of regions brought about by changing locational forces affecting regional specialization within the framework of a market economy. The recognition of different city types is critical for the classification of cities in urban geography. For such classification, emphasis is given in particular to functional town classification and to the basic underlying dimensions of the city system.
The purpose of classifying cities is twofold. On the one hand, it is undertaken to search reality for hypotheses. In this context, the recognition of different types of cities on the basis of, for example, their functional specialization may enable the identification of spatial regularities in the distribution and structure of urban functions and the formulation of hypotheses about the resulting patterns. On the other hand, classification is undertaken to structure reality in order to test specific hypotheses that have already been formulated. For example, to test the hypothesis that cities with a diversified economy grow at a faster rate than those with a more specialized economic base, cities must first be classified so that diversified and specialized cities can be differentiated.
The simplest way to classify cities is to identify the distinctive role they play in the city system. There are three distinct roles:
central places functioning primarily as service centers for local hinterlands
transportation cities performing break-of-bulk and allied functions for larger regions
specialized-function cities, dominated by one activity such as mining, manufacturing or recreation and serving national and international markets
The composition of a city's labor force has traditionally been regarded as the best indicator of functional specialization, and different city types have been most frequently identified from the analysis of employment profiles. Specialization in a given activity is said to exist when employment in it exceeds some critical level.
The relationship between the city system and the development of manufacturing has become very apparent. The rapid growth and spread of cities within the heartland-hinterland framework after 1870 was conditioned to a large extent by industrial developments, and the decentralization of population within the urban system in recent years is related in large part to the movement of employment in manufacturing away from traditional industrial centers. Manufacturing is found in nearly all cities, but its importance is measured by the proportion of total earnings received by the inhabitants of an urban area. When 25 percent or more of the total earnings in an urban region derive from manufacturing, that urban area is arbitrarily designated as a manufacturing center.
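The threshold rule described above lends itself to a simple sketch. The following Python example (city names and earnings shares are hypothetical) classifies urban areas as manufacturing centers when 25 percent or more of total earnings derive from manufacturing, in line with the arbitrary designation mentioned in the text.

```python
# Sketch of threshold-based functional classification, following the rule that an
# urban area is designated a manufacturing center when 25% or more of total
# earnings derive from manufacturing. City data are hypothetical.

MANUFACTURING_THRESHOLD = 0.25

cities = {
    # city: {sector: share of total earnings}
    "Alpha": {"manufacturing": 0.31, "services": 0.52, "transport": 0.17},
    "Beta":  {"manufacturing": 0.12, "services": 0.70, "transport": 0.18},
}

def classify(earnings_by_sector):
    """Return 'manufacturing center' if the manufacturing share meets the threshold."""
    share = earnings_by_sector.get("manufacturing", 0.0)
    return "manufacturing center" if share >= MANUFACTURING_THRESHOLD else "other"

for name, shares in cities.items():
    print(name, "->", classify(shares))
```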
The location of manufacturing is affected by myriad economic and non-economic factors, such as the nature of the material inputs, the factors of production, the market and transportation costs. Other important influences include agglomeration and external economies, public policy and personal preferences. Although it is difficult to evaluate precisely the effect of the market on the location of manufacturing activities, two considerations are involved:
the nature of and demand for the product
transportation costs
Urbanization
Urbanization, the transformation of population from rural to urban, is a major phenomenon of the modern era and a central topic of study.
History of the discipline
Urban geography arrived as a critical sub-discipline with the 1973 publication of David Harvey's Social Justice and the City, which was heavily influenced by previous work by Anne Buttimer. Prior to its emergence as its own discipline, urban geography served as the academic extension of what was otherwise a professional development and planning practice. At the turn of the 19th century, urban planning began as a profession charged with mitigating the negative consequences of industrialization as documented by Friedrich Engels in his geographic analysis of the condition of the working class in England, 1844.
In a 1924 study of urban geography, Marcel Aurousseau observed that urban geography cannot be considered a subdivision of geography because it plays such an important part. However, urban geography did emerge as a specialized discipline after World War II, amidst increasing urban planning and a shift away from the primacy of physical terrain in the study of geography. Chauncy Harris and Edward Ullman were among its earliest exponents.
Urban geography arose by the 1930s in the Soviet Union as an academic complement to active urbanization and communist urban planning, focusing on cities' economic roles and potential.
Spatial analysis, behavioral analysis, Marxism, humanism, social theory, feminism, and postmodernism have arisen (in approximately this order) as overlapping lenses used within the field of urban geography in the West.
Geographic information science, using digital processing of large data sets, has become widely used since the 1980s, with major applications for urban geography.
Notable urban geographers and urbanists
Ash Amin
Mike Batty
Walter Benjamin
Anne Buttimer
Michel de Certeau
Tim Cresswell
Mike Davis
Friedrich Engels
Matthew Gandy
Peter Hall (urbanist)
Milton Santos
David Harvey
Jane Jacobs
Henri Lefebvre
David Ley
Peter Marcuse
Doreen Massey
Don Mitchell
Aihwa Ong
Gillian Rose (geographer)
Ananya Roy
Neil Smith (geographer)
Allen J. Scott
Edward W. Soja
Michael Storper
Fulong Wu
Akin Mabogunje
Loretta Lees
See also
Arbia's law of geography
Chicago school (sociology)
Commuter town
Concepts and Techniques in Modern Geography
Garden city movement
Gentrification
Index of urban studies articles
Infrastructure
Municipal or urban engineering
Rural sociology
Settlement geography
Tobler's first law of geography
Tobler's second law of geography
Urban agriculture
Urban area
Urban ecology
Urban economics
Urban field
Urban sociology
Urban studies
Urban vitality
References
External links
Imagining Urban Futures
Social and Spatial Inequalities
Urban Geography Specialty Group of the Association of American Geographers
Urban Geography Research Group of the Royal Geographical Society-Institute of British Geographers
Urbanization
Urban planning
Environmental DNA | Environmental DNA or eDNA is DNA that is collected from a variety of environmental samples such as soil, seawater, snow or air, rather than directly sampled from an individual organism. As various organisms interact with the environment, DNA is expelled and accumulates in their surroundings from various sources. Such eDNA can be sequenced by environmental omics to reveal facts about the species that are present in an ecosystem — even microscopic ones not otherwise apparent or detectable.
In recent years, eDNA has been used as a tool to detect endangered wildlife that were otherwise unseen. In 2020, human health researchers began repurposing eDNA techniques to track the COVID-19 pandemic.
Example sources of eDNA include, but are not limited to, feces, mucus, gametes, shed skin, carcasses and hair. Samples can be analyzed by high-throughput DNA sequencing methods, known as metagenomics, metabarcoding, and single-species detection, for rapid monitoring and measurement of biodiversity. To better differentiate between organisms within a sample, DNA metabarcoding is used, in which sequences from the sample are compared against previously compiled reference DNA libraries, using tools such as BLAST, to determine which organisms are present.
eDNA metabarcoding is a novel method of assessing biodiversity wherein samples are taken from the environment via water, sediment or air from which DNA is extracted, and then amplified using general or universal primers in polymerase chain reaction and sequenced using next-generation sequencing to generate thousands to millions of reads. From this data, species presence can be determined, and overall biodiversity assessed. It is an interdisciplinary method that brings together traditional field-based ecology with in-depth molecular methods and advanced computational tools.
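The workflow just described can be outlined with a minimal, self-contained Python sketch. The reads, primer, and reference barcodes below are toy values chosen only to make the example runnable; real metabarcoding studies rely on dedicated tools for primer trimming, denoising, and taxonomic assignment against curated reference databases (for example via BLAST searches).

```python
# Minimal sketch of the core logic in an eDNA metabarcoding pipeline:
# primer trimming, dereplication, and taxonomy assignment against a
# reference library. All sequences and names are toy examples.

reads = [
    "ACGTTTGGGCCCAAATTT",
    "ACGTTTGGGCCCAAATTT",
    "ACGTAAACCCGGGTTTAA",
]
FORWARD_PRIMER = "ACGT"  # hypothetical universal primer

reference = {  # hypothetical reference barcodes -> taxon
    "TTGGGCCCAAATTT": "Species A",
    "AAACCCGGGTTTAA": "Species B",
}

# 1. Trim the primer from each read.
trimmed = [r[len(FORWARD_PRIMER):] for r in reads if r.startswith(FORWARD_PRIMER)]

# 2. Dereplicate: collapse identical sequences and count read abundance.
counts = {}
for seq in trimmed:
    counts[seq] = counts.get(seq, 0) + 1

# 3. Assign taxonomy by matching against the reference library.
for seq, n in counts.items():
    taxon = reference.get(seq, "unassigned")
    print(f"{taxon}: {n} reads")
```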
The analysis of eDNA has great potential, not only for monitoring common species, but to genetically detect and identify other extant species that could influence conservation efforts. This method allows for biomonitoring without requiring collection of the living organism, creating the ability to study organisms that are invasive, elusive, or endangered without introducing anthropogenic stress on the organism. Access to this genetic information makes a critical contribution to the understanding of population size, species distribution, and population dynamics for species not well documented. Importantly, eDNA is often more cost-effective compared to traditional sampling methods. The integrity of eDNA samples is dependent upon its preservation within the environment.
Soil, permafrost, freshwater and seawater are well-studied macro environments from which eDNA samples have been extracted, each of which include many more conditioned subenvironments. Because of its versatility, eDNA is applied in many subenvironments such as freshwater sampling, seawater sampling, terrestrial soil sampling (tundra permafrost), aquatic soil sampling (river, lake, pond, and ocean sediment), or other environments where normal sampling procedures can become problematic.
On 7 December 2022 a study in Nature reported the recovery of two-million year old eDNA in sediments from Greenland, which is considered the oldest DNA sequenced so far.
Overview
Environmental DNA or eDNA describes the genetic material present in environmental samples such as sediment, water, and air, including whole cells, extracellular DNA and potentially whole organisms. The analysis of eDNA starts with capturing an environmental sample of interest. The DNA in the sample is then extracted and purified. The purified DNA is then amplified for a specific gene target so it can be sequenced and categorised based on its sequence. From this information, detection and classification of species is possible.
eDNA can come from skin, mucous, saliva, sperm, secretions, eggs, feces, urine, blood, roots, leaves, fruit, pollen, and rotting bodies of larger organisms, while microorganisms may be obtained in their entirety. eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use.
Despite being a relatively new method of surveying, eDNA has already proven to have enormous potential in biological monitoring. Conventional methods for surveying richness and abundance are limited by taxonomic identification, may cause disturbance or destruction of habitat, and may rely on methods in which it is difficult to detect small or elusive species, thus making estimates for entire communities impossible. eDNA can complement these methods by targeting different species, sampling greater diversity, and increasing taxonomic resolution. Additionally, eDNA is capable of detecting rare species, but not of determining population quality information such as sex ratios and body conditions, so it is ideal for supplementing traditional studies. Regardless, it has useful applications in detecting the first occurrences of invasive species, the continued presence of native species thought to be extinct or otherwise threatened, and other elusive species occurring in low densities that would be difficult to detect by traditional means.
Degradation of eDNA in the environment limits the scope of eDNA studies, as often only small segments of genetic material remain, particularly in warm, tropical regions. Additionally, the varying lengths of time to degradation based on environmental conditions and the potential of DNA to travel throughout media such as water can affect inference of fine-scale spatiotemporal trends of species and communities. Despite these drawbacks, eDNA still has the potential to determine relative or rank abundance as some studies have found it to correspond with biomass, though the variation inherent in environmental samples makes it difficult to quantify. While eDNA has numerous applications in conservation, monitoring, and ecosystem assessment, as well as others yet to be described, the highly variable concentrations of eDNA and potential heterogeneity through the water body makes it essential that the procedure is optimized, ideally with a pilot study for each new application to ensure that the sampling design is appropriate to detect the target.
Community DNA
While the definition of eDNA seems straightforward, the lines between different forms of DNA become blurred, particularly in comparison to community DNA, which is described as bulk organismal samples. A question arises regarding whole microorganisms captured in eDNA samples: do these organisms alter the classification of the sample to a community DNA sample? Additionally, the classification of genetic material from feces is problematic and often referred to as eDNA. Differentiation between the two is important, as community DNA indicates organismal presence at a particular time and place, while eDNA may have come from a different location, from predator feces, or from past presence; however, this differentiation is often impossible. Nonetheless, eDNA can be loosely classified as including many sectors of DNA biodiversity research, including fecal analysis and bulk samples when they are applicable to biodiversity research and ecosystem analysis.
selfDNA
The concept of selfDNA stems from discoveries made by scientists from the University of Naples Federico II, which were reported during 2015 in the journal New Phytologist, about the self-inhibitory effect of extracellular DNA in plants, but also in bacteria, fungi, algae, protozoa and insects. The environmental source of such extracellular DNA is proposed to be plant litter but also other sources in different ecosystems and organisms, with the size of DNA fragments experimentally shown to have an inhibitory effect upon their conspecific organisms typically ranging between 200 and 500 base pairs. The selfDNA phenomenon has been postulated to drive ecological interactions and to be mechanistically mediated by damage-associated molecular patterns (DAMPs), and to have potential for the development of biocidal applications.
eDNA metabarcoding
By 2019 methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but is only just finding its footing in assessment of macroorganisms. Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study.
eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.
Extracellular and relic DNA
Extracellular DNA, sometimes called relic DNA, is DNA from dead microbes. Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress.
Under the name of environmental DNA, eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity.
In the diagram on the right, the amount of relic DNA in a microbial environment is determined by inputs associated with the mortality of viable individuals with intact DNA and by losses associated with the degradation of relic DNA. If the diversity of sequences contained in the relic DNA pool is sufficiently different from that in the intact DNA pool, then relic DNA may bias estimates of microbial biodiversity (as indicated by different colored boxes) when sampling from the total (intact + relic) DNA pool. Standardised Data on Initiatives (STARDIT) has been proposed as one way of standardising both data about sampling and analysis methods, and taxonomic and ontological relationships.
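The balance of inputs and losses sketched in the diagram can be written as a simple first-order model (an illustration under assumed constant rates, not a model taken from the cited literature). If viable cells of abundance \(N\) die at a per-capita rate \(m\), each death adding DNA to the relic pool \(R\), and relic DNA degrades at rate \(k\), then

\[ \frac{dR}{dt} = mN - kR, \qquad R^{*} = \frac{mN}{k}, \]

so the steady-state relic pool, and hence its share of the total (intact plus relic) DNA sampled, grows as degradation slows (small \(k\)), which is exactly the situation in which relic DNA is most likely to bias diversity estimates.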
Collection
Terrestrial sediments
The importance of eDNA analysis stemmed from the recognition of the limitations presented by culture-based studies. Organisms have adapted to thrive in the specific conditions of their natural environments. Although scientists work to mimic these environments, many microbial organisms can not be removed and cultured in a laboratory setting. The earliest version of this analysis began with ribosomal RNA (rRNA) in microbes to better understand microbes that live in hostile environments. The genetic makeup of some microbes is then only accessible through eDNA analysis. Analytical techniques of eDNA were first applied to terrestrial sediments yielding DNA from both extinct and extant mammals, birds, insects and plants. Samples extracted from these terrestrial sediments are commonly referenced as 'sedimentary ancient DNA' (sedaDNA or dirtDNA). The eDNA analysis can also be used to study current forest communities including everything from birds and mammals to fungi and worms. Samples can be obtained from soil, faeces, 'bite DNA' from where leaves have been bitten, plants and leaves where animals have been, and from the blood meals of captured mosquitos which may have eaten blood from any animals in the area. Some methods can also attempt to capture cells with hair traps and sandpaper in areas commonly transversed by target species.
Aquatic sediments
The sedaDNA was subsequently used to study ancient animal diversity and verified using known fossil records in aquatic sediments. Aquatic sediments are deprived of oxygen and thus protect the DNA from degradation. Beyond ancient studies, this approach can be used to understand current animal diversity with relatively high sensitivity. While typical water samples can have the DNA degrade relatively quickly, aquatic sediment samples can contain useful DNA two months after the species was present. One problem with aquatic sediments is that it is unknown where the organism deposited the eDNA, as it could have moved in the water column.
Aquatic (water column)
Studying eDNA in the water column can indicate the community composition of a body of water. Before eDNA, the main ways to study open water diversity was to use fishing and trapping, which requires resources such as funding and skilled labour, whereas eDNA only needs samples of water. This method is effective as pH of the water does not affect the DNA as much as previously thought, and sensitivity can be increased relatively easily. Sensitivity is how likely the DNA marker will be present in the sampled water, and can be increased simply by taking more samples, having bigger samples, and increasing PCR. eDNA degrades relatively fast in the water column, which is very beneficial in short term conservation studies such as identifying what species are present.
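The effect of taking more samples on sensitivity can be made concrete with a standard occupancy-style calculation (an illustration with hypothetical numbers, not results from the studies cited here). If a single water sample detects the target marker with probability \(p\), then \(n\) independent samples detect it with probability

\[ P_{\text{detect}} = 1 - (1 - p)^{n}, \]

so, for example, a per-sample detection probability of \(p = 0.3\) rises to about \(1 - 0.7^{5} \approx 0.83\) when five samples are taken.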
Researchers at the Experimental Lakes Area in Ontario, Canada and McGill University have found that eDNA distribution reflects lake stratification. As seasons and water temperature change, water density also changes such that it forms distinct layers in small boreal lakes in the summer and winter. These layers mix during the spring and fall. Fish habitat use correlates to stratification (e.g. a cold-water fish like lake trout will stay in cold water) and so does eDNA distribution, as these researchers found.
Monitoring species
eDNA can be used to monitor species throughout the year and can be very useful in conservation monitoring. eDNA analysis has been successful at identifying many different taxa from aquatic plants, aquatic mammals, fishes, mussels, fungi and even parasites. eDNA has been used to study species while minimizing any stress-inducing human interaction, allowing researchers to monitor species presence at larger spatial scales more efficiently. The most prevalent use in current research is using eDNA to study the locations of species at risk, invasive species, and keystone species across all environments. eDNA is especially useful for studying species with small populations because eDNA is sensitive enough to confirm the presence of a species with relatively little effort to collect data, which can often be done with a soil sample or water sample. eDNA relies on the efficiency of genomic sequencing and analysis as well as the survey methods used, which continue to become more efficient and cheaper. Some studies have shown that eDNA sampled from stream and inshore environments decayed to undetectable levels within about 48 hours.
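A first-order decay model gives a feel for the roughly 48-hour detection window mentioned above (the numbers are an illustrative back-of-envelope, not estimates from the cited studies). If eDNA concentration decays as \(C(t) = C_{0} e^{-\lambda t}\), the time to fall below a detection limit \(C_{d}\) is

\[ t_{d} = \frac{\ln(C_{0}/C_{d})}{\lambda}. \]

If, say, the concentration must fall about 100-fold to become undetectable within 48 hours, this implies \(\lambda \approx \ln(100)/48\,\text{h} \approx 0.10\ \text{h}^{-1}\), i.e. a half-life on the order of 7 hours.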
Environmental DNA can be applied as a tool to detect low-abundance organisms in both active and passive forms. Active eDNA surveys target individual species or groups of taxa for detection by using highly sensitive species-specific quantitative real-time PCR or digital droplet PCR markers. CRISPR-Cas methodology has also been applied to the detection of single species from eDNA, utilising the Cas12a enzyme and allowing greater specificity when detecting sympatric taxa. Passive eDNA surveys employ massively parallel DNA sequencing to amplify all eDNA molecules in a sample with no a priori target in mind, providing blanket DNA evidence of biotic community composition.
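For the active, single-species case, detection is usually called from multiple PCR replicates against an assay-specific threshold. The Python sketch below illustrates one such replicate rule; the Cq cutoff, the two-of-three rule, and the sample data are hypothetical and would in practice be set per assay, with appropriate controls and standards.

```python
# Sketch of a replicate-based detection call for a species-specific qPCR eDNA
# assay. Cq cutoff, replicate rule, and data are hypothetical illustrations.

CQ_CUTOFF = 40.0          # amplification later than this is treated as negative
MIN_POSITIVE_REPLICATES = 2

samples = {
    # site: list of Cq values per PCR replicate (None = no amplification)
    "site_1": [36.2, 37.0, None],
    "site_2": [None, None, 41.5],
}

def detected(cq_values):
    """Return True if enough replicates amplified below the Cq cutoff."""
    positives = sum(1 for cq in cq_values if cq is not None and cq < CQ_CUTOFF)
    return positives >= MIN_POSITIVE_REPLICATES

for site, cqs in samples.items():
    print(site, "->", "target detected" if detected(cqs) else "not detected")
```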
Decline of terrestrial arthropods
Terrestrial arthropods are experiencing massive decline in Europe as well as globally, although only a fraction of the species have been assessed and the majority of insects are still undescribed to science. As one example, grassland ecosystems are home to diverse taxonomic and functional groups of terrestrial arthropods, such as pollinators, phytophagous insects, and predators, that use nectar and pollen for food sources, and stem and leaf tissue for food and development. These communities harbor endangered species, since many habitats have disappeared or are under significant threat. Therefore, extensive efforts are being conducted in order to restore European grassland ecosystems and conserve biodiversity. For instance, pollinators like bees and butterflies represent an important ecological group that has undergone severe decline in Europe, indicating a dramatic loss of grassland biodiversity. The vast majority of flowering plants are pollinated by insects and other animals both in temperate regions and the tropics. The majority of insect species are herbivores feeding on different parts of plants, and most of these are specialists, relying on one or a few plant species as their main food resource. However, given the gap in knowledge on existing insect species, and the fact that most species are still undescribed, it is clear that for the majority of plant species in the world, there is limited knowledge about the arthropod communities they harbor and interact with.
Terrestrial arthropod communities have traditionally been collected and studied using methods such as Malaise traps and pitfall traps, which are very effective but somewhat cumbersome and potentially invasive. In some instances, these techniques fall short of performing efficient and standardized surveys, due to, for example, phenotypic plasticity, closely related species, and difficulties in identifying juvenile stages. Furthermore, morphological identification depends directly on taxonomic expertise, which is in decline. All such limitations of traditional biodiversity monitoring have created a demand for alternative approaches. Meanwhile, advances in DNA sequencing technologies continuously provide new means of obtaining biological data. Hence, several new molecular approaches have recently been suggested for obtaining fast and efficient data on arthropod communities and their interactions through non-invasive genetic techniques. These include extracting DNA from sources such as bulk samples or insect soups, empty leaf mines, spider webs, pitcher plant fluid, environmental samples like soil, water, air, and even whole flowers (environmental DNA [eDNA]), identifying host plants and prey from insect DNA extracts, and analysing predator scat from bats. Recently, DNA from pollen attached to insects has also been used to retrieve information on plant–pollinator interactions. Many such studies rely on DNA metabarcoding, i.e. high-throughput sequencing of PCR amplicons using generic primers.
Mammals
Snow tracks
Wildlife researchers in snowy areas also use snow samples to gather and extract genetic information about species of interest. DNA from snow track samples has been used to confirm the presence of such elusive and rare species as polar bears, arctic fox, lynx, wolverines, and fishers.
DNA from the air
In 2021, researchers demonstrated that eDNA can be collected from air and used to identify mammals. In 2023, scientists developed a specialized sampling probe and aircraft surveys to assess biodiversity of multiple taxa, including mammals, using air eDNA.
Managing fisheries
The successful management of commercial fisheries relies on standardised surveys to estimate the quantity and distribution of fish stocks. Atlantic cod (Gadus morhua) is an iconic example that demonstrates how poorly constrained data and uninformed decision making can result in catastrophic stock decline and ensuing economic and social problems. Traditional stock assessments of demersal fish species have relied primarily on trawl surveys, which have provided a valuable stream of information to decision makers. However, there are some notable drawbacks of demersal trawl surveys including cost, gear selectivity/catchability, habitat destruction and restricted coverage (e.g. hard-substrate bottom environments, marine protected areas).
Environmental DNA (eDNA) has emerged as a potentially powerful alternative for studying ecosystem dynamics. The constant loss and shedding of genetic material from macroorganisms imparts a molecular footprint in environmental samples that can be analysed to determine either the presence of specific target species or characterise biodiversity. The combination of next generation sequencing and eDNA sampling has been successfully applied in aquatic systems to document spatial and temporal patterns in the diversity of fish fauna. To further develop the utility of eDNA for fisheries management, understanding the ability of eDNA quantities to reflect fish biomass in the ocean is an important next step.
Positive relationships between eDNA quantities and fish biomass and abundance have been demonstrated in experimental systems. However, known variations in eDNA production and degradation rates are anticipated to complicate these relationships in natural systems. Furthermore, in oceanic systems, large habitat volumes and strong currents are likely to result in the physical dispersal of DNA fragments away from target organisms. These confounding factors have previously been considered to restrict the application of quantitative eDNA monitoring in oceanic settings.
Despite these potential constraints, numerous studies in marine environments have found positive relationships between eDNA quantities and complementary survey efforts, including radio-tagging, visual surveys, echo-sounding and trawl surveys. However, studies that quantify target eDNA concentrations of commercial fish species alongside standardised trawl surveys in marine environments are much scarcer. In this context, direct comparisons of eDNA concentrations with biomass and stock assessment metrics, such as catch per unit effort (CPUE), are necessary to understand the applicability of eDNA monitoring to fisheries management efforts.
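To illustrate the kind of comparison meant here, the sketch below fits a log–log regression of CPUE against eDNA concentration; the paired values are invented placeholders rather than data from any survey.

```python
import numpy as np

# Hypothetical station-level pairs: eDNA concentration (copies per litre)
# and trawl catch per unit effort (kg per hour of towing).
edna = np.array([120.0, 340.0, 80.0, 560.0, 210.0, 950.0])
cpue = np.array([15.0, 42.0, 9.0, 70.0, 25.0, 110.0])

# Ordinary least squares on log10-transformed values:
# log10(CPUE) = slope * log10(eDNA) + intercept
slope, intercept = np.polyfit(np.log10(edna), np.log10(cpue), 1)
r = np.corrcoef(np.log10(edna), np.log10(cpue))[0, 1]

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.2f}")
```

A strong, roughly linear relationship on the log–log scale is the kind of evidence that would support using eDNA concentration as a biomass index alongside CPUE.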
Deep sea sediments
Extracellular DNA in surface deep-sea sediments is by far the largest reservoir of DNA in the world's oceans. The main sources of extracellular DNA in such ecosystems are in situ DNA release from dead benthic organisms and other processes, including cell lysis due to viral infection, cellular exudation and excretion from viable cells, virus decomposition, and allochthonous inputs from the water column. Previous studies have provided evidence that an important fraction of extracellular DNA can escape degradation processes, remaining preserved in the sediments. This DNA potentially represents a genetic repository that records biological processes occurring over time.
Recent investigations revealed that DNA preserved in marine sediments is characterized by a large number of highly diverse gene sequences. In particular, extracellular DNA has been used to reconstruct past prokaryotic and eukaryotic diversity in benthic ecosystems characterized by low temperatures and/or permanently anoxic conditions.
The diagram shows the OTU (operational taxonomic unit) network of the extracellular DNA pools from the sediments of different continental margins. The dot size within the network is proportional to the abundance of sequences for each OTU. Dots circled in red represent extracellular core OTUs, dots circled in yellow are OTUs partially shared among two or more pools, and dots circled in black are OTUs exclusive to a single pool. Core OTUs contributing at least 20 sequences are shown. The numbers in parentheses represent the number of connections among OTUs and samples: 1 for exclusive OTUs, 2–3 for partially shared OTUs and 4 for core OTUs.
Previous studies suggested that the preservation of DNA might also be favoured in benthic systems characterised by high organic matter inputs and sedimentation rates, such as continental margins. These systems, which represent ca. 15% of the global seafloor, are also hotspots of benthic prokaryotic diversity, and therefore they could represent optimal sites for investigating the prokaryotic diversity preserved within extracellular DNA.
Spatial distribution of prokaryotic diversity has been intensively studied in benthic deep-sea ecosystems through the analysis of "environmental DNA" (i.e., the genetic material obtained directly from environmental samples without any obvious signs of biological source material). However, the extent to which gene sequences contained within extracellular DNA can alter the estimates of the diversity of the present-day prokaryotic assemblages is unknown.
Sedimentary ancient DNA
Analyses of ancient DNA preserved in various archives have transformed understanding of the evolution of species and ecosystems. Whilst earlier studies concentrated on DNA extracted from taxonomically constrained samples (such as bones or frozen tissue), advances in high-throughput sequencing and bioinformatics now allow the analysis of ancient DNA extracted from sedimentary archives, so-called sedaDNA. The accumulation and preservation of sedaDNA buried in land and lake sediments have been subject to active research and interpretation. However, studying the deposition of DNA on the ocean floor and its preservation in marine sediments is more complex because the DNA has to travel through a water column for several kilometers. Unlike in the terrestrial environment, with pervasive transport of subfossil biomass from land, the largest portion of marine sedaDNA is derived from the planktonic community, which is dominated by marine microbes and marine protists. After the death of the surface plankton, its DNA is subject to transport through the water column, during which much of the associated organic matter is known to be consumed and respired. This transport could take between 3 and 12 days, depending on the size and morphology of the test. However, it remains unclear how exactly the planktonic eDNA, defined as the total DNA present in the environment, survives this transport, whether degradation or transport are associated with sorting or lateral advection, and finally, whether the eDNA arriving at the seafloor is preserved in marine sediments without further distortion of its composition.
Despite the long exposure to degradation under oxic conditions during transport in the water column, and substantially lower concentration of organic matter on the seafloor, there is evidence that planktonic eDNA is preserved in marine sediments and contains exploitable ecological signal. Earlier studies have shown sedaDNA preservation in marine sediments deposited under anoxia with unusually high amounts of organic matter preserved, but later investigations indicate that sedaDNA can also be extracted from normal marine sediments, dominated by clastic or biogenic mineral fractions. In addition, the low temperature of deep-sea water (0–4 °C) ensures a good preservation of sedaDNA. Using planktonic foraminifera as a "Rosetta Stone", allowing benchmarking of sedaDNA signatures by co-occurring fossil tests of these organisms, Morard et al. showed in 2017 that the fingerprint of plankton eDNA arriving on the seafloor preserves the ecological signature of these organisms at a large geographic scale. This indicates that planktonic community eDNA is deposited onto the seafloor below, together with aggregates, skeletons and other sinking planktonic material. If this is true, sedaDNA should be able to record signatures of surface ocean hydrography, affecting the composition of plankton communities, with the same spatial resolution as the skeletal remains of the plankton. In addition, if the plankton eDNA is arriving on the seafloor in association with aggregates or shells, it is possible that it withstands the transport through the water column by fixation onto mineral surfaces. The same mechanism has been proposed to explain the preservation of sedaDNA in sediments, implying that the flux of planktonic eDNA encapsulated in calcite test arriving on the seafloor is conditioned for preservation upon burial.
Planktonic foraminifera sedaDNA is an ideal proxy both "horizontally", to assess the spatial resolution of reconstructing past surface ocean hydrographic features, and "vertically", to unambiguously track the burial of its signal throughout the sediment column. Indeed, the flux of planktonic foraminifera eDNA should be proportional to the flux of dead foraminiferal shells sinking to the seafloor, allowing independent benchmarking of the eDNA signal. eDNA is a powerful tool for studying ecosystems because it does not require direct taxonomic knowledge, allowing information to be gathered on every organism present in a sample, even at the cryptic level. However, assignment of the eDNA sequences to known organisms is done via comparison with reference sequences (or barcodes) made available in public repositories or curated databases. The taxonomy of planktonic foraminifera is well understood, and barcodes exist that allow almost complete mapping of eDNA amplicons onto the taxonomy based on foraminiferal test morphology. Importantly, the composition of planktonic foraminifera communities is closely linked to surface hydrography, and this signal is preserved by fossil tests deposited on the seafloor. Since foraminiferal eDNA accumulated in the ocean sediment can be recovered, it could be used to analyze changes in planktonic and benthic communities over time.
In 2022, two-million-year-old eDNA was discovered and sequenced in Greenland, and is currently considered the oldest DNA recovered so far.
Participatory research and citizen science
The relative simplicity of eDNA sampling lends itself to projects that seek to involve local communities in research, including the collection and analysis of DNA samples. This can empower local communities (including Indigenous peoples) to be actively involved in monitoring the species in an environment and to help make informed decisions as part of a participatory action research model. An example of such a project has been demonstrated by the charity Science for All with the 'Wild DNA' project.
See also
Circulating free DNA
Exogenous DNA
Extracellular RNA
RNAs present in environmental samples
Shadow Effect (Genetics)
References
Further references
External links
BLAST
Biomeme Guide to eDNA
Measurement of biodiversity
DNA
Aquatic ecosystem
An aquatic ecosystem is an ecosystem found in and around a body of water, in contrast to land-based terrestrial ecosystems. Aquatic ecosystems contain communities of organisms—aquatic life—that are dependent on each other and on their environment. The two main types of aquatic ecosystems are marine ecosystems and freshwater ecosystems. Freshwater ecosystems may be lentic (slow moving water, including pools, ponds, and lakes); lotic (faster moving water, for example streams and rivers); and wetlands (areas where the soil is saturated or inundated for at least part of the time).
Types
Marine ecosystems
Marine coastal ecosystem
Marine surface ecosystem
Freshwater ecosystems
Lentic ecosystem (lakes)
Lotic ecosystem (rivers)
Wetlands
Functions
Aquatic ecosystems perform many important environmental functions. For example, they recycle nutrients, purify water, attenuate floods, recharge ground water and provide habitats for wildlife. The biota of an aquatic ecosystem contribute to its self-purification; the most notable contributors are microorganisms (bacteria, protists and aquatic fungi), phytoplankton, higher plants, invertebrates and fish. These organisms are actively involved in multiple self-purification processes, including the breakdown of organic matter and water filtration. Reliable self-maintenance is crucial for aquatic ecosystems, as they also provide habitats for the species that reside in them.
In addition to environmental functions, aquatic ecosystems are also used for human recreation, and are very important to the tourism industry, especially in coastal regions. They are also used for religious purposes, such as the worshipping of the Jordan River by Christians, and educational purposes, such as the usage of lakes for ecological study.
Biotic characteristics (living components)
The biotic characteristics are mainly determined by the organisms that occur. For example, wetland plants may produce dense canopies that cover large areas of sediment, or snails or geese may graze the vegetation, leaving large mud flats. Aquatic environments have relatively low oxygen levels, forcing adaptation by the organisms found there. For example, many wetland plants must produce aerenchyma to carry oxygen to their roots. Other biotic characteristics are more subtle and difficult to measure, such as the relative importance of competition, mutualism or predation. There is a growing number of cases where grazing by coastal herbivores, including snails, geese and mammals, appears to be a dominant biotic factor.
Autotrophic organisms
Autotrophic organisms are producers that generate organic compounds from inorganic material. Algae use solar energy to generate biomass from carbon dioxide and are possibly the most important autotrophic organisms in aquatic environments. The shallower the water, the greater the biomass contribution from rooted and floating vascular plants. These two sources combine to produce the extraordinary production of estuaries and wetlands, as this autotrophic biomass is converted into fish, birds, amphibians and other aquatic species.
Chemosynthetic bacteria are found in benthic marine ecosystems. These organisms are able to feed on hydrogen sulfide in water that comes from volcanic vents. Great concentrations of animals that feed on these bacteria are found around volcanic vents. For example, there are giant tube worms (Riftia pachyptila) 1.5 m in length and clams (Calyptogena magnifica) 30 cm long.
Heterotrophic organisms
Heterotrophic organisms consume autotrophic organisms and use the organic compounds in their bodies as energy sources and as raw materials to create their own biomass.
Euryhaline organisms are salt tolerant and can survive in marine ecosystems, while stenohaline or salt intolerant species can only live in freshwater environments.
Abiotic characteristics (non-living components)
An ecosystem is composed of biotic communities that are structured by biological interactions and abiotic environmental factors. Some of the important abiotic environmental factors of aquatic ecosystems include substrate type, water depth, nutrient levels, temperature, salinity, and flow. It is often difficult to determine the relative importance of these factors without rather large experiments. There may be complicated feedback loops. For example, sediment may determine the presence of aquatic plants, but aquatic plants may also trap sediment, and add to the sediment through peat.
The amount of dissolved oxygen in a water body is frequently the key substance in determining the extent and kinds of organic life in the water body. Fish need dissolved oxygen to survive, although their tolerance to low oxygen varies among species; in extreme cases of low oxygen, some fish even resort to air gulping. Plants often have to produce aerenchyma, while the shape and size of leaves may also be altered. Conversely, oxygen is fatal to many kinds of anaerobic bacteria.
Nutrient levels are important in controlling the abundance of many species of algae. The relative abundance of nitrogen and phosphorus can in effect determine which species of algae come to dominate. Algae are a very important source of food for aquatic life, but at the same time, if they become over-abundant, they can cause declines in fish when they decay. Similar over-abundance of algae in coastal environments such as the Gulf of Mexico produces, upon decay, a hypoxic region of water known as a dead zone.
The salinity of the water body is also a determining factor in the kinds of species found in the water body. Organisms in marine ecosystems tolerate salinity, while many freshwater organisms are intolerant of salt. The degree of salinity in an estuary or delta is an important control upon the type of wetland (fresh, intermediate, or brackish), and the associated animal species. Dams built upstream may reduce spring flooding, and reduce sediment accretion, and may therefore lead to saltwater intrusion in coastal wetlands.
Freshwater used for irrigation purposes often absorbs levels of salt that are harmful to freshwater organisms.
Threats
The health of an aquatic ecosystem is degraded when the ecosystem's ability to absorb a stress has been exceeded. A stress on an aquatic ecosystem can be a result of physical, chemical or biological alterations to the environment. Physical alterations include changes in water temperature, water flow and light availability. Chemical alterations include changes in the loading rates of biostimulatory nutrients, oxygen-consuming materials, and toxins. Biological alterations include over-harvesting of commercial species and the introduction of exotic species. Human populations can impose excessive stresses on aquatic ecosystems. Climate change driven by anthropogenic activities can harm aquatic ecosystems by disrupting current distribution patterns of plants and animals. It has negatively impacted deep sea biodiversity, coastal fish diversity, crustaceans, coral reefs, and other biotic components of these ecosystems. Human-made aquatic ecosystems, such as ditches, aquaculture ponds, and irrigation channels, may also cause harm to naturally occurring ecosystems by trading off biodiversity with their intended purposes. For instance, ditches are primarily used for drainage, but their presence also negatively affects biodiversity.
There are many examples of excessive stresses with negative consequences. The environmental history of the Great Lakes of North America illustrates this problem, particularly how multiple stresses, such as water pollution, over-harvesting and invasive species can combine. The Norfolk Broadlands in England illustrate similar decline with pollution and invasive species. Lake Pontchartrain along the Gulf of Mexico illustrates the negative effects of different stresses including levee construction, logging of swamps, invasive species and salt water intrusion.
See also
Ocean
References
Aquatic ecology
Ecosystems
Aquatic plants
Fisheries science
Systems ecology
Water
Future of Earth
The biological and geological future of Earth can be extrapolated based on the estimated effects of several long-term influences. These include the chemistry at Earth's surface, the cooling rate of the planet's interior, the gravitational interactions with other objects in the Solar System, and a steady increase in the Sun's luminosity. An uncertain factor is the pervasive influence of technology introduced by humans, such as climate engineering, which could cause significant changes to the planet. For example, the current Holocene extinction is being caused by technology, and the effects may last for up to five million years. In turn, technology may result in the extinction of humanity, leaving the planet to gradually return to a slower evolutionary pace resulting solely from long-term natural processes.
Over time intervals of hundreds of millions of years, random celestial events pose a global risk to the biosphere, which can result in mass extinctions. These include impacts by comets or asteroids and the possibility of a near-Earth supernova—a massive stellar explosion within a radius of the Sun. Other large-scale geological events are more predictable. Milankovitch's theory predicts that the planet will continue to undergo glacial periods at least until the Quaternary glaciation comes to an end. These periods are caused by the variations in eccentricity, axial tilt, and precession of Earth's orbit. As part of the ongoing supercontinent cycle, plate tectonics will probably result in a supercontinent in 250–350 million years. Sometime in the next 1.5–4.5 billion years, Earth's axial tilt may begin to undergo chaotic variations, with changes in the axial tilt of up to 90°.
The luminosity of the Sun will steadily increase, causing a rise in the solar radiation reaching Earth and resulting in a higher rate of weathering of silicate minerals. This will affect the carbonate–silicate cycle, which will cause a decrease in the level of carbon dioxide in the atmosphere. In about 600 million years from now, the level of carbon dioxide will fall below the level needed to sustain C3 carbon fixation photosynthesis used by trees. Some plants use the C4 carbon fixation method to persist at carbon dioxide concentrations as low as ten parts per million. However, the long-term trend is for plant life to die off altogether. The extinction of plants will be the demise of almost all animal life since plants are the base of much of the animal food chain on Earth.
In about one billion years the solar luminosity will be 10% higher, causing the atmosphere to become a "moist greenhouse", resulting in a runaway evaporation of the oceans. As a likely consequence, plate tectonics and the entire carbon cycle will end. Following this event, in about 2–3 billion years, the planet's magnetic dynamo may cease, causing the magnetosphere to decay and leading to an accelerated loss of volatiles from the outer atmosphere. Four billion years from now, the increase in Earth's surface temperature will cause a runaway greenhouse effect, creating conditions more extreme than present-day Venus and heating Earth's surface enough to melt it. By that point, all life on Earth will be extinct. Finally, the most probable fate of the planet is absorption by the Sun in about 7.5 billion years, after the star has entered the red giant phase and expanded beyond the planet's current orbit.
Human influence
Humans play a key role in the biosphere, with the large human population dominating many of Earth's ecosystems. This has resulted in a widespread, ongoing mass extinction of other species during the present geological epoch, now known as the Holocene extinction. The large-scale loss of species caused by human influence since the 1950s has been called a biotic crisis, with an estimated 10% of the total species lost as of 2007. At current rates, about 30% of species are at risk of extinction in the next hundred years. The Holocene extinction event is the result of habitat destruction, the widespread distribution of invasive species, poaching, and climate change. In the present day, human activity has had a significant impact on the surface of the planet. More than a third of the land surface has been modified by human actions, and humans use about 20% of global primary production. The concentration of carbon dioxide in the atmosphere has increased by close to 50% since the start of the Industrial Revolution.
The consequences of a persistent biotic crisis have been predicted to last for at least five million years. It could result in a decline in biodiversity and homogenization of biotas, accompanied by a proliferation of species that are opportunistic, such as pests and weeds. Novel species may emerge; in particular taxa that prosper in human-dominated ecosystems may rapidly diversify into many new species. Microbes are likely to benefit from the increase in nutrient-enriched environmental niches. No new species of existing large vertebrates are likely to arise and food chains will probably be shortened.
There are multiple scenarios for known risks that can have a global impact on the planet. From the perspective of humanity, these can be subdivided into survivable risks and terminal risks. Risks that humans pose to themselves include climate change, the misuse of nanotechnology, a nuclear holocaust, warfare with a programmed superintelligence, a genetically engineered disease, or a disaster caused by a physics experiment. Similarly, several natural events may pose a doomsday threat, including a highly virulent disease, the impact of an asteroid or comet, runaway greenhouse effect, and resource depletion. There may be the possibility of an infestation by an extraterrestrial lifeform. The actual odds of these scenarios occurring are difficult if not impossible to deduce.
Should the human species become extinct, then the various features assembled by humanity will begin to decay. The largest structures have an estimated decay half-life of about 1,000 years. The last surviving structures would most likely be open-pit mines, large landfills, major highways, wide canal cuts, and earth-fill flank dams. A few massive stone monuments like the pyramids at the Giza Necropolis or the sculptures at Mount Rushmore may still survive in some form after a million years.
Cataclysmic astronomical events
As the Sun orbits the Milky Way, wandering stars may approach close enough to have a disruptive influence on the Solar System. A close stellar encounter may cause a significant reduction in the perihelion distances of comets in the Oort cloud—a spherical region of icy bodies orbiting within half a light-year of the Sun. Such an encounter can trigger a 40-fold increase in the number of comets reaching the inner Solar System. Impacts from these comets can trigger a mass extinction of life on Earth. These disruptive encounters occur an average of once every 45 million years. There is a 1% chance every billion years that a star will pass within of the Sun, potentially disrupting the Solar System. The mean time for the Sun to collide with another star in the solar neighborhood is approximately 30 trillion years, which is much longer than the estimated age of the Universe, at approximately 13.8 billion years. This can be taken as an indication of the low likelihood of such an event occurring during the lifetime of the Earth.
The energy released from the impact of an asteroid or comet with a diameter of or larger is sufficient to create a global environmental disaster and cause a statistically significant increase in the number of species extinctions. Among the deleterious effects resulting from a major impact event is a cloud of fine dust ejecta blanketing the planet, blocking some direct sunlight from reaching the Earth's surface thus lowering land temperatures by about within a week and halting photosynthesis for several months (similar to a nuclear winter). The mean time between major impacts is estimated to be at least 100 million years. During the last 540 million years, simulations demonstrated that such an impact rate is sufficient to cause five or six mass extinctions and 20 to 30 lower severity events. This matches the geologic record of significant extinctions during the Phanerozoic Eon. Such events can be expected to continue.
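As a back-of-the-envelope check (treating major impacts as a Poisson process is an assumption of this example, not a claim of the cited simulations), a mean interval of roughly 100 million years over the 540 million years of the Phanerozoic gives an expected count of
\[
\mathbb{E}[N] \approx \frac{T}{\tau} \approx \frac{540\ \text{Myr}}{100\ \text{Myr}} \approx 5.4,
\]
which is consistent with the five or six mass extinctions noted above.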
A supernova is a cataclysmic explosion of a star. Within the Milky Way galaxy, supernova explosions occur on average once every 40 years. During the history of Earth, multiple such events have likely occurred within a distance of 100 light-years; known as a near-Earth supernova. Explosions inside this distance can contaminate the planet with radioisotopes and possibly impact the biosphere. Gamma rays emitted by a supernova react with nitrogen in the atmosphere, producing nitrous oxides. These molecules cause a depletion of the ozone layer that protects the surface from ultraviolet (UV) radiation from the Sun. An increase in UV-B radiation of only 10–30% is sufficient to cause a significant impact on life; particularly to the phytoplankton that form the base of the oceanic food chain. A supernova explosion at a distance of 26 light-years will reduce the ozone column density by half. On average, a supernova explosion occurs within 32 light-years once every few hundred million years, resulting in a depletion of the ozone layer lasting several centuries. Over the next two billion years, there will be about 20 supernova explosions and one gamma ray burst that will have a significant impact on the planet's biosphere.
The incremental effect of gravitational perturbations between the planets causes the inner Solar System as a whole to behave chaotically over long time periods. This does not significantly affect the stability of the Solar System over intervals of a few million years or less, but over billions of years, the orbits of the planets become unpredictable. Computer simulations of the Solar System's evolution over the next five billion years suggest that there is a small (less than 1%) chance that a collision could occur between Earth and either Mercury, Venus, or Mars. During the same interval, the odds that Earth will be scattered out of the Solar System by a passing star are on the order of 1 in 100,000 (0.001%). In such a scenario, the oceans would freeze solid within several million years, leaving only a few pockets of liquid water about underground. There is a remote chance that Earth will instead be captured by a passing binary star system, allowing the planet's biosphere to remain intact. The odds of this happening are about 1 in 3 million.
Orbit and rotation
The gravitational perturbations of the other planets in the Solar System combine to modify the orbit of Earth and the orientation of its rotation axis. These changes can influence the planetary climate. Despite such interactions, highly accurate simulations show that overall, Earth's orbit is likely to remain dynamically stable for billions of years into the future. In all 1,600 simulations, the planet's semimajor axis, eccentricity, and inclination remained nearly constant.
Glaciation
Historically, there have been cyclical ice ages in which glacial sheets periodically covered the higher latitudes of the continents. Ice ages may occur because of changes in ocean circulation and continentality induced by plate tectonics. The Milankovitch theory predicts that glacial periods occur during ice ages because of astronomical factors in combination with climate feedback mechanisms. The primary astronomical drivers are a higher-than-normal orbital eccentricity, a low axial tilt (or obliquity), and the alignment of the northern hemisphere's summer solstice with the aphelion. Each of these effects occurs cyclically. For example, the eccentricity changes over time cycles of about 100,000 and 400,000 years, with the value ranging from less than 0.01 up to 0.05. This is equivalent to a change of the semiminor axis of the planet's orbit from about 99.995% of the semimajor axis to 99.88%, respectively.
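The percentages follow from the standard ellipse relation between semimajor axis a, semiminor axis b and eccentricity e (a worked check, not part of the cited text):
\[
b = a\sqrt{1 - e^{2}}, \qquad
e = 0.05 \;\Rightarrow\; \frac{b}{a} \approx 0.9987 \;(\approx 99.88\%), \qquad
e = 0.01 \;\Rightarrow\; \frac{b}{a} \approx 0.99995.
\]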
Earth is passing through an ice age known as the quaternary glaciation, and is presently in the Holocene interglacial period. This period would normally be expected to end in about 25,000 years. However, the increased rate at which humans release carbon dioxide into the atmosphere may delay the onset of the next glacial period until at least 50,000–130,000 years from now. On the other hand, a global warming period of finite duration (based on the assumption that fossil fuel use will cease by the year 2200) will probably only impact the glacial period for about 5,000 years. Thus, a brief period of global warming induced by a few centuries' worth of greenhouse gas emission would only have a limited impact in the long term.
Obliquity
The tidal acceleration of the Moon slows the rotation rate of the Earth and increases the Earth-Moon distance. Friction effects—between the core and mantle and between the atmosphere and surface—can dissipate the Earth's rotational energy. These combined effects are expected to increase the length of the day by more than 1.5 hours over the next 250 million years, and to increase the obliquity by about a half degree. The distance to the Moon will increase by about 1.5 Earth radii during the same period.
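These projections are roughly what a simple extrapolation of the present-day rates implies; the arithmetic below is a consistency check, not part of the cited models:
\[
\frac{1.5\ \text{h}}{250\ \text{Myr}} = \frac{5400\ \text{s}}{2.5\times 10^{8}\ \text{yr}} \approx 2.2\times 10^{-5}\ \text{s/yr} \approx 2\ \text{ms per century},
\qquad
\frac{1.5\,R_\oplus}{250\ \text{Myr}} \approx \frac{9.6\times 10^{3}\ \text{km}}{2.5\times 10^{8}\ \text{yr}} \approx 3.8\ \text{cm/yr},
\]
values in line with the slowdown inferred from historical eclipse records and the lunar recession measured by laser ranging.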
Based on computer models, the presence of the Moon appears to stabilize the obliquity of the Earth, which may help the planet to avoid dramatic climate changes. This stability is achieved because the Moon increases the precession rate of the Earth's rotation axis, thereby avoiding resonances between the precession of the rotation and precession of the planet's orbital plane (that is, the precession motion of the ecliptic). However, as the semimajor axis of the Moon's orbit continues to increase, this stabilizing effect will diminish. At some point, perturbation effects will probably cause chaotic variations in the obliquity of the Earth, and the axial tilt may change by angles as high as 90° from the plane of the orbit. This is expected to occur between 1.5 and 4.5 billion years from now.
A high obliquity would probably result in dramatic changes in the climate and may destroy the planet's habitability. When the axial tilt of the Earth exceeds 54°, the yearly insolation at the equator is less than that at the poles. The planet could remain at an obliquity of 60° to 90° for periods as long as 10 million years.
Geodynamics
Tectonics-based events will continue to occur well into the future and the surface will be steadily reshaped by tectonic uplift, extrusions, and erosion. Mount Vesuvius can be expected to erupt about 40 times over the next 1,000 years. During the same period, about five to seven earthquakes of magnitude 8 or greater should occur along the San Andreas Fault, while about 50 events of magnitude 9 may be expected worldwide. Mauna Loa should experience about 200 eruptions over the next 1,000 years, and the Old Faithful Geyser will likely cease to operate. The Niagara Falls will continue to retreat upstream, reaching Buffalo in about 30,000–50,000 years. Supervolcano events are the most impactful geological hazards, generating over of fragmented material and covering thousands of square kilometers with ash deposits. However, they are comparatively rare, occurring on average every 100,000 years.
In 10,000 years, the post-glacial rebound of the Baltic Sea will have reduced the depth by about . The Hudson Bay will decrease in depth by 100 m over the same period. After 100,000 years, the island of Hawaii will have shifted about to the northwest. The planet may be entering another glacial period by this time.
Continental drift
The theory of plate tectonics demonstrates that the continents of the Earth are moving across the surface at the rate of a few centimeters per year. This is expected to continue, causing the plates to relocate and collide. Continental drift is facilitated by two factors: the energy generated within the planet and the presence of a hydrosphere. With the loss of either of these, continental drift will come to a halt. The production of heat through radiogenic processes is sufficient to maintain mantle convection and plate subduction for at least the next 1.1 billion years.
At present, the continents of North and South America are moving westward from Africa and Europe. Researchers have produced several scenarios about how this will continue in the future. These geodynamic models can be distinguished by the subduction flux, whereby the oceanic crust moves under a continent. In the introversion model, the younger, interior, Atlantic Ocean becomes preferentially subducted and the current migration of North and South America is reversed. In the extroversion model, the older, exterior, Pacific Ocean remains preferentially subducted and North and South America migrate toward eastern Asia.
As the understanding of geodynamics improves, these models will be subject to revision. In 2008, for example, a computer simulation was used to predict that a reorganization of the mantle convection will occur over the next 100 million years, creating a new supercontinent composed of Africa, Eurasia, Australia, Antarctica and South America to form around Antarctica.
Regardless of the outcome of the continental migration, the continued subduction process causes water to be transported to the mantle. A geophysical model estimates that, one billion years from the present, 27% of the current ocean mass will have been subducted. If this process were to continue unmodified into the future, subduction and release would reach an equilibrium after 65% of the current ocean mass had been subducted.
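The two figures are consistent with a simple first-order approach to equilibrium. The sketch below fits such a curve purely for illustration; the exponential form is an assumption of this example, not the geophysical model behind the estimates.

```python
import math

F_EQ = 0.65     # equilibrium fraction of the current ocean mass subducted
F_1GYR = 0.27   # fraction subducted after 1 billion years (from the text)

# Assume f(t) = F_EQ * (1 - exp(-t / tau)) and solve for tau at t = 1 Gyr.
tau_gyr = -1.0 / math.log(1.0 - F_1GYR / F_EQ)

def subducted_fraction(t_gyr: float) -> float:
    """Illustrative first-order model of ocean mass lost to the mantle."""
    return F_EQ * (1.0 - math.exp(-t_gyr / tau_gyr))

print(f"tau = {tau_gyr:.1f} Gyr; f(2 Gyr) = {subducted_fraction(2.0):.2f}")
# tau = 1.9 Gyr; f(2 Gyr) = 0.43
```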
Introversion
Christopher Scotese and his colleagues have mapped out the predicted motions several hundred million years into the future as part of the Paleomap Project. In their scenario, 50 million years from now the Mediterranean Sea may vanish, and the collision between Europe and Africa will create a long mountain range extending to the current location of the Persian Gulf. Australia will merge with Indonesia, and Baja California will slide northward along the coast. New subduction zones may appear off the eastern coast of North and South America, and mountain chains will form along those coastlines. The migration of Antarctica to the north will cause all of its ice sheets to melt. This, along with the melting of the Greenland ice sheets, will raise the average ocean level by . The inland flooding of the continents will result in climate changes.
As this scenario continues, by 100 million years from the present, the continental spreading will have reached its maximum extent and the continents will then begin to coalesce. In 250 million years, North America will collide with Africa. South America will wrap around the southern tip of Africa. The result will be the formation of a new supercontinent (sometimes called Pangaea Ultima), with the Pacific Ocean stretching across half the planet. Antarctica will reverse direction and return to the South Pole, building up a new ice cap.
Extroversion
The first scientist to extrapolate the current motions of the continents was Canadian geologist Paul F. Hoffman of Harvard University. In 1992, Hoffman predicted that the continents of North and South America would continue to advance across the Pacific Ocean, pivoting about Siberia until they begin to merge with Asia. He dubbed the resulting supercontinent, Amasia. Later, in the 1990s, Roy Livermore calculated a similar scenario. He predicted that Antarctica would start to migrate northward, and East Africa and Madagascar would move across the Indian Ocean to collide with Asia.
In an extroversion model, the closure of the Pacific Ocean would be complete in about 350 million years. This marks the completion of the current supercontinent cycle, wherein the continents split apart and then rejoin each other about every 400–500 million years. Once the supercontinent is built, plate tectonics may enter a period of inactivity as the rate of subduction drops by an order of magnitude. This period of stability could cause an increase in the mantle temperature at the rate of every 100 million years, which is the minimum lifetime of past supercontinents. As a consequence, volcanic activity may increase.
Supercontinent
The formation of a supercontinent can dramatically affect the environment. The collision of plates will result in mountain building, thereby shifting weather patterns. Sea levels may fall because of increased glaciation. The rate of surface weathering can rise, increasing the rate at which organic material is buried. Supercontinents can cause a drop in global temperatures and an increase in atmospheric oxygen. This, in turn, can affect the climate, further lowering temperatures. All of these changes can result in more rapid biological evolution as new niches emerge.
The formation of a supercontinent insulates the mantle. The flow of heat will be concentrated, resulting in volcanism and the flooding of large areas with basalt. Rifts will form and the supercontinent will split up once more. The planet may then experience a warming period as occurred during the Cretaceous period, which marked the split-up of the previous Pangaea supercontinent.
Solidification of the outer core
The iron-rich core region of the Earth is divided into a diameter solid inner core and a diameter liquid outer core. The rotation of the Earth creates convective eddies in the outer core region that cause it to function as a dynamo. This generates a magnetosphere about the Earth that deflects particles from the solar wind, which prevents significant erosion of the atmosphere from sputtering. As heat from the core is transferred outward toward the mantle, the net trend is for the inner boundary of the liquid outer core region to freeze, thereby releasing thermal energy and causing the solid inner core to grow. This iron crystallization process has been ongoing for about a billion years. In the modern era, the radius of the inner core is expanding at an average rate of roughly per year, at the expense of the outer core. Nearly all of the energy needed to power the dynamo is being supplied by this process of inner core formation.
The inner core is expected to consume most or all of the outer core 3–4 billion years from now, resulting in an almost completely solidified core composed of iron and other heavy elements. The surviving liquid envelope will mainly consist of lighter elements that will undergo less mixing. Alternatively, if at some point plate tectonics cease, the interior will cool less efficiently, which would slow down or even stop the inner core's growth. In either case, this can result in the loss of the magnetic dynamo. Without a functioning dynamo, the magnetic field of the Earth will decay in a geologically short time period of roughly 10,000 years. The loss of the magnetosphere will cause an increase in erosion of light elements, particularly hydrogen, from the Earth's outer atmosphere into space, resulting in less favorable conditions for life.
Solar evolution
The energy generation of the Sun is based upon thermonuclear fusion of hydrogen into helium. This occurs in the core region of the star using the proton–proton chain reaction process. Because there is no convection in the solar core, the helium concentration builds up in that region without being distributed throughout the star. The temperature at the core of the Sun is too low for nuclear fusion of helium atoms through the triple-alpha process, so these atoms do not contribute to the net energy generation that is needed to maintain hydrostatic equilibrium of the Sun.
At present, nearly half the hydrogen at the core has been consumed, with the remainder of the atoms consisting primarily of helium. As the number of hydrogen atoms per unit mass decreases, so too does their energy output provided through nuclear fusion. This results in a decrease in pressure support, which causes the core to contract until the increased density and temperature bring the core pressure into equilibrium with the layers above. The higher temperature causes the remaining hydrogen to undergo fusion at a more rapid rate, thereby generating the energy needed to maintain the equilibrium.
The result of this process has been a steady increase in the energy output of the Sun. When the Sun first became a main sequence star, it radiated only 70% of the current luminosity. The luminosity has increased in a nearly linear fashion to the present, rising by 1% every 110 million years. Likewise, in three billion years the Sun is expected to be 33% more luminous. The hydrogen fuel at the core will finally be exhausted in five billion years, when the Sun will be 67% more luminous than at present. Thereafter, the Sun will continue to burn hydrogen in a shell surrounding its core until the luminosity reaches 121% above the present value. This marks the end of the Sun's main-sequence lifetime, and thereafter it will pass through the subgiant stage and evolve into a red giant.
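A widely used analytic approximation for the Sun's main-sequence brightening, due to Gough (1981), broadly reproduces these figures; the sketch below is illustrative and is not the stellar-evolution model behind the numbers quoted above.

```python
SOLAR_AGE_GYR = 4.57  # assumed present age of the Sun

def relative_luminosity(age_gyr: float) -> float:
    """Gough's (1981) approximation: L(t)/L_now = 1 / (1 + (2/5)(1 - t/t_now))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / SOLAR_AGE_GYR))

for dt in (0, 1, 3, 5):
    print(f"+{dt} Gyr: {relative_luminosity(SOLAR_AGE_GYR + dt):.2f} x present")
# Prints roughly 1.00, 1.10, 1.36 and 1.78 times the present luminosity,
# close to (though not identical with) the 10%, 33% and 67% increases cited.
```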
By this time, the collision of the Milky Way and Andromeda galaxies should be underway. Although this could result in the Solar System being ejected from the newly combined galaxy, it is considered unlikely to have any adverse effect on the Sun or its planets.
Climate impact
The rate of weathering of silicate minerals will increase as rising temperatures speed up chemical processes. This, in turn, will decrease the level of carbon dioxide in the atmosphere, as reactions with silicate minerals convert carbon dioxide gas into solid carbonates. Within the next 600 million years from the present, the concentration of carbon dioxide will fall below the critical threshold needed to sustain C3 photosynthesis: about 50 parts per million. At this point, trees and forests in their current forms will no longer be able to survive. This decline in plant life is likely to be a long-term decline rather than a sharp drop. Plant groups will likely die one by one well before the 50 parts per million level is reached. The first plants to disappear will be C3 herbaceous plants, followed by deciduous forests, evergreen broad-leaf forests and finally evergreen conifers. However, C4 carbon fixation can continue at much lower concentrations, down to about 10 parts per million; thus, plants using C4 photosynthesis may be able to survive for at least 0.8 billion years and possibly as long as 1.2 billion years from now, after which rising temperatures will make the biosphere unsustainable. Researchers at Caltech have suggested that once C3 plants die off, the lack of biological production of oxygen and nitrogen will cause a reduction in Earth's atmospheric pressure, which will counteract the temperature rise, and allow enough carbon dioxide to persist for photosynthesis to continue. This would allow life to survive up to 2 billion years from now, at which point water would be the limiting factor.
Currently, C4 plants represent about 5% of Earth's plant biomass and 1% of its known plant species. For example, about 50% of all grass species (Poaceae) use the C4 photosynthetic pathway, as do many species in the herbaceous family Amaranthaceae.
When the carbon dioxide levels fall to the limit where photosynthesis is barely sustainable, the proportion of carbon dioxide in the atmosphere is expected to oscillate up and down. This will allow land vegetation to flourish each time the level of carbon dioxide rises due to tectonic activity and respiration from animal life; however, the long-term trend is for the plant life on land to die off altogether as most of the remaining carbon in the atmosphere becomes sequestered in the Earth. Plants—and, by extension, animals—could survive longer by evolving other strategies such as requiring less carbon dioxide for photosynthetic processes, becoming carnivorous, adapting to desiccation, or associating with fungi. These adaptations are likely to appear near the beginning of the moist greenhouse (see further).
The loss of higher plant life will result in the eventual loss of oxygen as well as ozone due to the respiration of animals, chemical reactions in the atmosphere, and volcanic eruptions. Modeling of the decline in oxygenation predicts that it may drop to 1% of the current atmospheric levels by one billion years from now. This decline will result in less attenuation of DNA-damaging UV, as well as the death of animals; the first animals to disappear would be large mammals, followed by small mammals, birds, amphibians and large fish, reptiles and small fish, and finally invertebrates.
Before this happens, it is expected that life would concentrate at refugia of lower temperatures such as high elevations where less land surface area is available, thus restricting population sizes. Smaller animals would survive better than larger ones because of lesser oxygen requirements, while birds would fare better than mammals thanks to their ability to travel large distances looking for cooler temperatures. Based on oxygen's half-life in the atmosphere, animal life would last at most 100 million years after the loss of higher plants. Some cyanobacteria and phytoplankton could outlive plants due to their tolerance for carbon dioxide levels as low as 1 ppm, and may survive for around the same time as animals before carbon dioxide becomes too depleted to support any form of photosynthesis.
In their work The Life and Death of Planet Earth, authors Peter D. Ward and Donald Brownlee have argued that some form of animal life may continue even after most of the Earth's plant life has disappeared. Ward and Brownlee use fossil evidence from the Burgess Shale in British Columbia, Canada, to determine the climate of the Cambrian Explosion, and use it to predict the climate of the future when rising global temperatures caused by a warming Sun and declining oxygen levels result in the final extinction of animal life. Initially, they expect that some insects, lizards, birds, and small mammals may persist, along with sea life; however, without oxygen replenishment by plant life, they believe that animals would probably die off from asphyxiation within a few million years. Even if sufficient oxygen were to remain in the atmosphere through the persistence of some form of photosynthesis, the steady rise in global temperature would result in a gradual loss of biodiversity.
As temperatures rise, the last of animal life will be driven toward the poles, possibly underground. They would become primarily active during the polar night, aestivating during the polar day due to the intense heat. Much of the surface would become a barren desert and life would primarily be found in the oceans. However, due to a decrease in the amount of organic matter entering the oceans from land as well as a decrease in dissolved oxygen, sea life would disappear too, following a similar path to that on Earth's surface. This process would start with the loss of freshwater species and conclude with invertebrates, particularly those that do not depend on living plants such as termites or those near hydrothermal vents such as worms of the genus Riftia. As a result of these processes, multicellular life forms may be extinct in about 800 million years, and eukaryotes in 1.3 billion years, leaving only the prokaryotes.
Loss of oceans
One billion years from now, about 27% of the modern ocean will have been subducted into the mantle. If this process were allowed to continue uninterrupted, it would reach an equilibrium state where 65% of the current surface reservoir would remain at the surface. Once the solar luminosity is 10% higher than its current value, the average global surface temperature will rise to . The atmosphere will become a "moist greenhouse" leading to a runaway evaporation of the oceans. At this point, models of the Earth's future environment demonstrate that the stratosphere would contain increasing levels of water. These water molecules will be broken down through photodissociation by solar UV, allowing hydrogen to escape the atmosphere. The net result would be a loss of the world's seawater in about 1 to 1.5 billion years from the present, depending on the model.
There will be one of two variations of this future warming feedback: the "moist greenhouse" where water vapor dominates the troposphere while water vapor starts to accumulate in the stratosphere (if the oceans evaporate very quickly), and the "runaway greenhouse" where water vapor becomes a dominant component of the atmosphere (if the oceans evaporate too slowly). In this ocean-free era, there will continue to be surface reservoirs as water is steadily released from the deep crust and mantle, where it is estimated that there is an amount of water equivalent to several times that currently present in the Earth's oceans. Some water may be retained at the poles and there may be occasional rainstorms, but for the most part, the planet would be a desert with large dunefields covering its equator, and a few salt flats on what was once the ocean floor, similar to the ones in the Atacama Desert in Chile.
With no water to serve as a lubricant, plate tectonics would likely stop and the most visible signs of geological activity would be shield volcanoes located above mantle hotspots. In these arid conditions the planet may retain some microbial and possibly even multicellular life. Most of these microbes will be halophiles and life could find refuge in the atmosphere as has been proposed to have happened on Venus. However, the increasingly extreme conditions will likely lead to the extinction of the prokaryotes between 1.6 billion years and 2.8 billion years from now, with the last of them living in residual ponds of water at high latitudes and heights or in caverns with trapped ice. However, underground life could last longer.
What proceeds after this depends on the level of tectonic activity. A steady release of carbon dioxide by volcanic eruption could cause the atmosphere to enter a "super-greenhouse" state like that of the planet Venus. But, as stated above, without surface water, plate tectonics would probably come to a halt and most of the carbonates would remain securely buried until the Sun becomes a red giant and its increased luminosity heats the rock to the point of releasing the carbon dioxide. However, as pointed out by Peter Ward and Donald Brownlee in their book The Life and Death of Planet Earth, according to NASA Ames scientist Kevin Zahnle, it is highly possible that plate tectonics may stop long before the loss of the oceans, due to the gradual cooling of the Earth's core, which could happen in just 500 million years. This could potentially turn the Earth back into a water world, and even perhaps drowning all remaining land life.
The loss of the oceans could be delayed until 2 billion years in the future if the atmospheric pressure were to decline. A lower atmospheric pressure would reduce the greenhouse effect, thereby lowering the surface temperature. This could occur if natural processes were to remove the nitrogen from the atmosphere. Studies of organic sediments have shown that at least of nitrogen has been removed from the atmosphere over the past four billion years, which is enough to effectively double the current atmospheric pressure if it were to be released. This rate of removal would be sufficient to counter the effects of increasing solar luminosity for the next two billion years.
By 2.8 billion years from now, the surface temperature of the Earth will have reached , even at the poles. At this point, any remaining life will be extinguished due to the extreme conditions. What happens beyond this depends on how much water is left on the surface. If all of the water on Earth has evaporated by this point (via the "moist greenhouse" at ~1 Gyr from now), the planet will stay in the same conditions with a steady increase in the surface temperature until the Sun becomes a red giant. If not and there are still pockets of water left, and they evaporate too slowly, then in about 3–4 billion years, once the amount of water vapor in the lower atmosphere rises to 40%, and the luminosity from the Sun reaches 35–40% more than its present-day value, a "runaway greenhouse" effect will ensue, causing the atmosphere to warm and raising the surface temperature to around . This is sufficient to melt the surface of the planet. However, most of the atmosphere is expected to be retained until the Sun has entered the red giant stage.
With the extinction of life, 2.8 billion years from now, it is expected that Earth's biosignatures will disappear, to be replaced by signatures caused by non-biological processes.
Red giant stage
Once the Sun changes from burning hydrogen within its core to burning hydrogen in a shell around its core, the core will start to contract, and the outer envelope will expand. The total luminosity will steadily increase over the following billion years until it reaches 2,730 times its current luminosity at the age of 12.167 billion years. Most of Earth's atmosphere will be lost to space. Its surface will consist of a lava ocean with floating continents of metals and metal oxides and icebergs of refractory materials, with its surface temperature reaching more than . The Sun will experience more rapid mass loss, with about 33% of its total mass shed with the solar wind. The loss of mass will mean that the orbits of the planets will expand. The orbital distance of Earth will increase to at most 150% of its current value (that is, ).
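The roughly 50% orbital expansion quoted above follows from a standard approximation for slow, isotropic stellar mass loss, under which the product of the orbit's semi-major axis and the central mass stays roughly constant; the lines below simply restate the text's figure of about 33% of the Sun's mass being shed under that assumption.

```latex
% A standard approximation for slow, isotropic stellar mass loss:
%   a \cdot M_\odot \approx \text{constant}
% With about 33% of the Sun's mass shed, as stated above:
\frac{a_{\text{final}}}{a_{\text{initial}}} \approx \frac{M_{\text{initial}}}{M_{\text{final}}}
  = \frac{1}{1 - 0.33} \approx 1.5
```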
The most rapid part of the Sun's expansion into a red giant will occur during the final stages, when the Sun will be about 12 billion years old. It is likely to expand to swallow both Mercury and Venus, reaching a maximum radius of . Earth will interact tidally with the Sun's outer atmosphere, which would decrease Earth's orbital radius, and drag from the Sun's chromosphere would shrink the orbit further. These effects will counterbalance the impact of mass loss by the Sun, and the Sun will likely engulf Earth about 7.59 billion years from now.
The drag from the solar atmosphere may cause the orbit of the Moon to decay. Once the orbit of the Moon closes to a distance of , it will cross Earth's Roche limit, meaning that tidal interaction with Earth would break apart the Moon, turning it into a ring system. Most of the ring material will then decay, and the debris will impact Earth. Hence, even if the Sun does not swallow the Earth, the planet may be left moonless. Furthermore, the ablation and vaporization caused by Earth's fall on a decaying trajectory towards the Sun may remove its mantle, leaving just its core, which will finally be destroyed after, at most, 200 years. Earth's sole legacy will be a very slight increase (0.01%) of the solar metallicity following this event.
Beyond and ultimate fate
After fusing helium in its core to carbon, the Sun will begin to collapse again, evolving into a compact white dwarf star after ejecting its outer atmosphere as a planetary nebula. The predicted final mass is 54% of the present value, most likely consisting primarily of carbon and oxygen.
Currently, the Moon is moving away from Earth at a rate of per year. In 50 billion years, if the Earth and Moon are not engulfed by the Sun, they will become tidally locked into a larger, stable orbit, with each showing only one face to the other. Thereafter, the tidal action of the Sun will extract angular momentum from the system, causing the orbit of the Moon to decay and the Earth's rotation to accelerate. In about 65 billion years, it is estimated that the Moon may collide with the Earth, as the remaining energy of the Earth–Moon system is sapped by the remnant Sun, causing the Moon to slowly move inwards toward the Earth.
Beyond this point, the ultimate fate of the Earth (if it survives) depends on the course of events over immense time scales. On a time scale of 10^15 (1 quadrillion) years, the remaining planets in the Solar System will be ejected from the system by close encounters with other stellar remnants, and Earth will continue to orbit through the galaxy for around 10^19 years before it is ejected or falls into a supermassive black hole. If Earth is not ejected during a stellar encounter, then its orbit will decay via gravitational radiation until it collides with the Sun in 10^20 (100 quintillion) years. If proton decay can occur and Earth is ejected to intergalactic space, then it will last around 10^38 (100 undecillion) years before evaporating into radiation.
Impact of the COVID-19 pandemic on the environment
The COVID-19 pandemic has had an impact on the environment, with changes in human activity leading to temporary changes in air pollution, greenhouse gas emissions and water quality. As the pandemic became a global health crisis in early 2020, various national responses including lockdowns and travel restrictions caused substantial disruption to society, travel, energy usage and economic activity, sometimes referred to as the "anthropause". As public health measures were lifted later in the pandemic, its impact has sometimes been discussed in terms of effects on implementing renewable energy transition and climate change mitigation.
With the onset of the pandemic, some positive effects on the environment as a result of human inactivity were observed. In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. In April 2020, emissions fell by up to 30%. In China, lockdowns and other measures resulted in a 26% decrease in coal consumption, and a 50% reduction in nitrogen oxide emissions. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions, with the direct impact of pandemic policies having a negligible long-term impact on climate change.
Some developed nations introduced so-called "green recovery" economic stimulus packages, aiming to boost economic growth while facilitating renewable energy transition. One of these investments was the European Union's seven-year €1 trillion budget proposal and €750 billion recovery plan, "Next Generation EU", which seeks to reserve 25% of EU spending for climate-friendly expenditure.
However, the pandemic also diverted attention and resources from ongoing environmental harms, such as accelerated deforestation of the Amazon rainforest and increased poaching in parts of Africa. The hindrance of environmental policy efforts, combined with the economic slowdown, may have contributed to slowed investment in green energy technologies.
The pandemic also led to increased medical waste. The production and use of medical equipment such as personal protective equipment (PPE) contributed to plastic waste, as the medical response required a larger than normal number of masks, gloves, needles, syringes, and medications. During 2020, approximately 65 billion gloves and 129 billion face masks were used and disposed of every month. Enforced public use of PPE has posed challenges to conventional waste management. Greenhouse gas emissions associated with this plastic waste, from production through disposal, ranged from 14 to 33.5 tons of CO2 per ton of masks, with the largest share coming from production and transport.
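As a rough sense of scale, the sketch below combines the usage figure and emission factor quoted above with an assumed average single-use mask mass of about 3.5 g; the per-mask mass is not given in the text, so the output is only an illustrative order-of-magnitude estimate.

```python
# Illustrative order-of-magnitude estimate; the per-mask mass is an assumption.
MASKS_PER_MONTH = 129e9               # masks used per month (figure from the text)
ASSUMED_MASK_MASS_KG = 0.0035         # assumed ~3.5 g per single-use mask (not from the text)
EMISSION_FACTOR_RANGE = (14.0, 33.5)  # t CO2 per t of masks (figure from the text)

waste_tonnes = MASKS_PER_MONTH * ASSUMED_MASK_MASS_KG / 1000.0
low = waste_tonnes * EMISSION_FACTOR_RANGE[0]
high = waste_tonnes * EMISSION_FACTOR_RANGE[1]
print(f"Assumed mask waste: ~{waste_tonnes / 1e6:.2f} million tonnes per month")
print(f"Associated CO2: roughly {low / 1e6:.0f} to {high / 1e6:.0f} million tonnes per month")
```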
Background
Environmental issues
Increasing amounts of greenhouse gases since the beginning of the industrial era have caused average global temperatures on the Earth to rise. Climate change has led to the melting of glaciers, an increase in extreme weather, loss of species, frequent wildfires, and rising sea levels. Prior to the COVID-19 pandemic, measures that were expected to be recommended by health authorities in the case of a pandemic included quarantines and social distancing. Researchers predicted that a reduction in economic activity would also ease some of the pressures driving global warming, diminish air and marine pollution, and benefit the environment. The relationship between human activity and the environment had been observed in public health crises in the past, such as the Spanish flu and smallpox epidemics, and was observed again during the COVID-19 pandemic.
COVID-19 pandemic
On 11 March 2020, the outbreak of COVID-19 was declared a pandemic by the World Health Organization (WHO). By 5 July 2020, 188 countries or regions had reported cases of COVID-19. As of November 2021, the continuing COVID-19 pandemic had killed over 5 million people. As a result of the severity of the virus, most countries enacted lockdowns to protect people, mitigate the spread of the virus, and ensure space in hospitals. These lockdowns disrupted daily life worldwide, decreasing the level and frequency of human activity and production.
COVID-19 forced industries, businesses, and large corporations to shut down. Although the damage caused to human life, the economy, and society was extensive, the dramatic changes to human activity had an impact on the environment. Near-real-time daily CO2 emission inventories, constructed from activity data for power generation (in 29 countries), industry (in 73 countries), road transportation (in 406 cities), aviation and maritime transportation, and the commercial and residential sectors (in 206 countries), showed a clear reduction in emissions during the pandemic, complementing earlier estimates based on monthly energy supply data. This decline in CO2 emissions was accompanied by a decline in regional nitrogen oxide concentrations, observed by ground-based networks and satellites. Despite these reductions, researchers calculated that the effect on the observed global CO2 concentration was small (less than 0.13 ppm by April 30, 2020).
Reductions in fossil fuel consumption and economic activity were recorded as a result of travel restrictions, business closures, and other responses to COVID-19. As human activity slowed globally, a substantial decrease in fossil fuel use, resource consumption, and waste disposal was observed, generating less air and water pollution in many regions of the world. In particular, there was a sharp and lasting decline in scheduled air travel and vehicle transportation throughout the COVID-19 pandemic, which in effect reduced net carbon emissions across the globe.
With the impact being noted, some researchers and officials called for biodiversity and environmental protections as part of COVID-19 recovery strategies.
Air quality
Due to the pandemic's impact on travel and industry, the planet as a whole experienced a decrease in air pollution. A reduction in air pollution may mitigate both climate change and COVID-19 risks, but it has not yet been established which types of air pollution, if any, are common risks to both. The Centre for Research on Energy and Clean Air reported that methods to contain the spread of SARS-CoV-2, such as quarantines and travel bans, resulted in a 25% reduction of carbon emissions in China. In the first month of lockdowns, China produced approximately 200 million fewer metric tons of carbon dioxide than in the same period in 2019, due to reductions in air traffic, oil refining, and coal consumption. In the same period, car travel fell by 70% in the UK. One Earth systems scientist estimated that this reduction may have saved at least 77,000 lives. However, Sarah Ladislaw from the Center for Strategic & International Studies argued that reductions in emissions resulting from economic downturns should not be viewed as beneficial, because China's return to previous rates of growth amidst trade wars and supply chain disruptions in the energy market would worsen its environmental impact. Additionally, Nature reported that in 2020, global carbon emissions fell by only 6.4%.
Between 1 January and 11 March 2020, the European Space Agency observed a marked decline in nitrogen dioxide emissions from cars, power plants, and factories in the Po Valley region in northern Italy, coinciding with lockdowns in the region. Throughout areas of North India such as Jalandhar, the Himalayas became visible again for the first time in decades as air pollution dropped and air quality improved.
During the initial phase of the COVID-19 pandemic, NASA and the ESA monitored the significant decrease in nitrogen dioxide gases in China. The economic slowdown from the virus drastically reduced pollution levels, especially in cities like Wuhan, by 25–40%. NASA used its Ozone Monitoring Instrument (OMI) to analyze and observe the ozone layer as well as pollutants such as NO2, aerosols, and other chemicals, and to process and interpret the data coming in as lockdowns took effect worldwide. According to NASA scientists, the drop in NO2 pollution began in Wuhan and slowly spread to the rest of the world. The drop was especially sharp because the emergence of the virus coincided with the Lunar New Year celebrations in China, during which factories and businesses close for the last week of January. Even so, the drop in NO2 did not bring air quality in China up to standards considered acceptable by health authorities, and other pollutants in the air, such as aerosol emissions, remained.
In early 2020, improvements were observed in transboundary Southeast Asian haze, attributed to lockdowns and other restrictions introduced by governments, as well as favourable meteorological conditions.
Joint research led by scientists from China and the U.S. estimated that nitrogen oxide emissions decreased by 50% in East China from 23 January (the start of the Wuhan lockdown) to 9 February 2020, compared with the period from 1 to 22 January 2020. Emissions then increased by 26% from 10 February (the official back-to-work day) to 12 March 2020, indicating a likely rebound in socioeconomic activity after most provinces allowed businesses to reopen. Which COVID-19 control measures were most effective at limiting virus spread while causing the least socioeconomic impact remains to be investigated.
According to the World Health Organization, more than 80% of individuals living in cities are typically exposed to dangerous air pollution, which has been associated with an increased risk of COVID-19 problems and mortality.
The changes in air pollution during COVID lockdowns also affected water quality. Scientists have long noted that air quality and surface water quality are closely connected; however, the specific impact on water systems of the pandemic-era decrease in air pollution remains unclear. Most studies have found that the improvements due to COVID-19 were temporary, although notable decreases in pollutants were recorded in various water systems.
India
On 30 January 2020, the first COVID-19 case in India was recorded in Kerala in South India, and a nationwide lockdown followed from 25 March to 31 May 2020. The lockdown, and the restrictions it placed on industrial activity and traffic, brought a reduction in air pollution and an improvement in air quality that came as a relief to the environment. Many Indian cities observed a major reduction in air pollution. Even the industrial state of Gujarat, on the west coast of India, reported a remarkable reduction in air pollutants during the lockdown period from 25 March to 20 April 2020. In Vapi, major air pollutants such as nitrogen dioxide and sulphur dioxide decreased by one to two per cent, along with an average temperature reduction of 0.3 degrees Celsius, relative to 2019. Moreover, emissions of pollutants decreased by an average of fifty-one to seventy-two per cent, accompanied by an average temperature drop of two degrees Celsius during the lockdown period. The megacities Mumbai, Delhi, Chennai and Kolkata also reported temperature falls of 2, 3, 2 and 2.5 degrees Celsius respectively. Countrywide studies reported that the lockdown improved both water and air quality owing to a significant fall in air pollutants: emissions of carbon monoxide, ammonia, sulphur dioxide and nitrogen dioxide fell by 22.82%, 30.61%, 32.11% and 46.95% respectively, while PM2.5 and PM10 fell by 57.09% and 48.56% respectively, improving air quality over the period from the "Janta Curfew" of 22 March 2020 through the end of the lockdown on 31 May 2020.
Water quality
Atmosphere's impact on water quality
The vast reduction of nitrogen oxides in the atmosphere was seen far beyond the industrial borders of China. The metropolitan centers of New York, Paris, and London recorded 40% declines in nitrogen dioxide in the first two weeks of spring 2020 compared with the prior year. In March 2020, Los Angeles (notorious for both traffic and smog) saw a 20% improvement in air quality due to the quarantine. In the San Francisco Bay Area, traffic was down 45%, producing a stark contrast in carbon dioxide emissions compared with previous years.
Scientists have long understood that water particles in the atmosphere react chemically with carbon dioxide, sulfur oxides and nitrogen oxides; the result of this mixing is acid rain. Acid rain falls into rivers and lakes, where it harms aquatic life, so air quality and water quality are closely linked. Strong correlations between simultaneous improvements in air and water quality were again witnessed during the COVID-19 pandemic.
United States
Numerous reports have documented that the increased usage of masks led to, "...an extra 8 million tons of plastic waste during the pandemic...", partly due to discarded facial masks that were worn in an effort to stem the spread of COVID-19 from person to person via airborne transmission.
The onset of COVID-19 in the United States improved air quality, and the improvement in air quality led in turn to improvements in water quality. For example, in the San Francisco Bay, notable reductions in water pollution were observed, which experts attributed to the absence of traffic during the pandemic. Additionally, studies of the relationship between the COVID-19 pandemic and atmospheric NO2 concentrations in New York City revealed that air quality improved significantly during the pandemic, consistent with the link between air quality and water quality.
In April 2020, Oregon State University launched a public health project named TRACE-COVID-19, which performed over 60,000 individual tests and 3,000 wastewater tests throughout Oregon communities. The purpose of the project was to determine the community prevalence of COVID-19 and ultimately aimed to both lower the risk and slow the spread of the virus. The data collected from the TRACE program was used to help officials decide what public health actions they should take.
A 2-month study about vehicular travel in Massachusetts in 2020 revealed a 71% and 46% reduction in car and truck traffic, respectively. The significant decrease in traffic correlated with a direct reduction in atmospheric levels of harmful particulates, resulting in a decrease in overall air pollution. As seen in other instances, the atmospheric particulate reductions led to an improvement in water quality.
Peru
The Peruvian jungle experienced 14 oil spills from the beginning of the pandemic through early October 2020. Of these, eight spills were in a single sector (Block 192) operated by Frontera Energy del Perú S.A. which ceased operations during the pandemic and failed to maintain its wells and pipes. The oil seeped into the ground where it contaminated the drinking water of the indigenous people in Quichua territory. Oil spills in the Peruvian Amazon have been a problem for decades, leaking toxic metals and hydrocarbons into the drinking water and surrounding environment. A 2016 study done on 1,168 people living near Block 192 indicated that 50% of those tested had toxic metals (lead, arsenic, mercury, and cadmium) in their blood at levels above WHO acceptable limits. As a result of these oil spills, the Quichua people of Nueva Andoas were at a particularly high risk for diseases before the pandemic. Further compounded by a lack of medicine, lack of doctors, lack of access to vaccines, and poor government response, the Indigenous people of the Peruvian Amazon were in an extremely vulnerable position and at high risk during the pandemic.
Italy
In Venice, shortly after quarantine began in March 2020, water in the canals cleared and experienced greater flow. The increase in water clarity was primarily caused by a decrease in boat traffic, which allowed the normally stirred-up sediment to remain on the floor of the canals. During the initial onset of the coronavirus in Italy, organizations such as the European Space Agency documented the striking change in the Venetian canals as the country locked down. Two satellite images, one taken on April 19, 2019, and the other on April 13, 2020, showed the water in the canals transitioning from a paler, teal coloration to a deeper blue, reflecting the improving condition of the water as the coronavirus spread across the country. Through the Copernicus Sentinel-2 mission, the agency's images captured the benefit of reduced boat traffic on Venice's waterways and highlighted that, despite the decline in tourists as the city shut down, the canals contained water far cleaner and safer for organisms and consumption than was previously the case. While the water in the Venetian canals cleared up due to the decrease in boat traffic and pollution, marine life returned to the area in far smaller numbers than widely believed. Although numerous social media posts depicted dolphins and other oceanic creatures venturing back to Venice's shores, National Geographic debunked these rumors, showing that the images had been captured elsewhere and that hopes that COVID-19 had brought healthier waters and a re-emergence of wildlife were unfounded. Misinformation such as the claims about animals returning to Venice's waterways has given people a distorted image of both the ongoing pandemic and the climate change crisis, concealing growing problems such as the city's current low tides.
India
In India, more than 28 million people were affected by the rapid transmission of the COVID-19 virus, and as a result the Government of India placed the whole country under a full lockdown. While many suffered under these circumstances, both socially and financially, environmental researchers observed significant improvements in environmental quality during the slowdown in human activity and travel. A metadata analysis of river water quality (RWQ) indicated that the Damodar River, which flows through an urban-industrial area, had improved in quality as pollution declined. A second study of the Damodar, conducted in January 2021, revealed a significant change in water quality during the pandemic. In the pre-lockdown period, the Water Pollution Index (WPI) of samples from the river fell between 1.59 and 2.46, indicating a high level of pollution. In contrast, during the lockdown, the WPI of samples ranged from 0.52 to 0.78, indicating 'good' or only 'moderately polluted' water. The significant improvement in the WPI suggested that the shutdown of heavy industries and the resulting reduction in toxic pollutants led to an increase in water quality.
Similar to the Damodar, the Ganga experienced significant improvements in water quality: dissolved oxygen (DO) levels increased, while biochemical oxygen demand (BOD) and nitrate concentrations decreased. The nationwide lockdown and the accompanying shutdown of major industries improved not only river quality but also the quality of polluted creeks, with waste inflow reduced by up to 50% in some regions. Both studies point to a significant improvement in water quality as a result of India's complete lockdown, driven largely by the decrease in sewage and industrial wastewater discharged into the rivers; because the Damodar flows through an industrial area whose activity changed drastically under lockdown, water quality measurements taken before and during the pandemic differed accordingly.
In addition to the above studies, research on India's longest lake, Vembanad Lake, in April 2020, showed that suspended particulate matter concentration decreased by 16% during early lockdowns.
China
As the first country affected by the pandemic, China had to adopt new health and safety restrictions quickly, beginning in January 2020. As in other countries, numerous large industries in China shut down during the COVID-19 lockdown, and water quality improved significantly as a result. Monthly field measurements of river water quality in China showed improvements in several indicators: ammonia nitrogen (NH3-N) was the first to fall rapidly after the lockdown, while dissolved oxygen (DO) and chemical oxygen demand (COD) started to improve in early February 2020, and the pH of the river water began to increase in late March 2020. A study by Dong Liu, Hong Yang, and Julian R. Thompson found that after the lockdown was lifted, all water quality parameters returned to normal conditions. Because conditions improved only during a temporary lockdown, the study suggested that future pollutant reduction strategies should be location-specific and sustained in order to protect the environment.
South Africa
During the pandemic, developing countries in Africa lacked the infrastructure, equipment, facilities, and trained staff to carry out widespread testing for COVID-19, so wastewater surveillance was used to highlight hotspot areas, particularly in South Africa. Testing of municipal and industrial wastewater, surface water (rivers, canals, dams), and drinking water revealed where SARS-CoV-2 viral RNA was present. Traces of SARS-CoV-2 RNA were found in wastewater treatment facilities in the first phases of treatment, but none was detected once the water had been treated. While the treated water was safe for drinking and other uses, wastewater draining from treatment facilities into rivers or seas could still have carried some SARS-CoV-2 RNA, although at levels too low to detect, making this route unlikely. No SARS-CoV-2 RNA was detected in any other water source, leading the scientists to conclude that the pandemic had done no notable harm to water quality in South Africa.
Morocco
The COVID-19 lockdown had a positive effect for the water quality of the Boukhalef River in northern Morocco. Researchers used Sentinel 3 water surface temperature (WST) values to test several locations along the Boukhalef River before and after the lockdown. Before the lockdown there were high WST values indicating poor water quality at these sites. However, after the lockdown, industrial activities greatly reduced their production and subsequent polluting of the water. As a result, there were normal WST values indicating normal water quality in the same sites.
England
A study of water use based on the CityWat-SemiDistributed (CWSD) system analyzed how the COVID-19 lockdown affected the water supply in England. Increases in household water consumption were attributed to increased use of appliances and to preventative measures such as hand washing during lockdowns; the decrease in activity outside the home was associated with a 35% increase in household water use. As in other countries, England saw a decrease in transportation, such as daily commuting, in large cities, which shifted pollution concentration zones. The rivers in London became less polluted, but water quality worsened near people's households: continued pollution of the larger rivers was minimized, while pollution of smaller rivers in suburban areas increased.
Ecuador
During the pandemic, surveys were distributed and data was collected in Ecuador to study the water quality of the ocean. Preliminary data suggested that the water appeared clearer and cleaner because of the lack of people swimming and visiting the beaches. Residents of the Salinas beach were surveyed on the quality of the water twice, 10 weeks apart, during quarantine. Using a 1-5 scale, with 1 being the worst quality and 5 being the best, participants said that during the 10 weeks, the quality went from a 2.83 to a 4.33. Off the coast of Ecuador, the Galapagos Islands also saw improvements in water quality during the pandemic. Researchers noticed the presence of more turtles, sea lions and sharks in the water because of the lack of pollution.
Unfortunately, sanitary water conditions became a concern in Ecuador during the COVID-19 pandemic. It was suggested that SARS-CoV-2 could be contracted through fecal matter from wastewater treatment plants. In Ecuador, only 20% of wastewater was treated before being discharged back into the water. The urban area of Quito, Ecuador was particularly affected by the lack of wastewater treatment. Its population of 3 million citizens represented an under-diagnosed demographic. At the time of testing, reports claimed that only 750 citizens were infected with COVID-19, but actual wastewater contamination showed a larger percentage of the population infected. Improper wastewater management during the COVID-19 pandemic may have infected Ecuador's citizens through water contamination.
Nepal
The Bagmati River passes through the Nepalese capital of Kathmandu and, with its tributaries, comprises a water basin that spans the Kathmandu valley. A July 2021 study revealed that the Bagmati River basin saw considerable improvements in water quality during the COVID-19 pandemic: reduced human activity cut biological oxygen demand, an important indicator of bacterial levels in water, to roughly two-thirds of its pre-lockdown level (a reduction by a factor of about 1.5).
Egypt
A reduction in human activities due to COVID-19 mitigation measures resulted in less industrial wastewater dumping in the Nile River, the Nile's canals and tributaries, the Nile Delta, and several lakes in Egypt. Additionally, fewer tourist ships sailed the Nile, thereby minimizing the frequency of oil and gas spills. A decrease in shipping traffic through the Suez Canal also helped improve its water quality. Similar reductions in wastewater dumping and shipping traffic contributed to improving the quality of Egypt's coastal Mediterranean waters as well. After the onset of the pandemic, residents in Egyptian villages needed to purify their own water. The Zawyat Al-Na’ura village, for example, used ultraviolet rays as a water purification technique.
Water demand
Water demand was affected by the pandemic in myriad ways. Practicing good hygiene was one of the main measures recommended to combat the pandemic, and frequent hand washing with soap and water for 20 seconds, disinfecting surfaces, and cleaning food containers as they came into the home all increased the demand for water.
Residential areas
Water demand increased in residential areas due to mandated lockdowns that kept people at home. For example, home water use in Portsmouth, England increased by 15%, while non-residential use decreased by 17%. The increased water usage at home led to higher residential water bills, exacerbating financial stress for those affected by the stay-at-home lockdowns mandated during the pandemic.
Desert-like areas
While some regions benefitted from lockdowns, water-scarce regions suffered severely. In Nevada, for example, water usage increased by 13.1% within the first month of quarantine even as businesses used substantially less water, and water usage at academic institutions declined by 66.2%. Across all water sectors, there was a cumulative 3.3% uptick in overall water usage during the first month of quarantine. Consequently, there were efforts to restrict household water usage because of the region's already scarce water supply, including water rationing and other limits on residential use, such as restrictions on watering lawns.
Industrial sector
Numerous public buildings were shut down for significant periods during the pandemic. These shutdowns caused water quality problems such as mold growth in water standing in pipes and leaching, which became a concern as non-residential demand returned to normal levels when the shutdowns ended. The effects varied depending on the makeup of the non-residential sector; however, changes in water demand were seen as a whole. The changes in water demand also had notable impacts on water utilities: utilities experienced significant revenue losses as total water usage dropped in many areas, while large numbers of water bills went unpaid as businesses and non-commercial customers struggled financially. Some companies offered overtime and hazard pay to their employees as their work became increasingly essential, which increased operational costs. Industries that were part of the water supply chain also saw revenue losses as industrial water demand declined.
Underdeveloped countries
In regions already facing barriers to water access across the globe, such as the Democratic Republic of the Congo and Yemen, the pandemic exacerbated challenges. These preexisting inequalities relating to infrastructure and water access were likely a factor contributing to disparate impacts of the pandemic. The World Health Organization and UNICEF strongly recommended sanitary hand washing facilities to be the bare minimum for fighting COVID-19 and suggested that lack of access to these necessary facilities (for over 74 million people in the Arab regions) was responsible for putting people at very high risk of contracting COVID-19.
In some undeveloped countries, water utilities have worked with governments to temporarily suspend billing for vulnerable groups. This was an effort to mitigate the impact of using extra water during the pandemic while people were out of work. The implementation of this process caused a huge loss in revenue for water companies.
Wildlife
Fish prices and demand for fish decreased due to the pandemic in early 2020, and fishing fleets around the world sat mostly idle. German scientist Rainer Froese has said the fish biomass will increase due to the sharp decline in fishing, and projected that in European waters, some fish, such as herring, could double their biomass. As of April 2020, signs of aquatic recovery remain mostly anecdotal.
As people stayed at home due to lockdown and travel restrictions, many types of animals have been spotted roaming freely in cities. Sea turtles were spotted laying eggs on beaches they once avoided (such as the coast of the Bay of Bengal), due to lower levels of human interference and light pollution. In the United States, fatal vehicle collisions with animals such as deer, elk, moose, bears, mountain lions fell by 58% during March and April 2020. In Glacier National Park scientists noted considerable changes in wildlife behavior due to the massive decline in the presence of humans (in effect an involuntary park within a national park).
Conservationists expected that African countries would experience a massive surge in bush meat poaching. Matt Brown of the Nature Conservancy said that "When people don't have any other alternative for income, our prediction -- and we're seeing this in South Africa -- is that poaching will go up for high-value products like rhino horn and ivory." On the other hand, Gabon decided to ban the human consumption of bats and pangolins, to stem the spread of zoonotic diseases, as SARS-CoV-2 was thought to have transmitted itself to humans through these animals. Pangolins are no longer thought to have transmitted SARS-CoV-2. In June 2020, Myanmar allowed breeding of endangered animals such as tigers, pangolins, and elephants. Experts fear that the Southeast Asian country's attempts to deregulate wildlife hunting and breeding may create "a New Covid-19."
In 2020, a worldwide study on mammalian wildlife responses to human presence during COVID lockdowns found complex patterns of animal behavior. Carnivores were generally less active when humans were around, while herbivores in developed areas were more active. Among other findings, this suggested that herbivores may view humans as a shield against predators, highlighting the importance of location and human presence history in understanding wildlife responses to changes in human activity in a given area.
Infections
A wide variety of largely mammalian species, both captive and wild, have been shown to be susceptible to SARS-CoV-2, with some encountering particularly fatal outcomes. In particular, both farmed and wild mink have developed highly symptomatic and severe COVID-19 infections, with a mortality rate as high as 35–55% according to one study. White-tailed deer, on the other hand, have largely avoided severe outcomes but have effectively become natural reservoirs of the virus, with large numbers of free-ranging deer infected throughout the US and Canada, including approximately 80% of Iowa's wild deer herd. An August 2023 study appeared to confirm the status of white-tailed deer as a disease reservoir, noting that the viral evolution of SARS-CoV-2 in deer occurs at triple the rate of its evolution in humans and that infection rates remained high, even in areas rarely frequented by humans.
Deforestation and reforestation
Due to the sharp decrease in job opportunities during the pandemic, many unemployed individuals were hired by illegal deforestation operations throughout the world, particularly in the tropics. According to deforestation alerts from Global Land Analysis & Discovery (GLAD), a total of 9,583 km² of deforested land was detected across the global tropics during the first month following the establishment of COVID-19 precautions, approximately twice the area detected in the same period of 2019 (4,732 km²). The disruption from the pandemic provided cover for illegal deforestation operations in Brazil, which reached a 9-year high; satellite imagery showed deforestation of the Amazon rainforest surging by over 50% compared to baseline levels. Conversely, unemployment caused by the COVID-19 pandemic facilitated the recruitment of laborers for Pakistan's 10 Billion Tree Tsunami campaign to plant 10 billion trees – the estimated global annual net loss of trees – over the span of 5 years. Because many enforcement authorities were idled by the pandemic, poaching became much more common during 2020 and 2021. In Colombia, illegal activities and wildfires were the two biggest factors contributing to the further destruction of the rainforests.
Deforestation also affects access to clean drinking water. One study showed that a 1% increase in deforestation decreases access to clean drinking water by 0.93%. Deforestation lowers water quality because it reduces the soil's infiltration of water and increases runoff, which raises the turbidity of the water; in countries that cannot afford drinking water treatment, this poses a significant problem.
Climate change
Societal shifts caused by the COVID-19 lockdowns – such as the adoption of remote work policies and virtual events – may have a more sustained impact beyond the short-term reduction in transportation usage. In a study published in September 2020, scientists estimated that such behavioral changes developed during confinement could permanently cut about 15% of transport-related CO₂ emissions.
Despite this, the concentration of carbon dioxide in the atmosphere was the highest ever recorded in human history in May 2020. Energy and climate expert Constantine Samaras states that "a pandemic is the worst possible way to reduce emissions" and that "technological, behavioral, and structural change is the best and only way to reduce emissions". Tsinghua University's Zhu Liu clarifies that "only when we would reduce our emissions even more than this for longer would we be able to see the decline in concentrations in the atmosphere". The world's demand for fossil fuels decreased by almost 10% amid COVID-19 measures and reportedly many energy economists believe it may not recover from the crisis.
Impact on climate
In a study published in August 2020, scientists estimated that global NOx emissions declined by as much as 30% in April 2020, but that this was partly offset by a roughly 20% reduction in global SO₂ emissions, which weakened the aerosol cooling effect. They concluded that the direct effect of the pandemic response on global warming will likely be negligible, with an estimated cooling of around 0.01 ± 0.005 °C by 2030 compared to a baseline scenario, but that indirect effects, if the economic recovery is tailored towards stimulating a green economy (for example by reducing fossil fuel investments), could avoid future warming of 0.3 °C by 2050. The study indicates that systemic change in how humanity powers and feeds itself is required for a substantial impact on global warming.
In October 2020 scientists reported, based on near-real-time activity data, an 'unprecedented' abrupt 8.8% decrease in global CO₂ emissions in the first half of 2020 compared to the same period in 2019, larger than during previous economic downturns and World War II. Authors note that such decreases of human activities "cannot be the answer" and that structural and transformational changes in human economic management and behaviour systems are needed.
In January 2021, scientists reported that the reductions in air pollution due to worldwide COVID-19 lockdowns in 2020 were larger than previously estimated. They concluded that, because of the pandemic's effect on emissions that year, Earth's climate experienced a slight warming during 2020 rather than a slight cooling; climate models were used to identify these small impacts, which could not be discerned from observations alone. The study's lead author noted that aerosol emissions into the lower atmosphere have major health ramifications and cannot be part of a viable approach to mitigating global warming. In contrast, aerosol emissions into the upper atmosphere are not thought to be a health risk, but their environmental impact has not yet been properly researched.
Despite a decrease in anthropogenic methane emissions, methane levels in the atmosphere increased. Researchers attributed this rise, despite the reduction in human methane emissions, to an increase in wetland methane emissions together with the effect of reduced nitrogen oxide emissions, which slowed the atmospheric breakdown of methane.
Fossil fuel industry
A report by the London-based think tank Carbon Tracker concluded that the COVID-19 pandemic may have pushed the fossil fuel industry into "terminal decline", as demand for oil and gas decreases while governments aim to accelerate the clean energy transition. It predicted that an annual 2% decline in demand for fossil fuels could cause the future profits of oil, gas and coal companies to collapse from an estimated $39tn to $14tn. However, according to Bloomberg New Energy Finance, more than half a trillion dollars worldwide was earmarked for high-carbon industries. Preliminary disclosures from the Bank of England's Covid Corporate Financing Facility indicated that billions of pounds of taxpayer support were to be funneled to fossil fuel companies, and according to Reclaim Finance the European Central Bank intended to allocate as much as €220bn (£193bn) to fossil fuel industries. An assessment by Ernst & Young found that a stimulus program focused on renewable energy and climate-friendly projects could create more than 100,000 direct jobs across Australia; it estimated that every $1m spent on renewable energy and exports creates 4.8 full-time jobs in renewable infrastructure, while $1m spent on fossil fuel projects creates only 1.7 full-time jobs.
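The gap in the Ernst & Young figures quoted above can be made concrete with a small calculation; only the two jobs-per-$1m rates come from the text, and the $50m stimulus amount below is hypothetical.

```python
# Hypothetical comparison using the Ernst & Young jobs-per-$1m figures quoted above.
JOBS_PER_MILLION_USD = {
    "renewable energy and exports": 4.8,  # figure from the text
    "fossil fuel projects": 1.7,          # figure from the text
}
HYPOTHETICAL_STIMULUS_MILLIONS = 50       # assumed spending level, not from the text

for sector, jobs_rate in JOBS_PER_MILLION_USD.items():
    jobs = jobs_rate * HYPOTHETICAL_STIMULUS_MILLIONS
    print(f"${HYPOTHETICAL_STIMULUS_MILLIONS}m on {sector}: ~{jobs:.0f} full-time jobs")
```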
In addition, due to the effects of the COVID-19 pandemic on the fossil fuel and petrochemical industry, natural gas prices dropped so low for a short time that gas producers were burning it off on-site, since it was not worth the cost of transporting it to cracking facilities. Bans on single-use consumer plastics (in China, the European Union, Canada, and many countries in Africa) and bans on plastic bags (in several US states) also reduced demand for plastics considerably, and operations at many cracking facilities in the USA were suspended. The petrochemical industry has tried to protect itself by attempting to rapidly expand demand for plastic products worldwide, for example by pushing back against plastic bans and by increasing the number of products wrapped in plastic in countries where plastic use is not yet widespread, such as developing nations.
Cycling
During the pandemic, many people started cycling, causing bike sales to surge. Many cities set up semi-permanent "pop-up bike lanes" to provide more room for people who switched from public transit to bicycles. Many individuals chose cycling out of heightened anxiety over public transportation, which could be crowded and so raised fears of catching COVID-19. Exercise also became more popular during the pandemic, as lockdowns and the mass unemployment they caused left many people with more time. These factors led to a "bike boom". In Berlin, there are proposals to make the initially reversible changes permanent.
Retail and food production
Food production
Small-scale farmers have been embracing digital technologies as a way to directly sell produce, and community-supported agriculture and direct-sell delivery systems are on the rise. These methods have benefited smaller online grocery stores which predominantly sell organic and more local food and can have a positive environmental impact due to consumers who prefer to receive deliveries rather than travel to the store by car. Online grocery shopping has grown substantially during the pandemic.
While carbon emissions dropped during the pandemic, methane emissions from livestock continued to rise. Methane is a more potent greenhouse gas than carbon dioxide.
Retail
Due to lockdowns and COVID-19 protocols, many consumers switched to online shopping during the pandemic, which resulted in a 32% increase in e-commerce and a corresponding increase in packaging waste. Many online purchases were for essential items; however, 45% of shoppers made non-essential purchases, such as clothing. There is ongoing debate, with no clear conclusion, about whether online shopping is more environmentally friendly than shopping in stores: both have aspects that help and hurt the environment. For example, shipping products to individual consumers can be just as detrimental to the environment as powering a brick-and-mortar shop. Another factor is that 20% of online returns ended up in landfills because they could not be resold as new merchandise.
Litter
The substantial increase in plastic waste during the COVID-19 pandemic became a major environmental concern, as increased demand for single-use plastics exacerbated an already significant plastic pollution problem. Most of the new plastic found in oceans came from hospitals, shipping packaging, and personal protection equipment (PPE). In the first 18 months of the pandemic, approximately 8 million tons of such waste accumulated, a significant portion of it originating in the developing world; 72% of this waste came from Asia. The surplus waste was particularly concerning for the oceans and their wildlife, and accumulated mainly on beaches and in coastal regions.
In Kenya, the COVID-19 pandemic affected the amount of debris found on beaches; approximately 55.1% of the trash found was pandemic-related. Although this pandemic-related trash appeared along Kenya's beaches, it did not make its way into the water, which was thought to be the result of beach closures and the general lack of movement during the pandemic. Most of the litter washed up on the beaches consisted of fabric masks, production of which rose in Kenya for people who could not afford single-use masks. More people were buying fabric masks and then disposing of them improperly, which was the direct cause of many masks appearing on the coast and beaches and was given as a further reason for keeping the beaches closed during the pandemic.
Additional impacts of the pandemic were seen in Hong Kong, where disposable masks ended up along the beaches of Soko's islands. This was attributed to the increased production and use of disposable masks for personal and commercial use, which led to a rise in subsequent disposal of these products.
According to a study conducted by MIT, the pandemic was estimated to generate up to 7,200 tons of medical waste every day, much of it disposable masks. The data were collected during the first six months of the pandemic (late March 2020 to late September 2020) in the United States, and the calculations pertained only to healthcare workers, not to mask usage by the general public. Theoretically, if every health care worker in the United States wore a new N95 mask for every patient they encountered, approximately 7.4 billion masks would be required, at a cost of $6.4 billion, producing 84 million kg of waste. However, the same study found that decontaminating regular N95 masks so that they could be reused cut this waste by 75%, and that fully reusable silicone N95 masks could offer an even greater reduction. Another study estimated that in Africa over 12 billion medical and fabric face masks (about 105,000 tonnes) were discarded monthly.
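A quick back-of-the-envelope check of the MIT figures quoted above (7.4 billion masks, $6.4 billion, 84 million kg, and a 75% reduction from decontamination and reuse) is sketched below; the per-mask values are simple divisions of those quoted totals, not additional data from the study.

```python
# Back-of-the-envelope check of the figures quoted from the MIT study above.
N_MASKS = 7.4e9            # masks (from the text)
TOTAL_COST_USD = 6.4e9     # dollars (from the text)
TOTAL_WASTE_KG = 84e6      # kilograms (from the text)
DECON_REDUCTION = 0.75     # waste reduction from decontamination/reuse (from the text)

print(f"Implied cost per mask:  ${TOTAL_COST_USD / N_MASKS:.2f}")          # about $0.86
print(f"Implied waste per mask: {TOTAL_WASTE_KG / N_MASKS * 1000:.1f} g")  # about 11.4 g
print(f"Waste if masks are decontaminated and reused: "
      f"{TOTAL_WASTE_KG * (1 - DECON_REDUCTION) / 1e6:.0f} million kg")    # about 21 million kg
```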
The majority of masks used during the pandemic were disposed of properly, so, as with typical garbage, incineration was the final disposal method in most countries. Incineration generally produced two types of ash: a slag residue and an ash containing toxic substances (dioxins, plastic residues, and heavy metals). No stage of the waste incineration process could completely remove the harmful substances in the ash; these substances damage human health and cause irreversible harm to the Earth's ecological environment, and secondary pollution from incineration was often found in the air, food, and wastewater.
The quarantine restrictions implemented in many locations affected plastic waste volumes, as purchasing items, including food, online results in an increase in packaging waste. The pandemic also significantly affected domestic waste recycling systems. Temporary suspension of household waste collection in some jurisdictions, introduced to protect waste workers, reduced the supply of recyclable material, and in the United States 34% of recycling companies partially or completely closed. In many Asian countries, including India, Malaysia and Vietnam, only around one-third of recyclers continued daily operations due to anti-pandemic measures. Many informal waste pickers were seriously affected by stay-at-home orders and business closures, and the poverty of informal workers in developing countries is expected to increase by 56%. Pressure on the existing waste management infrastructure has also led to poor-quality waste management, including dumping and open burning: in 2020 illegal dumping in Dublin, Ireland, increased by 25%, and in the United Kingdom illegal waste disposal rose by 300%.
Investments and other economic measures
Some have noted that planned stimulus packages could be designed to speed up the renewable energy transition and to boost energy resilience. Researchers at the World Resources Institute have outlined a number of reasons for investing in public transport as well as cycling and walking during and after the pandemic. Use of public transport in cities worldwide fell by 50–90%, with substantial revenue losses for operators; investments such as heightened hygienic practices on public transport and appropriate social distancing measures may address public health concerns about its use. The International Energy Agency states that government support prompted by the pandemic could drive rapid growth in battery and hydrogen technology and reduce reliance on fossil fuels, and notes that the pandemic has illustrated the vulnerability of fossil fuels to storage and distribution problems.
According to a study published in August 2020, an economic recovery "tilted towards green stimulus and reductions in fossil fuel investments" could avoid future warming of 0.3 °C by 2050.
José Ángel Gurría, secretary-general of the OECD club of rich countries, called upon countries to "seize this opportunity [of the COVID-19 recovery] to reform subsidies and use public funds in a way that best benefits people and the planet".
In March 2020, the ECB announced the Pandemic Emergency Purchase Programme. Reclaim Finance said that the Governing Council had failed to integrate climate into both its "business as usual" monetary policy and its crisis response, and had ignored a call from 45 NGOs demanding that the ECB deliver a profound shift on climate integration at that decision-making meeting, even as the programme financed 38 fossil fuel companies, including 10 active in coal and 4 in shale oil and gas. Greenpeace stated that, by June 2020, the ECB's COVID-related asset purchases had already funded the fossil fuel sector by up to €7.6 billion.
The report Are We Building Back Better?, from Oxford University's Global Recovery Observatory, found that of the $14.6tn in spending announced by the world's 50 largest countries in 2020, $1.9tn (13%) was directed to long-term 'recovery-type' measures, and $341bn (18% of that long-term spending) was for green initiatives.
With the 2020 COVID-19 outbreak spreading rapidly within the European Union, the focus on the European Green Deal diminished. Some suggested either a yearly pause or even a complete discontinuation of the deal, and many argued that the main focus of the European Union's policymaking should be the immediate, shorter-term crisis rather than climate change. In May 2020, the €750 billion European recovery package, called Next Generation EU, and the €1 trillion budget were announced, with the European Green Deal as part of the package. One of the package's principles is "do no harm": the money will be spent only on projects that meet certain green criteria, 25% of all funding will go to climate change mitigation, and fossil fuels and nuclear power are excluded from the funding.
On March 11, 2021, Joe Biden signed the $1.9 trillion American Rescue Plan Act of 2021 into law. He also announced the Build Back Better Plan.
Some sources of revenue for environmental projects – such as indigenous communities monitoring rainforests and conservation projects – diminished due to the pandemic.
Despite a temporary decline in global carbon emissions, the International Energy Agency warned that the economic turmoil caused by the COVID-19 pandemic may prevent or delay companies and others from investing in green energy. Others cautioned that large corporations and the wealthy could exploit the crisis for economic gain in line with the Shock Doctrine, as has occurred after past pandemics.
Earth Overshoot Day took place more than three weeks later than in 2019, due to COVID-19-induced lockdowns around the world. The president of the Global Footprint Network claimed that the pandemic is itself one of the manifestations of "ecological imbalance".
Approximately 58% of enterprises in the European Union are concerned about the physical hazards of climate change, particularly in areas prone to extreme weather. In 2021, climate change was addressed by 43% of EU enterprises. Despite the pandemic, the percentage of enterprises planning climate-related investment has climbed to 47%, from 41% in 2020. Future investments, however, are put on hold by uncertainty about the regulatory environment and taxation.
According to a 2022 analysis of the $14tn that G20 countries have spent as economic stimulus in 2020 and 2021, only about 6% has been allocated to areas "that will also cut emissions" and 3% has targeted activities "that are likely to increase global emissions".
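To put the percentages in the 2022 analysis above into absolute terms, the short sketch below multiplies them against the roughly $14tn figure; nothing beyond the quoted numbers is assumed.

```python
# Rough absolute amounts implied by the 2022 analysis of G20 stimulus quoted above.
G20_STIMULUS_TRILLION_USD = 14.0  # total 2020-2021 stimulus (from the text)
SHARE_EMISSION_CUTTING = 0.06     # ~6% allocated to areas "that will also cut emissions"
SHARE_EMISSION_INCREASING = 0.03  # ~3% targeting activities likely to increase emissions

cutting_bn = G20_STIMULUS_TRILLION_USD * SHARE_EMISSION_CUTTING * 1000
increasing_bn = G20_STIMULUS_TRILLION_USD * SHARE_EMISSION_INCREASING * 1000
print(f"Emission-cutting spending:    ~${cutting_bn:.0f}bn")
print(f"Emission-increasing spending: ~${increasing_bn:.0f}bn")
```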
Analysis and recommendations
Multiple organizations and coalitions of organizations – think tanks, companies, business organizations, political bodies and research institutes, including the International Energy Agency, the Grantham Institute – Climate Change and Environment, and the European Commission – have produced their own analyses and recommendations for investments and related measures for a sustainability-oriented socioeconomic recovery from the pandemic at global and national levels. The United Nations' Secretary-General António Guterres recommended six broad sustainability-related principles for shaping the recovery.
According to a report commissioned by the High Level Panel for a Sustainable Ocean Economy and published in July 2020, investment in four key ocean intervention areas could help aid economic recovery and yield high returns on investment in terms of economic, environmental and health benefits. According to Jackie Savitz, chief policy officer for the American ocean conservation nonprofit Oceana, strategies such as "setting science-based limits on fishing so that stocks can recover, practicing selective fishing to protect endangered species and ensuring that fishing gear doesn't destroy ocean habitats are all effective, cost-efficient ways to manage sustainable fisheries".
Politics
The pandemic has also impacted environmental policy and climate diplomacy, as the 2020 United Nations Climate Change Conference was postponed to 2021 in response to the pandemic after its venue was converted to a field hospital. This conference was crucial as nations were scheduled to submit enhanced nationally determined contributions to the Paris Agreement. The pandemic also limits the ability of nations, particularly developing nations with low state capacity, to submit nationally determined contributions, as they focus on the pandemic.
Time highlighted three possible risks: that preparations for the November 2020 Glasgow conference planned to follow the 2015 Paris Agreement were disrupted; that the public would see global warming as a lower priority issue than the pandemic, weakening the pressure on politicians; and that a desire to "restart" the global economy would cause a surge in greenhouse gas production. However, the drop in oil prices during the COVID-19 recession could be a good opportunity to get rid of fossil fuel subsidies, according to the executive director of the International Energy Agency.
Carbon Tracker argues that China should not stimulate the economy by building planned coal-fired power stations, because many would have negative cashflow and would become stranded assets.
The United States' Trump administration suspended the enforcement of some environmental protection laws via the Environmental Protection Agency (EPA) during the pandemic. This allows polluters to ignore some environmental laws if they can claim that these violations were caused by the pandemic.
Popular reactions
Humour
Early in the pandemic, the perceived benefit to the environment caused by a slowdown in human activity led to the creation of memes. These memes generally made light of exaggerated or distorted claims of benefits to the environment, those overly credulous of these claims, and those who compared humanity to COVID, construing human civilization as a viral infection on Earth. Memes include the captioning images with phrases such as "nature is healing", "the Earth is healing", "we are the virus", or combinations of the phrases. One such joke, a tweet, featured a photo of a large rubber duck in the Thames with the text "nature is healing", construing the duck as a native species returning to the river in the absence of human activity.
Activism
In March 2020 in England, Wales and Northern Ireland, the National Trust initiated the #BlossomWatch campaign, which encouraged people to share images of the first signs of Spring, such as fruit tree blossoms, that they saw on lockdown walks.
In December 2021, the first reported case of animal-to-human transmission of SARS-CoV-2 in Hong Kong took place via imported pet hamsters. Researchers expressed difficulty in identifying some of the viral mutations within a global genomic data bank, leading city authorities to announce a mass cull of all hamsters purchased after December 22, 2021, which would affect roughly 2,000 animals. After the government 'strongly encouraged' citizens to turn in their pets, approximately 3,000 people joined underground activities to promote the adoption of abandoned hamsters throughout the city and to maintain pet ownership via methods such as the forgery of pet store purchase receipts. Some activists attempted to intercept owners who were on their way to turn in pet hamsters and encourage them to choose adoption instead; the government subsequently warned that such interference would be subject to police action.
Rebound effect
The restarting of greenhouse-gas producing industries and transport following the COVID-19 lockdowns was hypothesized as an event that would contribute to increasing greenhouse gas production rather than reducing it. In the transport sector, the pandemic could trigger several effects, including behavioral changes – such as more remote work and teleconferences and changes in business models – which could, in turn, translate into reductions of emissions from transport. A scientific study published in September 2020 estimates that sustaining such behavioral changes could abate 15% of all transport emissions with limited impacts on societal well-being. On the other hand, there could be a shift away from public transport, driven by fear of contagion, and reliance on single-occupancy cars, which would significantly increase emissions. However, city planners are also creating new cycle paths in some cities during the pandemic. In June 2020, it was reported that carbon dioxide emissions were rebounding quickly.
The Organisation for Economic Co-operation and Development recommends governments continue to enforce existing air pollution regulations after the COVID-19 crisis, and channel financial support measures to public transport providers to enhance capacity and quality with a focus on reducing crowding and promoting cleaner facilities.
Fatih Birol, executive director of the International Energy Agency, states that "the next three years will determine the course of the next 30 years and beyond" and that "if we do not [take action] we will surely see a rebound in emissions. If emissions rebound, it is very difficult to see how they will be brought down in future. This is why we are urging governments to have sustainable recovery packages."
In March 2022, before formal publication of the 'Global Carbon Budget 2021' preprint, scientists reported, based on Carbon Monitor data, that after record-level declines in 2020 caused by the COVID-19 pandemic, global emissions rebounded sharply by 4.8% in 2021, indicating that at the current trajectory the 1.5 °C carbon budget would likely be used up within 9.5 years.
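The 9.5-year figure is the result of dividing a remaining carbon budget by annual emissions. A minimal sketch of that division, where the budget and emissions values are illustrative assumptions rather than numbers taken from the study:

```python
# Illustrative only: both values below are assumptions, not figures
# from the Global Carbon Budget 2021 preprint.
remaining_budget_gt_co2 = 400.0   # assumed remaining 1.5 degC budget, Gt CO2
annual_emissions_gt_co2 = 42.0    # assumed current annual CO2 emissions, Gt CO2/yr

years_to_exhaustion = remaining_budget_gt_co2 / annual_emissions_gt_co2
print(f"budget used up in ~{years_to_exhaustion:.1f} years at constant emissions")
```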
Psychology and risk perception
Chaos and the negative effects of the COVID-19 pandemic made a catastrophic future seem less remote and action to prevent it more necessary and reasonable. However, it also had the opposite effect by putting the focus on more immediate issues of the pandemic rather than larger global issues, such as climate change and deforestation.
The improvements caused by human inactivity during lockdowns were not an indication that climate change was improving long-term or that climate saving methods should be postponed. However, several international climate change conventions were postponed and, in some cases, not rescheduled. Notable examples were the postponement of the COP26, the United Nations Climate Change Conference, the World Conservation Congress, the Convention on Biological Diversity, and the U.N. Ocean Conference. These conferences were originally created so nations around the world could make concrete plans to ensure the safety of future generations. Though climate improvements seen during the lockdown provided hope for the future, as humans returned to normal activity, these changes proved to be temporary.
Impact on environmental monitoring and prediction
Weather forecasts
The European Centre for Medium-Range Weather Forecasts (ECMWF) announced that a worldwide reduction in aircraft flights due to the pandemic could impact the accuracy of weather forecasts, citing commercial airlines' use of Aircraft Meteorological Data Relay (AMDAR) as an integral contribution to weather forecast accuracy. The ECMWF predicted that AMDAR coverage would decrease by 65% or more due to the drop in commercial flights.
Seismic noise reduction
Seismologists have reported that quarantine, lockdown, and other measures to mitigate COVID-19 have resulted in a mean global high-frequency seismic noise reduction of up to 50%. This study reports that the noise reduction resulted from a combination of factors including reduced traffic/transport, lower industrial activity, and weaker economic activity. The reduction in seismic noise was observed at both remote seismic monitoring stations and at borehole sensors installed several hundred metres below the ground. The study states that the reduced noise level may allow for better monitoring and detection of natural seismic sources, such as earthquakes and volcanic activity.
Noise pollution has been shown to negatively affect both humans and invertebrates. The WHO suggests that 100 million people in Europe are negatively affected by unwanted noise daily, resulting in hearing loss, cardiovascular disorders, loss of sleep, and negative psychological effects. During the pandemic, however, government-enforced travel mandates lowered car and plane movements, resulting in a significant reduction in noise pollution.
See also
Environmental impact of aviation
Green recovery
Impact of the COVID-19 pandemic
Impact of the COVID-19 pandemic on public transport
Pandemic prevention#Environmental policy and economics
Technosignature#Atmospheric analysis
The Year Earth Changed
References
Sources
External links
COVID-19 Earth Observation Dashboard by NASA, ESA, and JAXA
Rapid Action on COVID-19 and Earth Observation Dashboard by ESA and EC
Observed and Potential Impacts of the COVID-19 Pandemic on the Environment
United Nations: Six Nature Facts Related to Coronaviruses
WHO air quality index/report of air pollution in 2020
Coronavirus pandemic impact
Environment and health
Environment
2020s in the environment
COVID-19 pandemic
Arcology

Arcology, a portmanteau of "architecture" and "ecology", is a field of creating architectural design principles for very densely populated and ecologically low-impact human habitats.
The term was coined in 1969 by architect Paolo Soleri, who believed that a completed arcology would provide space for a variety of residential, commercial, and agricultural facilities while minimizing individual human environmental impact. These structures have been largely hypothetical, as no large-scale arcology has yet been built.
The concept has been popularized by various science fiction writers. Larry Niven and Jerry Pournelle provided a detailed description of an arcology in their 1981 novel Oath of Fealty. William Gibson mainstreamed the term in his seminal 1984 cyberpunk novel Neuromancer, where each corporation has its own self-contained city known as an arcology. More recently, authors such as Peter F. Hamilton in The Neutronium Alchemist and Paolo Bacigalupi in The Water Knife explicitly used arcologies as part of their scenarios. They are often portrayed as self-contained or economically self-sufficient.
Development
An arcology is distinguished from a merely large building in that it is designed to lessen the impact of human habitation on any given ecosystem. It could be self-sustainable, employing all or most of its own available resources for a comfortable life: power, climate control, food production, air and water conservation and purification, sewage treatment, etc. An arcology is designed to make it possible to supply those items for a large population. An arcology would supply and maintain its own municipal or urban infrastructures in order to operate and connect with other urban environments apart from its own.
Arcologies were proposed in order to reduce human impact on natural resources. Arcology designs might apply conventional building and civil engineering techniques in very large, but practical projects in order to achieve pedestrian economies of scale that have proven, post-automobile, to be difficult to achieve in other ways.
Frank Lloyd Wright proposed an early version called Broadacre City although, in contrast to an arcology, his idea is comparatively two-dimensional and depends on a road network. Wright's plan described transportation, agriculture, and commerce systems that would support an economy. Critics said that Wright's solution failed to account for population growth, and assumed a more rigid democracy than the US actually has.
Buckminster Fuller proposed the Old Man River's City project, a domed city with a capacity of 125,000, as a solution to the housing problems in East St. Louis, Illinois.
Paolo Soleri proposed later solutions, and coined the term "arcology". Soleri describes ways of compacting city structures in three dimensions to combat two-dimensional urban sprawl, to economize on transportation and other energy uses. Like Wright, Soleri proposed changes in transportation, agriculture, and commerce. Soleri explored reductions in resource consumption and duplication, land reclamation; he also proposed to eliminate most private transportation. He advocated for greater "frugality" and favored greater use of shared social resources, including public transit (and public libraries).
Similar real-world projects
Arcosanti is an experimental "arcology prototype", a demonstration project under construction in central Arizona since 1970. Designed by Paolo Soleri, its primary purpose is to demonstrate Soleri's personal designs, his application of principles of arcology to create a pedestrian-friendly urban form.
Many cities in the world have proposed projects adhering to the design principles of the arcology concept, like Tokyo, and Dongtan near Shanghai. The Dongtan project may have collapsed, and it failed to open for the Shanghai World Expo in 2010. The Ihme-Zentrum in Hanover was an attempt to build a "city within a city".
McMurdo Station of the United States Antarctic Program and other scientific research stations on Antarctica resemble the popular conception of an arcology as a technologically advanced, relatively self-sufficient human community. The Antarctic research base provides living and entertainment amenities for roughly 3,000 staff who visit each year. Its remoteness and the measures needed to protect its population from the harsh environment give it an insular character. The station is not self-sufficient (the U.S. military delivers 30,000 cubic metres (8,000,000 US gal) of fuel, along with supplies and equipment, yearly through its Operation Deep Freeze resupply effort), but it is isolated from conventional support networks. Under international treaty, it must avoid damage to the surrounding ecosystem.
Begich Towers operates like a small-scale arcology encompassing nearly all of the population of Whittier, Alaska. The building contains residential housing as well as a police station, grocery, and municipal offices.
Whittier once boasted a second structure known as the Buckner Building. The Buckner Building still stands but was deemed unfit for habitation after the 1969 earthquake.
The Line was planned as a linear smart city in Saudi Arabia in Neom, Tabuk Province, designed to have no cars, streets or carbon emissions. The Line is planned to be the first development in Neom, a $500 billion project. The city's plans anticipated a population of 9 million. Excavation work had started along the entire length of the project by October 2022. However, the project was scaled down in 2024 to house 300,000 people.
In popular culture
Most proposals to build real arcologies have failed due to financial, structural or conceptual shortcomings. Arcologies are therefore found primarily in fictional works.
In Robert Silverberg's The World Inside, most of the global population of 75 billion live inside giant skyscrapers, called "urbmons", each of which contains hundreds of thousands of people. The urbmons are arranged in "constellations". Each urbmon is divided into "neighborhoods" of 40 or so floors. All the needs of the inhabitants are provided inside the building – food is grown outside and brought into the building – so the idea of going outside is heretical and can be a sign of madness. The book examines human life when the population density is extremely high.
Another significant example is the 1981 novel Oath of Fealty by Larry Niven and Jerry Pournelle, in which a segment of the population of Los Angeles has moved into an arcology. The plot examines the social changes that result, both inside and outside the arcology. Thus the arcology is not just a plot device but a subject of critique.
In the city-building video game SimCity 2000, self-contained arcologies can be built, reducing the infrastructure needs of the city.
The isometric, cyberpunk-themed action role-playing game The Ascent takes place in a futuristic dystopian version of an arcology on the alien world Veles, and prominently uses the structure and its levels to flesh out progression in the game, starting the player in the bottom levels of the sewers with the ultimate goal of reaching the top of the structure to leave the city.
See also
References
Notes
Further reading
Soleri, Paolo. Arcology: The City in the Image of Man. Cambridge, Massachusetts: MIT Press, 1969.
External links
Arcology: The City in the Image of Man by Paolo Soleri (full text online)
Arcology.com – useful links
The Night Land by William Hope Hodgson (full text online)
Victory City
A discussion of arcology concepts
What is an Arcology?
Usage of "arcology" vs. "hyperstructure"
Arcology.com ("An arcology in southern China" on front page)
Arcology ("An arcology is a self-contained environment...")
SculptorsWiki: Arcology ("The only arcology yet on Earth...")
Review of Shadowrun: Renraku Arcology ("What's an arcology? A self-contained, largely self-sufficient living, working, recreational structure...")
Megastructures
Exploratory engineering
Environmental design
Human habitats
Planned communities
Urban studies and planning terminology
Cyberpunk themes
Architecture related to utopias
Water conservation

Water conservation aims to sustainably manage the natural resource of fresh water, protect the hydrosphere, and meet current and future human demand. Water conservation makes it possible to avoid water scarcity. It covers all the policies, strategies and activities to reach these aims. Population, household size and growth and affluence all affect how much water is used.
Climate change and other factors have increased pressure on natural water resources. This is especially the case in manufacturing and agricultural irrigation. Many countries have successfully implemented water conservation policies. There are several key activities to conserve water. One is beneficial reduction in water loss, use and waste of resources. Another is avoiding any damage to water quality. A third is improving water management practices that reduce the use or enhance the beneficial use of water.
Technology solutions exist for households, commercial and agricultural applications to reduce water use. Water conservation programs involved in social solutions are typically initiated at the local level, by either municipal water utilities or regional governments.
Aims
The aims of water conservation efforts include:
With less than 1% of the world's water being freshwater, one aim is ensuring the availability of water for future generations, where the withdrawal of freshwater from an ecosystem does not exceed its natural replacement rate.
Energy conservation, as water pumping, delivery, and wastewater treatment facilities consume a significant amount of energy. In some regions of the world, over 15% of the total electricity consumption is devoted to water management.
Habitat conservation, where minimizing human water usage helps to preserve freshwater habitats for local wildlife and migrating waterfowl, as well as water quality.
Strategies
The key activities to conserve water are as follows:
Any beneficial reduction in water loss, use and waste of resources.
Avoiding any damage to water quality.
Improving water management practices that reduce the use or enhance the beneficial use of water.
One of the strategies in water conservation is rainwater harvesting. Digging ponds, lakes and canals, expanding the water reservoir, and installing rainwater catching ducts and filtration systems on homes are different methods of harvesting rainwater. In many countries, people collect rainwater in clean containers so it can be boiled and drunk, which is useful for supplying water to those in need. Harvested and filtered rainwater can be used for toilets, home gardening, lawn irrigation, and small-scale agriculture.
Another strategy in water conservation is protecting groundwater resources. When precipitation occurs, some infiltrates the soil and goes underground. Water in this saturation zone is called groundwater. Contamination of groundwater causes the groundwater water supply to not be able to be used as a resource of fresh drinking water and the natural regeneration of contaminated groundwater can take years to replenish. Some examples of potential sources of groundwater contamination include storage tanks, septic systems, uncontrolled hazardous waste, landfills, atmospheric contaminants, chemicals, and road salts. Contamination of groundwater decreases the replenishment of available freshwater so taking preventative measures by protecting groundwater resources from contamination is an important aspect of water conservation.
An additional strategy to water conservation is practicing sustainable methods of utilizing groundwater resources. Groundwater flows due to gravity and eventually discharges into streams. Excess pumping of groundwater leads to a decrease in groundwater levels and if continued it can exhaust the resource. Ground and surface waters are connected and overuse of groundwater can reduce and, in extreme examples, diminish the water supply of lakes, rivers, and streams. In coastal regions, over pumping groundwater can increase saltwater intrusion which results in the contamination of groundwater water supply. Sustainable use of groundwater is essential in water conservation.
A fundamental component of water conservation strategy is communication and education outreach for different water programs. Developing communication that conveys the science to land managers, policy makers, farmers, and the general public is another important strategy used in water conservation. Communicating how water systems work is an important aspect of creating a management plan to conserve a system, and is often used to ensure the right management plan is put into action.
The conservation of water is extremely important in order to preserve wildlife habitats. There are many organisms in temperate regions that are affected by shortages in water. Additionally, many freshwater organisms are increasingly feeling the impacts of water pollution as it disrupts the ecosystem.
"World Water Day" is celebrated on 22 March.
Social solutions
Water conservation programs involved in social solutions are typically initiated at the local level, by either municipal water utilities or regional governments. Common strategies include public outreach campaigns, tiered water rates (charging progressively higher prices as water use increases), or restrictions on outdoor water use such as lawn watering and car washing. Cities in dry climates often require or encourage the installation of xeriscaping or natural landscaping in new homes to reduce outdoor water usage. Most urban outdoor water use in California is residential, illustrating a reason for outreach to households as well as businesses.
One fundamental conservation goal is universal water metering. The prevalence of residential water metering varies significantly worldwide. Recent studies have estimated that water supplies are metered in less than 30% of UK households. Although individual water meters have often been considered impractical in homes with private wells or in multifamily buildings, the US Environmental Protection Agency estimates that metering alone can reduce consumption by 20 to 40 percent. In addition to raising consumer awareness of their water use, metering is also an important way to identify and localize water leakage. Water metering might benefit society by providing a financial incentive to avoid waste in water use.
Some researchers have suggested that water conservation efforts should be primarily directed at farmers, in light of the fact that crop irrigation accounts for 70% of the world's fresh water use. The agricultural sector of most countries is important both economically and politically, and water subsidies are common. Conservation advocates have urged removal of all subsidies to force farmers to grow more water-efficient crops and adopt less wasteful irrigation techniques.
New technology offers a few new options for consumers: features such as full-flush and half-flush toilets are trying to make a difference in water consumption and waste. It is also possible to use (and "pollute") the water in stages, keeping use in flush toilets for last, thereby allowing more use of the water for various tasks within the same cycle before it needs to be purified again, which can also be done in-situ. Earthships often use such a setup.
Also available are modern shower heads that help reduce water waste: old shower heads are said to use 5-10 gallons per minute, while new fixtures use 2.5 gallons per minute and offer equal water coverage.
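The per-shower saving implied by those flow rates is simple to work out; a minimal sketch, where the shower duration is an assumption and the flow rates are those quoted above:

```python
# Water use per shower at the quoted flow rates; the shower length is assumed.
old_flow_gpm = 7.5       # midpoint of the quoted 5-10 gallons per minute
new_flow_gpm = 2.5       # quoted flow of a newer low-flow fixture
shower_minutes = 8       # assumed shower duration

old_use = old_flow_gpm * shower_minutes
new_use = new_flow_gpm * shower_minutes
print(f"old head: {old_use:.0f} gal, new head: {new_use:.0f} gal, "
      f"saved per shower: {old_use - new_use:.0f} gal")
```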
Another method is to recycle the water of the shower directly, by means of a semi-closed system which features a pump and filter. Such a setup (called a "water recycling shower") has also been employed at the VIRTUe LINQ house. Besides recycling water, it also reuses the heat of the water (which would otherwise be lost).
Contrary to the popular view that the most effective way to save water is to curtail water-using behavior (e.g., by taking shorter showers), experts suggest the most efficient way is replacing toilets and retrofitting washers; as demonstrated by two household end use logging studies in the US.
Water-saving technology for the home includes:
Low-flow shower heads sometimes called energy-efficient shower heads as they also use less energy
Low-flush toilets, composting toilets, and incinerating toilets. Composting toilets have a dramatic impact in the developed world, as conventional Western flush toilets use large volumes of water
Dual flush toilets include two buttons or handles to flush different levels of water. Dual flush toilets use up to 67% less water than conventional toilets
Faucet aerators, which break water flow into fine droplets to maintain "wetting effectiveness" while using less water. An additional benefit is that they reduce splashing while washing hands and dishes
Raw water flushing where toilets use sea water or non-purified water (i.e. greywater)
Wastewater reuse or recycling systems, allowing:
Reuse of greywater for flushing toilets or watering gardens
Recycling of wastewater through purification at a water treatment plant. See also Wastewater - Reuse
Rainwater harvesting
High-efficiency clothes washers
Weather-based irrigation controllers
Garden hose nozzles that shut off the water when it is not being used, instead of letting a hose run.
Low flow taps in wash basins
Swimming pool covers that reduce evaporation and can warm pool water to reduce water, energy and chemical costs.
Automatic faucets, which operate without the use of hands and eliminate water waste at the tap by shutting off when not in use.
Smart water meters are also a promising technology for reducing household water usage. A study conducted in Valencia, Spain, shows the potential that smart meter-based water consumption feedback has for conserving water in households. The findings showed that households that were equipped with smart water meters increased their water savings. This technology works to show people how much water they were using in their household, suggest ways they can reduce water usage, and incentivize water savings with physical rewards.
Applications
Many water-saving devices (such as low-flush toilets) that are useful in homes can also be useful for business water saving. Other water-saving technology for businesses includes:
Waterless urinals (also can be installed in schools)
Waterless car washes
Infrared or foot-operated taps, which can save water by using short bursts of water for rinsing in a kitchen or bathroom
Pressurized waterbrooms, which can be used instead of a hose to clean sidewalks
X-ray film processor re-circulation systems
Cooling tower conductivity controllers
Water-saving steam sterilizers, for use in hospitals and health care facilities
Rainwater harvesting
Water-to-water heat exchangers.
It is important to consider implementing water-conserving changes to industrial and commercial applications. It was found that high-income countries use roughly 59% of their water for industrial usage while low-income countries use 8%. One big change that industrial and commercial companies can implement is to improve the assessment and maintenance of water systems. It is easy to add water-efficient applications, but it is their proper maintenance and inspection that will lead to long-term changes. A water conservation plan can be created, including various goals and benchmarks for both the employees and the company. Another change that industrial and commercial companies can make is to check water-consuming systems at regular intervals for any leaks or problems. By doing this, they ensure that water is not unnecessarily being lost and there is no excess money being spent on utility bills. A third change that industrial and commercial companies can implement is installing a rain sensor. This sensor should be able to detect when precipitation is occurring and stop the program which would normally irrigate the land. After the rain ends, the sensor should turn the program back on and resume its normal watering cycle.
Agricultural applications
Water is an essential part of irrigation, and since plants take up large amounts of groundwater, groundwater should be replenished. For crop irrigation, optimal water efficiency means minimizing losses due to evaporation, runoff, or subsurface drainage while maximizing production. An evaporation pan in combination with specific crop correction factors can be used to determine how much water is needed to satisfy plant requirements. Flood irrigation, the oldest and most common type, is often very uneven in distribution, as parts of a field may receive excess water in order to deliver sufficient quantities to other parts. Overhead irrigation, using center-pivot or lateral-moving sprinklers, has the potential for a much more equal and controlled distribution pattern. Drip irrigation is the most expensive and least-used type, but offers the ability to deliver water to plant roots with minimal losses. However, drip irrigation is increasingly affordable, especially for the home gardener and in light of rising water rates. Using drip irrigation methods can save up to 30,000 gallons of water per year when replacing irrigation systems that spray in all directions. There are also cheap, effective methods similar to drip irrigation, such as the use of soaker hoses that can even be submerged in the growing medium to eliminate evaporation.
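The pan-based estimate mentioned above is often written as measured pan evaporation scaled by a pan coefficient and a crop coefficient. A minimal sketch of that calculation, where every evaporation and coefficient value is an illustrative assumption rather than an agronomic recommendation:

```python
# Sketch of an evaporation-pan irrigation estimate; all values below are
# illustrative assumptions, not recommendations for a real crop or site.
pan_evaporation_mm_per_day = 8.0   # measured Class A pan evaporation (assumed)
pan_coefficient = 0.7              # Kp: converts pan evaporation to reference ET (assumed)
crop_coefficient = 1.15            # Kc: crop-specific correction factor (assumed)

reference_et = pan_coefficient * pan_evaporation_mm_per_day   # ETo, mm/day
crop_water_need = crop_coefficient * reference_et             # ETc, mm/day
print(f"estimated crop water requirement: {crop_water_need:.1f} mm/day")
```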
As changing irrigation systems can be a costly undertaking, conservation efforts often concentrate on maximizing the efficiency of the existing system. This may include chiselling compacted soils, creating furrow dikes to prevent runoff, and using soil moisture and rainfall sensors to optimize irrigation schedules. Usually large gains in efficiency are possible through measurement and more effective management of the existing irrigation system. The 2011 UNEP Green Economy Report notes that "[i]mproved soil organic matter from the use of green manures, mulching, and recycling of crop residues and animal manure increases the water holding capacity of soils and their ability to absorb water during torrential rains", which is a way to optimize the use of rainfall and irrigation during dry periods in the season.
As seen in China, plastic mulch also has the potential to conserve water in agricultural practices. The "mulch" is really a thin sheet of plastic that is placed over the soil. There are holes in the plastic for the plants to grow through. Some studies have shown that plastic mulch conserves water by reducing the evaporation of soil moisture, however, there haven't been enough applied studies to determine the total water savings that this practice may bring about.
Water reuse
Water shortage has become an increasingly difficult problem to manage. More than 40% of the world's population live in a region where the demand for water exceeds its supply. The imbalance between supply and demand, along with persisting issues such as climate change and population growth, has made water reuse a necessary method for conserving water. There are a variety of methods used in the treatment of waste water to ensure that it is safe to use for irrigation of food crops and/or drinking water.
Seawater desalination requires more energy than the desalination of fresh water. Despite this, many seawater desalination plants have been built in response to water shortages around the world. This makes it necessary to evaluate the impacts of seawater desalination and to find ways to improve desalination technology. Current research involves the use of experiments to determine the most effective and least energy intensive methods of desalination.
Sand filtration is another method used to treat water. Recent studies show that sand filtration needs further improvements, but it is approaching optimization with its effectiveness at removing pathogens from water. Sand filtration is very effective at removing protozoa and bacteria, but struggles with removing viruses. Large-scale sand filtration facilities also require large surface areas to accommodate them.
The removal of pathogens from recycled water is of high priority because wastewater always contains pathogens capable of infecting humans. The levels of pathogenic viruses have to be reduced to a certain level in order for recycled water to not pose a threat to human populations. Further research is necessary to determine more accurate methods of assessing the level of pathogenic viruses in treated wastewater.
Problem areas
Wasting of water
Wasting of water is the flip side of water conservation and, in household applications, it means causing or permitting discharge of water without any practical purpose. Inefficient water use is also considered wasteful. By EPA estimate, household leaks in the US can waste approximately 900 billion gallons (3.4 billion cubic meters) of water annually nationwide. Generally, water management agencies are reluctant or unwilling to give a concrete definition to a relatively vague concept of water waste.
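The parenthetical conversion in the EPA figure above can be cross-checked with the standard gallon-to-cubic-metre factor:

```python
# Cross-check of the EPA household-leak estimate quoted above.
leaked_gallons_per_year = 900e9        # ~900 billion US gallons
m3_per_us_gallon = 0.003785411784      # cubic metres per US gallon

leaked_m3_per_year = leaked_gallons_per_year * m3_per_us_gallon
print(f"~{leaked_m3_per_year / 1e9:.1f} billion cubic metres per year")  # ~3.4
```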
However, a definition of water waste is often given in local drought emergency ordinances. One example refers to any acts or omissions, whether willful or negligent, that are "causing or permitting water to leak, discharge, flow or run to waste into any gutter, sanitary sewer, watercourse or public or private storm drain, or to any adjacent property, from any tap, hose, faucet, pipe, sprinkler, pond, pool, waterway, fountain or nozzle." In this example, the city code also clarifies that, in the case of washing, "discharge," "flow" or "run to waste" means that water in excess of that necessary to wash, wet or clean the dirty or dusty object, such as an automobile, sidewalk, or parking area, flows to waste.
Water utilities (and other media sources) often provide listings of wasteful water-use practices and prohibitions of wasteful uses. Examples include utilities in San Antonio, Texas; Las Vegas, Nevada; the California Water Service company in California; and the City of San Diego, California. The City of Palo Alto in California enforces permanent water use restrictions on wasteful practices such as leaks, runoff, irrigating during and immediately after rainfall, and use of potable water when non-potable water is available. Similar restrictions are in effect in the State of Victoria, Australia. Temporary water use bans (also known as "hosepipe bans") are used in England, Scotland, Wales and Northern Ireland.
Strictly speaking, water that is discharged into the sewer, or directly to the environment, is not wasted or lost. It remains within the hydrologic cycle and returns to the land surface and surface water bodies as precipitation. However, in many cases, the source of the water is at a significant distance from the return point and may be in a different catchment. The separation between extraction point and return point can represent significant environmental degradation in the watercourse and riparian strip. What is "wasted" is the community's supply of water that was captured, stored, transported and treated to drinking quality standards. Efficient use of water saves the expense of water supply provision and leaves more fresh water in lakes, rivers and aquifers for other users and also for supporting ecosystems. For example, a toilet should not be treated as a trash can: flushing cigarette butts or tissues wastes gallons of water and makes the wastewater harder to recycle.
A concept that is closely related to water wasting is "water-use efficiency". Water use is considered inefficient if the same purpose of its use can be accomplished with less water. Technical efficiency derives from engineering practice where it is typically used to describe the ratio of output to input and is useful in comparing various products and processes. For example, one showerhead would be considered more efficient than another if it could accomplish the same purpose (i.e., of showering) by using less water or other inputs (e.g., lower water pressure). The technical efficiency concept is not useful in making decisions of investing money (or resources) in water conservation measures unless the inputs and outputs are measured in value terms. This expression of efficiency is referred to as economic efficiency and is incorporated into the concept of water conservation.
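As a small illustration of the technical-efficiency ratio described above, the sketch below compares two hypothetical showerheads; every number is an assumption chosen only to show the calculation:

```python
# Technical efficiency as output (showers delivered) per unit of input (litres).
# Both water-use figures are hypothetical.
water_per_shower_litres = {"showerhead_a": 60.0, "showerhead_b": 95.0}

for name, litres in water_per_shower_litres.items():
    efficiency = 1.0 / litres     # showers per litre of water input
    print(f"{name}: {efficiency:.4f} showers per litre")
# The head with the higher ratio is technically more efficient; whether replacing
# the less efficient one is worthwhile is a question of economic efficiency.
```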
See also
References
Further reading
External links
Smart WaterMark — Australian Water Conservation Label
Water Conservation Communications Guide — American Water Works Association
Water Conservation — Natural Resources Conservation Service, US Department of Agriculture
Alliance for Water Efficiency (AWE)
Environmental issues with water
Scoutcraft
Sustainable design
Waste minimisation
Science Based Targets initiative

The Science Based Targets initiative (SBTi) is a collaboration between the CDP, the United Nations Global Compact, World Resources Institute (WRI) and the World Wide Fund for Nature (WWF), with a global team composed of people from these organisations. Since 2015, more than 1,000 companies have joined the initiative to set a science-based climate target.
Organization
The Science Based Targets initiative was established in 2015 to help companies to set emission reduction targets in line with climate sciences and Paris Agreement goals. It is funded by IKEA Foundation, Amazon, Bezos Earth Fund, We Mean Business coalition, Rockefeller Brothers Fund and UPS Foundation. In October 2021, SBTi developed and launched the world's first net zero standard, providing the framework and tools for companies to set science-based net zero targets and limit global temperature rise above pre-industrial levels to 1.5 °C. Best practice as identified by SBTi is for companies to adopt transition plans covering scope 1, 2 and 3 emissions, set out short-term milestones, ensure effective board-level governance and link executive compensation to the company's adopted milestones.
Sector-specific guidance
SBTi developed separate sector-specific methodologies, frameworks and requirements for different industries. As of September 2024, sector guidance is available for:
Aluminium (Scoping phase)
Apparel and footwear (Finalized)
Aviation (In development)
Buildings (Finalized)
Chemicals (In development)
Cement (Finalized)
Financial institutions (Finalized)
Forest, Land and Agriculture (Finalized)
Information and Communication Technology (Finalized)
Land transport (In development)
Maritime (Finalized)
Oil and Gas (In development)
Power (Finalized)
Steel (Finalized)
Carbon offsets controversy
In April 2024 the SBTi Board of Trustees released a statement setting out an intention to permit the use of environmental attribute certificates (EACs) for abatement purposes against Scope 3 emissions reduction targets. SBTi did not previously permit the use of EACs due to the difficulties faced in tracing, measuring and validating their impact. The Bezos Earth Fund, a major funder of the SBTi, exerted influence on SBTi board members to relax the organization's position on carbon offsets. The statement led to a response letter signed by various teams within the SBTi and to media speculation about the policy change; the counter-argument set out in the response was that carbon offsets are incompatible with the Paris Agreement.
Launched in September 2022, the SBTi's Forestry, Land and Agriculture (FLAG) guidance allows companies to claim the achievement of their emission reduction targets through ‘insetting’, breaking from the long-held SBTi position that emission reduction targets should only be achieved through emission reductions. Insetting is a business-driven concept and not a term defined in international standards and guidelines such as ISO 14050 Environmental Vocabulary and IWA 42 Net zero guidelines.
On 2 July 2024, CEO Luiz Amaral announced that he would step down for personal reasons.
See also
Carbon accounting
Carbon Disclosure Project
Carbon footprint
Carbon neutrality
Carbon offsets and credits
Corporate sustainability
Climate change
Greenhouse gas emissions
Greenhouse gas inventory
Net zero emissions
Paris Agreement
United Nations Global Compact
World Resources Institute
References
Corporate social responsibility
Environmental science
Greenhouse gases
Greenhouse gas emissions
Organizations established in 2015
Regenerative agriculture

Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity, improving the water cycle, enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil.
Regenerative agriculture is not a specific practice. It combines a variety of sustainable agriculture techniques. Practices include maximal recycling of farm waste and adding composted material from non-farm sources. Regenerative agriculture on small farms and gardens is based on permaculture, agroecology, agroforestry, restoration ecology, keyline design, and holistic management. Large farms are also increasingly adopting regenerative techniques, using "no-till" and/or "reduced till" practices.
As soil health improves, input requirements may decrease, and crop yields may increase as soils are more resilient to extreme weather and harbor fewer pests and pathogens.
Regenerative agriculture mitigates climate change through carbon dioxide removal from the atmosphere and sequestration. Along with reduction of carbon emissions, carbon sequestration is gaining popularity in agriculture, and individuals as well as groups are taking action to fight climate change.
History
Origins
Regenerative agriculture is based on various agricultural and ecological practices, with a particular emphasis on minimal soil disturbance and the practice of composting. Maynard Murray had similar ideas, using sea minerals. His work led to innovations in no-till practices, such as slash and mulch in tropical regions. Sheet mulching is a regenerative agriculture practice that smothers weeds and adds nutrients to the soil below.
In the early 1980s, the Rodale Institute began using the term ‘regenerative agriculture’. Rodale Publishing formed the Regenerative Agriculture Association, which began publishing regenerative agriculture books in 1987 and 1988.
However, the institute stopped using the term in the late 1980s, and it only appeared sporadically (in 2005 and 2008), until they released a white paper in 2014, titled "Regenerative Organic Agriculture and Climate Change". The paper's summary states, "we could sequester more than 100% of current annual CO2 emissions with a switch to common and inexpensive organic management practices, which we term 'regenerative organic agriculture.'" The paper described agricultural practices, like crop rotation, compost application, and reduced tillage, that are similar to organic agriculture methods.
In 2002, Storm Cunningham documented the beginning of what he called "restorative agriculture" in his first book, The Restoration Economy. Cunningham defined restorative agriculture as a technique that rebuilds the quantity and quality of topsoil, while also restoring local biodiversity (especially native pollinators) and watershed function. Restorative agriculture was one of the eight sectors of restorative development industries/disciplines in The Restoration Economy.
Recent developments (since 2010)
Indigenous cultures have long practiced many of the techniques now described as regenerative agriculture. These practices have existed for centuries, but the term itself has only been around for some decades and has increasingly appeared in academic research since the early to mid 2010s in the fields of environmental science, plant science, and ecology. As the term expands in use, many books have been published on the topic and several organizations have started to promote regenerative agriculture techniques. Allan Savory gave a TED talk on fighting and reversing climate change in 2013. He also launched The Savory Institute, which educates ranchers on methods of holistic land management. Abe Collins created LandStream to monitor ecosystem performance in regenerative agriculture farms. Eric Toensmeier had a book published on the subject in 2016. However, researchers at Wageningen University in the Netherlands found there to be no consistent definition of what people referencing "regenerative agriculture" meant. They also found that most of the work around this topic was instead the authors' attempt at shaping what regenerative agriculture meant.
In 2011, the (not for profit) Mulloon Institute was founded in New South Wales, Australia, to develop and promote regenerative practices to reclaim land as water-retentive areas by slowing the loss of water from land. The members of the Institute created a 22-weir in-stream project with neighbours over 2 kilometres of Mulloon Creek. A study indicates that the outcomes were positive but relatively unpredictable, and that suitability of ground conditions on site was key to success. Bottom-up change in Australian regenerative agriculture involves a complex set of narratives and barriers to change affecting farmers. A West Australian government-funded survey of land hydration was conducted by the Mulloon Institute in June 2022, which concluded that water retention projects supported the regeneration of native plant species.
Founded in 2013, 501(c)3 non-profit Kiss the Ground was one of the first to publicize the term to a broader audience. Today the group runs a series of media, farmland, education, and policy programs to raise awareness around soil health and support farmers who aim to transition from conventional to regenerative land management practices. The film Kiss the Ground, executive produced by Julian Lennon and Gisele Bündchen and narrated by Woody Harrelson, was released in 2020. A follow-up documentary, Common Ground, premiered in 2023 and was the recipient of the 2023 Human/Nature Award at the Tribeca Film Festival.
Not all regenerative systems emphasize ruminants. In 2017, Reginaldo Haslett Marroquin published "In the Shadow of Green Man" with Per Andreeason, which detailed Haslett Marroquin's early life as a campesino in Guatemala and how these experiences led him to develop regenerative poultry agroforestry systems that are now being practiced and expanding in the United States and elsewhere.
Several large corporations have also announced regenerative agriculture initiatives in the last few years. In 2019, General Mills announced an effort to promote regenerative agriculture practices in their supply chain. The farming practices have received criticism from academic and government experts on sustainability in farming. In particular, Gunsmoke Farm partnered with General Mills to transition to regenerative agriculture practices and become a teaching hub for others. Experts from the area have expressed concerns about the farm now doing more harm than good, with agronomist Ruth Beck stating that "Environmental marketing got ahead of what farmers can actually do".
In February 2021, the regenerative agriculture market gained traction after Joe Biden's Secretary of Agriculture Tom Vilsack made reference to it during his Senate Confirmation hearing. The Biden administration wants to utilize $30 billion from the USDA's Commodity Credit Corporations to incentivise farmers to adopt sustainable practices. Vilsack stated in the hearing, "It is a great tool for us to create the kind of structure that will inform future farm bills about what will encourage carbon sequestration, what will encourage precision agriculture, what will encourage soil health and regenerative agricultural practices." After this announcement from the Biden administration, several national and international corporations announced initiatives into regenerative agriculture. During the House of Representatives Committee on Agriculture's first hearing on climate change, Gabe Brown, a proponent of regenerative agriculture, testified about the role of regenerative agriculture in both the economics and sustainability of farming.
In 2021, PepsiCo announced that by 2030 they will work with the farmers in their supply chain to establish regenerative agriculture practices across their approximately 7 million acres. In 2021, Unilever announced an extensive implementation plan to incorporate regenerative agriculture throughout their supply chain. VF Corporation, the parent company of The North Face, Timberland, and Vans, announced in 2021 a partnership with Terra Genesis International to create a supply chain for their rubber that comes from sources utilizing regenerative agriculture. Nestle announced in 2021 a $1.8 billion investment in regenerative agriculture in an effort to reduce their emissions by 95%.
Several days before the opening of the 2022 United Nations Climate Change Conference, a report was published, sponsored by some of the biggest agricultural companies. The report was produced by Sustainable Markets Initiative, an organisation of companies trying to become climate friendly, established by King Charles III. According to the report, regenerative agriculture is already implemented on 15% of all cropland. Despite this, the rate of transition is "far too slow" and must be tripled by the year 2030 to prevent the global temperature passing the threshold of 1.5 degrees above preindustrial levels. Agricultural practices must immediately change in order to avoid the damage that would result. One of the authors emphasised that “The interconnection between human health and planetary health is more evident than ever before.” The authors proposed a set of measures for accelerating the transition, like creating metrics for measuring how much farming is sustainable, and paying farmers who will change their farming practices to more sustainable ones.
Principles
There are several individuals, groups, and organizations that have attempted to define what the principles of regenerative agriculture are. In their review of the existing literature on regenerative agriculture, researchers at Wageningen University created a database of 279 published research articles on regenerative agriculture. Their analysis of this database found that people using the term regenerative agriculture were using different principles to guide regenerative agriculture efforts. The 4 most consistent principles were found to be, 1) enhancing and improving soil health, 2) optimization of resource management, 3) alleviation of climate change, and 4) improvement of water quality and availability.
Notable definitions of principles
The organization The Carbon Underground created a set of principles that have been signed on to by a number of non-profits and corporations including Ben & Jerry's, Annie's, and the Rodale Institute, which was one of the first organization to use the term "Regenerative Agriculture". The principles they've outlined include building soil health and fertility, increase water percolation and retention, increasing biodiversity and ecosystem health, and reducing carbon emissions and current atmospheric CO2 levels.
The group Terra Genesis International, and VF Corporation's partner in their regenerative agriculture initiative, created a set of 4 principles, which include:
"Progressively improve whole agroecosystems (soil, water and biodiversity)"
"Create context-specific designs and make holistic decisions that express the essence of each farm"
"Ensure and develop just and reciprocal relationships amongst all stakeholders"
"Continually grow and evolve individuals, farms, and communities to express their innate potential"
Instead of focusing on the specifics of food production technologies, human ecologist Philip Loring suggests a food system-level focus on regeneration, arguing that it is the combination of flexibility and diversity in our food systems that supports regenerative ecological practices. Loring argues that, depending on the relative flexibility of people in the food system with respect to the foods they eat and the overall diversity of foods being produced and harvested, food systems can fall into one of four general patterns:
Regenerative (high diversity, high flexibility), where ecosystems are able to recycle and replenish used energy to usable forms, such as found in many Indigenous food systems
Degenerative (High diversity, low flexibility), where people fixate on specific resources and only switch to alternatives once the preferred commodity is exhausted, such as fishing down the food web.
Coerced (low diversity, low flexibility), where people subsidize prized resources at the expense of the surrounding ecosystem, such as in the Maine Lobster fishery
Impoverished (low diversity, high flexibility), where people are willing to be flexible but, because they are living in degraded ecosystems and possibly a poverty trap, cannot allow ecosystems and resources to regenerate.
Loring's typology is based on a principle he calls the Conservation of Change, which states that change must always happen somewhere in ecosystems, and derives from the Second Law of Thermodynamics and Barry Commoner's premise that, in ecosystems, "there is no free lunch".
Practices
Practices and principles used in regenerative farming include:
Alternative food networks (AFNs), commonly defined by attributes such as the spatial proximity between farmers and consumers.
Aquaculture
Ecological aquaculture
Regenerative ocean farming
Agroecology
Agroforestry
Biochar/terra preta
Borders planted for pollinator habitat and other beneficial insects
Compost, compost tea, animal manures and thermal compost
Conservation farming, no-till farming, minimum tillage, and pasture cropping
Cover crops & multi-species cover crops
Home gardens, to mitigate the adverse effect of global food shocks and food price volatilities, also as a strategy to enhance household food security and nutrition
Regrowing vegetables, for recycling and sustainable living
Keyline subsoiling
Livestock: well-managed grazing, animal integration and holistically managed grazing
Grass-fed cattle
Natural farming
Natural sequence farming
Organic annual cropping and crop rotations
Perennial crops
Ponding banks, to prevent soil erosion also known as grading banks and, in parts of Australia, commonly known as Purvis banks, after Ron Purvis Jr of Woodgreen Station in the Northern Territory
Permaculture design
Polyculture and full-time succession planting of multiple and inter-crop plantings
Silvopasture
Soil food web
Environmental impacts
Carbon sequestration
Conventional agricultural practices such as plowing and tilling release carbon dioxide (CO2) from the soil by exposing organic matter to the surface and thus promoting oxidation. It is estimated that roughly a third of the total anthropogenic inputs of CO2 to the atmosphere since the industrial revolution have come from the degradation of soil organic matter and that 30–75% of global soil organic matter has been lost since the advent of tillage-based farming. Greenhouse gas (GHG) emissions associated with conventional soil and cropping activities represent 13.7% of anthropogenic emissions, or 1.86 Pg-C y−1. The raising of ruminant livestock also contributes GHGs, representing 11.6% of anthropogenic emissions, or 1.58 Pg-C y−1. Furthermore, runoff and siltation of water bodies associated with conventional farming practices promote eutrophication and emissions of methane.
Regenerative agriculture practices such as no-till farming, rotational grazing, mixed crop rotation, cover cropping, and the application of compost and manure have the potential to reverse this trend. No-till farming reintroduces carbon back into the soil as crop residues are pressed down when seeding. Some studies suggest that adoption of no-till practices could triple soil carbon content in less than 15 years. Additionally, 1 Pg-C y−1, representing roughly a fourth to a third of anthropogenic CO2 emissions, may be sequestered by converting croplands to no-till systems on a global scale.
There is mixed evidence on the carbon sequestration potential of regenerative grazing. A meta-analysis of relevant studies between 1972 and 2016 found that Holistic Planned Grazing had no better effect than continuous grazing on plant cover and biomass, although it may have benefited some areas with higher precipitation. However, some studies have found positive impacts compared to conventional grazing. One study found that regenerative grazing management, particularly adaptive multipaddock (AMP) grazing, reduces soil degradation compared to continuous grazing and thus has the potential to mitigate carbon emissions from soil. Another study found that crop rotation and maintenance of permanent cover crops help to reduce soil erosion as well, and in conjunction with AMP grazing, may result in net carbon sequestration.
There is a less developed evidence base comparing regenerative grazing with the absence of livestock on grasslands. Several peer-reviewed studies have found that excluding livestock completely from semi-arid grasslands can lead to significant recovery of vegetation and soil carbon sequestration. A 2021 peer-reviewed paper found that sparsely grazed and natural grasslands account for 80% of the total cumulative carbon sink of the world’s grasslands, whereas managed grasslands (i.e. with greater livestock density) have been a net greenhouse gas source over the past decade. A 2011 study found that multi-paddock grazing of the type endorsed by Savory resulted in more soil carbon sequestration than heavy continuous grazing, but very slightly less soil carbon sequestration than "graze exclosure" (excluding grazing livestock from land). Another peer-reviewed paper found that if current pastureland was restored to its former state as wild grasslands, shrublands, and sparse savannas without livestock this could store an estimated 15.2 - 59.9 Gt additional carbon.
The total carbon sequestration potential of regenerative grazing has been debated between advocates and critics. One study suggests that total conversion of livestock raising to AMP grazing practices coupled with conservation cropping has the potential to convert North American farmlands to a carbon sink, sequestering approximately 1.2 Pg-C y−1. Over the next 25–50 years, the cumulative sequestration potential is 30-60 Pg-C. Additions of organic manures and compost further build soil organic carbon, thus contributing to carbon sequestration potential. However, a study by the Food and Climate Research Network in 2017 estimates that, on the basis of meta-study of the scientific literature, the total global soil carbon sequestration potential from grazing management ranges from 0.3-0.8 Gt CO2eq per year, which is equivalent to offsetting a maximum of 4-11% of current total global livestock emissions, and that “Expansion or intensification in the grazing sector as an approach to sequestering more carbon would lead to substantial increases in methane, nitrous oxide and land use change-induced CO2 emissions”, leading to an overall increase in emissions. Consistent with this, Project Drawdown (referenced in the film Kiss the Ground) estimates the total carbon sequestration potential of improved managed grazing at 13.72 - 20.92 Gigatons CO2eq between 2020–2050, equal to 0.46-0.70 Gt CO2eq per year. A 2022 peer-reviewed paper estimated the carbon sequestration potential of improved grazing management at a similar level of 0.15-0.70 Gt CO2eq per year.
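As a quick arithmetic check of the annualized figures quoted above, a minimal sketch (the helper function below is illustrative, not taken from any cited study) that spreads a cumulative estimate evenly over a period reproduces the stated per-year range:

```python
def annual_rate(cumulative_gt_co2eq: float, start_year: int, end_year: int) -> float:
    """Average a cumulative sequestration estimate (Gt CO2eq) over a period
    to obtain a per-year rate (Gt CO2eq per year)."""
    return cumulative_gt_co2eq / (end_year - start_year)

# Project Drawdown's 13.72-20.92 Gt CO2eq over 2020-2050 averages to
# roughly 0.46-0.70 Gt CO2eq per year, matching the figures cited above.
print(round(annual_rate(13.72, 2020, 2050), 2))  # 0.46
print(round(annual_rate(20.92, 2020, 2050), 2))  # 0.7
```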
Research by the Rodale Institute suggests that a worldwide transition to regenerative agriculture could sequester more than 100% of current annual CO2 emissions from human activity.
Nutrient cycling
Soil organic matter is the primary sink of nutrients necessary for plant growth such as nitrogen, phosphorus, zinc, sulfur, and molybdenum. Conventional tillage-based farming promotes rapid erosion and degradation of soil organic matter, depleting soil of plant nutrients and thus lowering productivity. Tillage, in conjunction with additions of inorganic fertilizer, also destroys soil microbial communities, reducing production of organic nutrients in soil. In contrast, use of organic fertilizer will significantly increase the organic matter in the soil. Practices that restore organic matter may be used to increase the total nutrient load of soil. For example, regenerative management of ruminant livestock in mixed-crop and grazing agroecosystems has been shown to improve soil nutrient cycling by encouraging the consumption and decomposition of residual crop biomass and promoting the recovery of nitrogen-fixing plant species. Regenerative crop management practices, namely the use of crop rotation to ensure permanent ground cover, have the potential to increase soil fertility and nutrient levels if nitrogen-fixing crops are included in the rotation. Crop rotation and rotational grazing also allow the nutrients in soil to recover between growing and grazing periods, thus further enhancing overall nutrient load and cycling.
Biodiversity
Conventional agricultural practices are generally understood to simplify agroecosystems through introduction of monocultures and eradication of diversity in soil microbial communities through chemical fertilization. In natural ecosystems, biodiversity serves to regulate ecosystem function internally, but under conventional agricultural systems, such control is lost and requires increasing levels of external, anthropogenic input. By contrast, regenerative agriculture practices including polycultures, mixed crop rotation, cover cropping, organic soil management, and low- or no-tillage methods have been shown to increase overall species diversity while reducing pest population densities. Additionally, practices that favor organic over inorganic inputs aid in restoring below-ground biodiversity by enhancing the functioning of soil microbial communities. A survey of organic and conventional farms in Europe found that on the whole, species across several taxa were higher in richness and/or abundance on organic farms compared to conventional ones, especially species whose populations have been demonstrably harmed as a direct result of conventional agriculture.
AMP grazing can help improve biodiversity, since increased soil organic carbon stocks also promote a diversity of soil microbial communities. Implementation of AMP in North American prairies, for example, has been correlated with an increase in forage productivity and the restoration of plant species that had previously been decimated by continuous grazing practices. Furthermore, studies of arid and semiarid regions of the world where regenerative grazing has been practiced for a long time following prior periods of continuous grazing have shown a recovery of biodiversity, grass species, and pollinator species. In addition, crop diversification ensures that the agroecosystem remains productive when facing lower levels of soil fertility. Higher levels of plant diversity led to increases in numerous factors that contribute to soil fertility, such as soil N, K, Ca, Mg, and C levels, as well as cation exchange capacity (CEC) and soil pH.
Criticism
Some members of the scientific community have criticized some of the claims made by proponents of regenerative agriculture as exaggerated and unsupported by evidence.
One of the prominent proponents of regenerative agriculture, Allan Savory, claimed in his TED talk that holistic grazing could reduce carbon-dioxide levels to pre-industrial levels in a span of 40 years. According to Skeptical Science: "it is not possible to increase productivity, increase numbers of cattle and store carbon using any grazing strategy, never-mind Holistic Management [...] Long term studies on the effect of grazing on soil carbon storage have been done before, and the results are not promising.[...] Because of the complex nature of carbon storage in soils, increasing global temperature, risk of desertification and methane emissions from livestock, it is unlikely that Holistic Management, or any management technique, can reverse climate change."
Commenting on his TED talk "How to Fight Desertification and Reverse Climate Change", Savory has since denied claiming that holistic grazing can reverse climate change, saying that “I have only used the words address climate change… although I have written and talked about reversing man-made desertification”. Savory has faced criticisms for claiming the carbon sequestration potential of holistic grazing is immune from empirical scientific study. For instance, in 2000, Savory said that "the scientific method never discovers anything" and “the scientific method protects us from cranks like me". A 2017 factsheet authored by Savory stated that “Every study of holistic planned grazing that has been done has provided results that are rejected by range scientists because there was no replication!". TABLE Debates sums this up by saying "Savory argues that standardisation, replication, and therefore experimental testing of HPG [Holistic Planned Grazing] as a whole (rather than just the grazing system associated with it) is not possible, and that therefore, it is incapable of study by experimental science", but "he does not explain how HPG can make causal knowledge claims with regards to combating desertification and climate mitigation, without recourse to science demonstrating such connections."
According to a 2016 study published by the Swedish University of Agricultural Sciences, the actual rate at which improved grazing management could contribute to carbon sequestration is seven times lower than the claims made by Savory. The study concludes that holistic management cannot reverse climate change. A study by the Food and Climate Research Network in 2017 concluded that Savory's claims about carbon sequestration are "unrealistic" and very different from those issued by peer-reviewed studies.
Tim Searchinger and Janet Ranganathan have expressed concerns about emphasis upon "Practices That Increase Soil Carbon at the Field Level" because "overestimating potential soil carbon gains could undermine efforts to advance effective climate mitigation in the agriculture sector." Instead Tim Searchinger and Janet Ranganathan say, "preserving the huge, existing reservoirs of vegetative and soil carbon in the world’s remaining forests and woody savannas by boosting productivity on existing agricultural land (a land sparing strategy) is the largest, potential climate mitigation prize of regenerative and other agricultural practices. Realizing these benefits requires implementing practices in ways that boost productivity and then linking those gains to governance and finance to protect natural ecosystems. In short, produce, protect and prosper are the most important opportunities for agriculture."
See also
Agroecological restoration
Agroecology
Agroforestry
Biointensive agriculture
Carbon farming
Farmer-managed natural regeneration
Korean natural farming
Permaculture
Regenerative design
External links
"Regenerative Agriculture". Regeneration.org. 2021.
References
Agroecology
Organic farming
Permaculture
Permaculture concepts
Climate change and agriculture
Resource depletion
Resource depletion is the consumption of a resource faster than it can be replenished. Natural resources are commonly divided between renewable resources and non-renewable resources. The use of either of these forms of resources beyond their rate of replacement is considered to be resource depletion. The value of a resource is a direct result of its availability in nature and the cost of extracting it; the more a resource is depleted, the more its value increases. There are several types of resource depletion, including but not limited to: mining for fossil fuels and minerals, deforestation, pollution or contamination of resources, wetland and ecosystem degradation, soil erosion, overconsumption, aquifer depletion, and the excessive or unnecessary use of resources. Resource depletion is most commonly used in reference to farming, fishing, mining, water usage, and the consumption of fossil fuels. Depletion of wildlife populations is called defaunation.
Discussion of resource depletion also encompasses its history, particularly its roots in colonialism and the Industrial Revolution; depletion accounting; the socioeconomic impacts of depletion; the morality of resource consumption; how humanity will be affected and what the future will look like if depletion continues at the current rate; Earth Overshoot Day; and estimates of when specific resources will be completely exhausted.
History of resource depletion
The depletion of resources has been an issue since the beginning of the 19th century amidst the First Industrial Revolution. The extraction of both renewable and non-renewable resources increased drastically, much further than thought possible pre-industrialization, due to the technological advancements and economic development that led to an increased demand for natural resources.
Although resource depletion has roots in both colonialism and the Industrial Revolution, it has only been of major concern since the 1970s. Before this, many people believed in the "myth of inexhaustibility", which also has roots in colonialism. This can be explained as the belief that both renewable and non-renewable natural resources cannot be exhausted because there is seemingly an overabundance of these resources. This belief has caused people to not question resource depletion and ecosystem collapse when it occurred, and continues to prompt society to simply find these resources in areas which have not yet been depleted.
Depletion accounting
In an effort to offset the depletion of resources, theorists have come up with the concept of depletion accounting. Related to green accounting, depletion accounting aims to account for nature's value on an equal footing with the market economy. Resource depletion accounting uses data provided by countries to estimate the adjustments needed due to their use and depletion of the natural capital available to them. Natural capital refers to natural resources such as mineral deposits or timber stocks. Depletion accounting factors in several different influences such as the number of years until resource exhaustion, the cost of resource extraction, and the demand for the resource. Resource extraction industries make up a large part of the economic activity in developing countries. This, in turn, leads to higher levels of resource depletion and environmental degradation in developing countries. Theorists argue that the implementation of resource depletion accounting is necessary in developing countries. Depletion accounting also seeks to measure the social value of natural resources and ecosystems. Measurement of social value is sought through ecosystem services, which are defined as the benefits of nature to households, communities and economies.
Importance
There are many different groups interested in depletion accounting. Environmentalists are interested in depletion accounting as a way to track the use of natural resources over time, hold governments accountable, or compare their environmental conditions to those of another country. Economists want to measure resource depletion to understand how financially reliant countries or corporations are on non-renewable resources, whether this use can be sustained and the financial drawbacks of switching to renewable resources in light of the depleting resources.
Issues
Depletion accounting is complex to implement, as nature is not as quantifiable as cars, houses, or bread. For depletion accounting to work, appropriate units of natural resources must be established so that natural resources can be viable in the market economy. The main issues that arise when trying to do so are determining a suitable unit of account, deciding how to deal with the "collective" nature of a complete ecosystem, delineating the borderline of the ecosystem, and defining the extent of possible duplication when the resource interacts in more than one ecosystem. Some economists want to include measurement of the benefits arising from public goods provided by nature, but currently there are no market indicators of value. Globally, environmental economics has not been able to provide a consensus of measurement units of nature's services.
Minerals depletion
Minerals are needed to provide food, clothing, and housing. A United States Geological Survey (USGS) study found a significant long-term trend over the 20th century for non-renewable resources such as minerals to supply a greater proportion of the raw material inputs to the non-fuel, non-food sector of the economy; an example is the greater consumption of crushed stone, sand, and gravel used in construction.
Large-scale exploitation of minerals began in the Industrial Revolution around 1760 in England and has grown rapidly ever since. Technological improvements have allowed humans to dig deeper and access lower grades and different types of ore over that time. Virtually all basic industrial metals (copper, iron, bauxite, etc.), as well as rare earth minerals, face production output limitations from time to time, because supply involves large up-front investments and is therefore slow to respond to rapid increases in demand.
Minerals projected by some to enter production decline during the next 20 years:
Oil conventional (2005)
Oil, all liquids (2017). Old expectation: Gasoline (2023)
Copper (2017). Old expectation: Copper (2024). Data from the United States Geological Survey (USGS) suggest that it is very unlikely that copper production will peak before 2040.
Coal per kWh (2017). Old expectation per ton: (2060)
Zinc. Developments in hydrometallurgy have transformed non-sulfide zinc deposits (largely ignored until now) into large low cost reserves.
Minerals projected by some to enter production decline during the present century:
Aluminium (2057)
Iron (2068)
Phosphorus (2048). The last 80% of world reserves are located in only one mine.
Such projections may change as new discoveries are made, and they typically misinterpret available data on Mineral Resources and Mineral Reserves.
Petroleum
Deforestation
Controlling deforestation
Overfishing
Overfishing refers to the overconsumption and/or depletion of fish populations, which occurs when fish are caught at a rate that exceeds their ability to breed and replenish their population naturally. Regions particularly susceptible to overfishing include the Arctic, coastal east Africa, the Coral Triangle (located between the Pacific and Indian oceans), Central and Latin America, and the Caribbean. The depletion of fish stocks can lead to long-term negative consequences for marine ecosystems, economies, and food security. The depletion of resources hinders economic growth because growing economies lead to increased demand for natural, renewable resources like fish. Thus, when resources are depleted, it initiates a cycle of reduced resource availability, increased demand and higher prices due to scarcity, and lower economic growth. Overfishing can lead to habitat and biodiversity loss, specifically through habitat degradation, which has an immense impact on marine and aquatic ecosystems. Habitat loss occurs when a natural habitat can no longer sustain or support the species that live in it, and biodiversity loss refers to a decrease in the population of a species in a specific area and/or the extinction of a species. Habitat degradation is caused by the depletion of resources, in which human activities are the primary driving force. One major impact of the depletion of fish stocks is a dynamic change in and erosion of marine food webs, which can ultimately lead to ecosystem collapse because of the imbalance created for other marine species. Overfishing also causes instability in marine ecosystems because these ecosystems are less biodiverse and more fragile. This occurs mainly because, due to overfishing, many fish species are unable to naturally sustain their populations in these damaged ecosystems.
Most common causes of overfishing:
Increasing consumption: According to the United Nations Food and Agriculture Organization (FAO), aquatic foods like fish significantly contribute to food security and initiatives to end worldwide hunger. However, global consumption of aquatic foods has increased at twice the rate of population growth since the 1960s, significantly contributing to the depletion of fish stocks.
Climate change: Due to climate change and the rapidly increasing temperatures of the oceans, fish stocks and other marine life are being negatively impacted. These changes force fish stocks to alter their migratory routes, and without a reduction in fishing this leads to overfishing and depletion, because the same amount of fish is being caught in areas that now have lower fish populations.
Illegal, unreported, and unregulated (IUU) fishing: Illegal fishing involves conducting fishing operations that break the laws and regulations around fishing at the regional and international levels, including fishing without a license or permit, fishing in protected areas, and/or catching protected species of fish. Unreported fishing involves conducting fishing operations which are not reported, or are misreported, to authorities according to the International and Regional Fisheries Management Organizations (RFMOs). Unregulated fishing involves conducting fishing operations in areas which do not have conservation measures put in place and cannot be effectively monitored because of the lack of regulations.
Fisheries subsidies: A subsidy is financial assistance paid by the government to support a particular activity, industry, or group. Subsidies are often provided to reduce start-up costs, stimulate production, or encourage consumption. In the case of fisheries subsidies, they enable fishing fleets to catch more fish by fishing further out in a body of water and fishing for longer periods of time.
Wetlands
Wetlands are ecosystems that are often saturated by enough surface or groundwater to sustain vegetation that is usually adapted to saturated soil conditions, such as cattails, bulrushes, red maples, wild rice, blackberries, cranberries, and peat moss. Because some varieties of wetlands are rich in minerals and nutrients and provide many of the advantages of both land and water environments, they contain diverse species and provide a distinct basis for the food chain. Wetland habitats contribute to environmental health and biodiversity. Wetlands are a nonrenewable resource on a human timescale and in some environments cannot ever be renewed. Recent studies indicate that global loss of wetlands could be as high as 87% since 1700 AD, with 64% of wetland loss occurring since 1900. Some loss of wetlands resulted from natural causes such as erosion, sedimentation, subsidence, and a rise in the sea level.
Wetlands provide environmental services for:
Food and habitat
Improving water quality
Commercial fishing
Floodwater reduction
Shoreline stabilization
Recreation
Resources in wetlands
Some of the world's most successful agricultural areas are wetlands that have been drained and converted to farmland for large-scale agriculture. Large-scale draining of wetlands also occurs for real estate development and urbanization. In contrast, in some cases wetlands are also flooded to be converted to recreational lakes or for hydropower generation. In some countries ranchers have also moved their property onto wetlands for grazing due to the nutrient-rich vegetation. Wetlands in South America also prove a fruitful resource for poachers, as animals with valuable hides such as jaguars, maned wolves, caimans, and snakes are drawn to wetlands. The effect of the removal of large predators on South American wetlands is still unknown.
Humans benefit from wetlands in indirect ways as well. Wetlands act as natural water filters: when runoff from either natural or man-made processes passes through, wetlands can have a neutralizing effect. If a wetland is located between an agricultural zone and a freshwater ecosystem, fertilizer runoff will be absorbed by the wetland and used to fuel its slow natural processes; by the time the water reaches the freshwater ecosystem, there will not be enough fertilizer left to cause the destructive algal blooms that poison freshwater ecosystems.
Non-natural causes of wetland degradation
Hydrologic alteration
drainage
dredging
stream channelization
ditching
levees
deposition of fill material
stream diversion
groundwater drainage
impoundment
Urbanization and urban development
Marinas/boats
Industrialization and industrial development
Agriculture
Silviculture/Timber harvest
Mining
Atmospheric deposition
To preserve the resources extracted from wetlands, current strategies are to rank wetlands and prioritize the conservation of those with more environmental services, to create more efficient irrigation for wetlands used for agriculture, and to restrict access to wetlands by tourists.
Groundwater
Water is an essential resource needed for survival. Water access has a profound influence on a society's prosperity and success. Groundwater is water that is in saturated zones underground; the upper surface of the saturated zone is called the water table. Groundwater is held in the pores and fractures of underground materials like sand, gravel and other rock; these rock materials are called aquifers. Groundwater can either flow naturally out of rock materials or can be pumped out. Groundwater supplies wells and aquifers for private, agricultural, and public use and is used by more than a third of the world's population every day for their drinking water. Globally there are 22.6 million cubic kilometers of groundwater available; of this, only 0.35 million cubic kilometers is renewable.
Groundwater as a non-renewable resource
Groundwater is considered to be a non-renewable resource because less than six percent of the water around the world is replenished and renewed on a human timescale of 50 years. People are already using non-renewable water that is thousands of years old; in areas like Egypt, water is being used that may have been renewed a million years ago and is not renewable on human timescales. Of the groundwater used for agriculture, 16–33% is non-renewable. It is estimated that since the 1960s groundwater extraction has more than doubled, which has increased groundwater depletion. Due to this increase in depletion, in some of the most depleted areas the use of groundwater for irrigation has become impossible or cost-prohibitive.
Environmental impacts
Overusing groundwater, old or young, can lower subsurface water levels and dry up streams, which could have a huge effect on ecosystems on the surface. When the most easily recoverable fresh groundwater is removed, this leaves a residual with inferior water quality, in part from induced leakage from the land surface, confining layers, or adjacent aquifers that contain saline or contaminated water. Worldwide, the magnitude of groundwater depletion from storage may be so large as to constitute a measurable contributor to sea-level rise.
Mitigation
Currently, societies respond to water-resource depletion by shifting management objectives from locating and developing new supplies to augmenting, conserving, and reallocating existing supplies. There are two different perspectives on groundwater depletion: the first considers depletion literally and simply as a reduction in the volume of water in the saturated zone, regardless of water quality considerations; the second views depletion as a reduction in the usable volume of fresh groundwater in storage.
Augmenting supplies can mean improving water quality or increasing water quantity. Depletion due to quality considerations can be overcome by treatment, whereas large volumetric depletion can only be alleviated by decreasing discharge or increasing recharge. Artificial recharge of storm flow and treated municipal wastewater has successfully reversed groundwater declines. In the future, improved infiltration and recharge technologies will be more widely used to maximize the capture of runoff and treated wastewater.
Resource depletion and the future
Earth Overshoot Day
Earth Overshoot Day (EOD) is the date when humanity's demand for ecological resources exceeds Earth's ability to regenerate these resources in a given year. EOD is calculated by the Global Footprint Network, an organization that develops annual impact reports, based on data about resource use in the previous year. EOD is announced each year on June 5, which is World Environment Day, and continues to fall earlier each year. For example, Earth Overshoot Day 2023 was August 2, compared to August 10 in 2010 and September 17 in 2000. The Global Footprint Network calculates Earth Overshoot Day by dividing world biocapacity by world ecological footprint and multiplying the result by 365 days (366 days during a leap year). World biocapacity refers to the total amount of natural resources that Earth can regenerate in a year. World ecological footprint refers to the total amount of resources that society consumes in a year, including things like energy, food, water, agricultural land, forest land, etc. Earth Overshoot Day can be calculated for Earth as a whole, but also for each country individually. For example, the 2023 country-specific overshoot day for a middle-income country like Morocco was December 22, while for a high-income country like the United States of America, which consumes far more resources, it was March 14. The goal is to push Earth Overshoot Day back far enough that humanity would be living within Earth's ecological means and not surpassing what it can sustainably provide each year.
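The calculation described above can be sketched as follows. This is a minimal illustration in Python: the biocapacity figure and the assumed overshoot ratio of roughly 1.7 are placeholders chosen to be consistent with an early-August date, not official Global Footprint Network data.

```python
from datetime import date, timedelta

def earth_overshoot_day(world_biocapacity: float,
                        world_ecological_footprint: float,
                        year: int) -> date:
    """Estimate Earth Overshoot Day by dividing world biocapacity by the
    world ecological footprint and scaling by the length of the year.
    Both inputs must be in the same unit (e.g. global hectares)."""
    days_in_year = 366 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 365
    day_of_year = int((world_biocapacity / world_ecological_footprint) * days_in_year)
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

# Assumed ratio: a footprint about 1.71 times biocapacity gives 2023-08-01,
# close to the reported 2023 overshoot day of August 2.
print(earth_overshoot_day(world_biocapacity=12.0,
                          world_ecological_footprint=12.0 * 1.71,
                          year=2023))
```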
The World Counts
According to The World Counts, a source which collects data from a number of organizations, research institutes, and news services and produces statistical countdown clocks that illustrate negative trends related to the environment and other global challenges, humanity is in trouble if current consumption patterns continue. At society's current consumption rate, approximately 1.8 Earths are needed in order to provide resources in a sustainable capacity, and there are just under 26 years until resources are depleted to a point where Earth's capacity to support life may collapse. It is also estimated that approximately 29% of all species on Earth are currently at risk of extinction. As well, 25 billion tons of resources have been extracted this year alone; this includes, but is not limited to, natural resources like fish, wood, metals, minerals, water, and energy. The World Counts shows that there are 15 years until Earth is exhausted of freshwater and 23 years until there are no more fish in the oceans. They also estimate that 15 billion trees are cut down every year while only 2 billion trees are planted, and that only 75 years remain until rainforests are completely gone.
Resource scarcity as a moral problem
Researchers who produced an update of the Club of Rome's Limits to Growth report find that many people deny the existence of the problem of scarcity, including many leading scientists and politicians. This may be due, for example, to an unwillingness to change one's own consumption patterns or to share scarce natural resources more equally, or to a psychological defence mechanism.
The scarcity of resources raises a central moral problem concerning the distribution and allocation of natural resources. Competition means that the most advanced get the most resources, which often means the developed West. The problem here is that the West has developed partly through colonial slave labour and violence, and partly through protectionist policies, which together have left many other, non-Western countries underdeveloped.
In the future, international cooperation in sharing scarce resources will become increasingly important. Where scarcity is concentrated on the non-renewable resources that play the most important role in meeting needs, the most essential element for the realisation of human rights is an adequate and equitable allocation of scarcity. Inequality, taken to its extreme, causes intense discontent, which can lead to social unrest and even armed conflict. Many experts believe that ensuring equitable development is the only sure way to a peaceful distribution of scarcity.
Another approach to resource depletion is a combined process of de-resourcification and resourcification: the former strives to put an end to the social processes of turning unsustainable things, for example non-renewable natural resources, into resources, while the latter strives instead to develop processes of turning sustainable things, for example renewable human resources, into resources.
See also
Ecological economics
Holocene extinction
Jevons paradox
Malthusianism
Overexploitation
Overfishing
Overpopulation
Peak coal
Peak copper
Peak gas
Peak gold
Peak minerals
Peak phosphorus
Peak uranium
Peak water
Peak wheat
Planetary boundaries
Progress trap
Resource war
References
Resource economics
Environmental issues
Anomie
In sociology, anomie or anomy is a social condition defined by an uprooting or breakdown of any moral values, standards or guidance for individuals to follow. Anomie is believed to possibly evolve from conflict of belief systems and causes a breakdown of social bonds between an individual and the community (both economic and primary socialization).
The term, commonly understood to mean normlessness, is believed to have been popularized by French sociologist Émile Durkheim in his influential book Suicide (1897). Émile Durkheim suggested that Protestants exhibited a greater degree of anomie than Catholics. However, Durkheim first introduced the concept of anomie in his 1893 work The Division of Labour in Society. Durkheim never used the term normlessness; rather, he described anomie as "derangement", and "an insatiable will." Durkheim used the term "the malady of the infinite" because desire without limit can never be fulfilled; it only becomes more intense.
For Durkheim, anomie arises more generally from a mismatch between personal or group standards and wider social standards, or from the lack of a social ethic, which produces moral deregulation and an absence of legitimate aspirations.
History
In 1893, Durkheim introduced the concept of anomie to describe the mismatch of collective guild labour to evolving societal needs when the guild was homogeneous in its constituency. He equated homogeneous (redundant) skills to mechanical solidarity whose inertia hindered adaptation. He contrasted this with the self-regulating behaviour of a division of labour based on differences in constituency, equated to organic solidarity, whose lack of inertia made it sensitive to needed changes.
Durkheim observed that the conflict between the evolved organic division of labour and the homogeneous mechanical type was such that one could not exist in the presence of the other. When solidarity is organic, anomie is impossible, as sensitivity to mutual needs promotes evolution in the division of labour. Durkheim contrasted the condition of anomie as being the result of a malfunction of organic solidarity after the transition to mechanical solidarity.
Durkheim's use of anomie was in regards to the phenomenon of industrialization—mass-regimentation that could not adapt due to its own inertia. More specifically, its resistance to change causes disruptive cycles of collective behavior (e.g. economics) due to the necessity of a prolonged buildup of sufficient force or momentum to overcome the inertia.
Later, in 1897, in his studies of suicide, Durkheim associated anomie with the influence of a lack of norms or of norms that were too rigid. However, such normlessness or norm-rigidity was a symptom of anomie, caused by the lack of differential adaptation that would enable norms to evolve naturally due to self-regulation, either to develop norms where none existed or to change norms that had become rigid and obsolete. Durkheim found that Protestant communities have noticeably higher suicide rates than Catholic ones, and attributed this to the individualism and lack of social cohesion prevalent amongst Protestants, creating a poorly integrated society and making Protestants less likely to develop the close communal ties that would be crucial in times of hardship. Conversely, he states that the Catholic faith binds individuals more strongly together and builds strong social ties, decreasing the risk of suicide and alienation. In this, Durkheim argued that religion is much more important than culture in regards to anomic suicide. This allowed Durkheim to successfully tie social cohesion to suicide rates.
In 1938, Robert K. Merton linked anomie with deviance, arguing that the discontinuity between culture and structure has the dysfunctional consequence of leading to deviance within society. He described five types of deviance in terms of the acceptance or rejection of social goals and the institutionalized means of achieving them.
Etymology
The term anomie—"a reborrowing with French spelling of anomy"—comes from , namely the privative alpha prefix (a-, 'without'), and nomos. The Greeks distinguished between nomos and arché. For example, a monarch is a single ruler but he may still be subject to, and not exempt from, the prevailing laws, i.e. nomos. In the original city state democracy, the majority rule was an aspect of arché because it was a rule-based, customary system, which may or may not make laws, i.e. nomos. Thus, the original meaning of anomie defined anything or anyone against or outside the law, or a condition where the current laws were not applied, resulting in a state of illegitimacy or lawlessness.
The contemporary English understanding of the word anomie can accept greater flexibility in the word "norm", and some have used the idea of normlessness to reflect a similar situation to the idea of anarchy. However, as used by Émile Durkheim and later theorists, anomie is a reaction against or a retreat from the regulatory social controls of society, and is a completely separate concept from anarchy, which consists of the absence of the roles of rulers and submitted.
Social disorder
Nineteenth-century French pioneer sociologist Émile Durkheim borrowed the term anomie from French philosopher Jean-Marie Guyau. Durkheim used it in his influential book Suicide (1897) in order to outline the social (and not individual) causes of suicide, characterized by a rapid change of the standards or values of societies (often erroneously referred to as normlessness), and an associated feeling of alienation and purposelessness. He believed that anomie is common when the surrounding society has undergone significant changes in its economic fortunes, whether for better or for worse and, more generally, when there is a significant discrepancy between the ideological theories and values commonly professed and what was actually achievable in everyday life. This was contrary to previous theories on suicide which generally maintained that suicide was precipitated by negative events in a person's life and their subsequent depression.
In Durkheim's view, traditional religions often provided the basis for the shared values which the anomic individual lacks. Furthermore, he argued that the division of labor that had been prevalent in economic life since the Industrial Revolution led individuals to pursue egoistic ends rather than seeking the good of a larger community. Robert King Merton also adopted the idea of anomie to develop strain theory, defining it as the discrepancy between common social goals and the legitimate means to attain those goals. In other words, an individual suffering from anomie would strive to attain the common goals of a specific society yet would not be able to reach these goals legitimately because of the structural limitations in society. As a result, the individual would exhibit deviant behavior. Friedrich Hayek notably uses the word anomie with this meaning.
According to one academic survey, psychometric testing confirmed a link between anomie and academic dishonesty among university students, suggesting that universities needed to foster codes of ethics among students in order to curb it. In another study, anomie was seen as a "push factor" in tourism.
As an older variant, the 1913 Webster's Dictionary reports use of the word anomie as meaning "disregard or violation of the law." However, anomie as a social disorder is not to be confused with anarchy: proponents of anarchism claim that anarchy does not necessarily lead to anomie and that hierarchical command actually increases lawlessness. Some anarcho-primitivists argue that complex societies, particularly industrial and post-industrial societies, directly cause conditions such as anomie by depriving the individual of self-determination and a relatively small reference group to relate to, such as the band, clan or tribe.
In 2003, José Soltero and Romeo Saravia analyzed the concept of anomie in regards to Protestantism and Catholicism in El Salvador. Massive displacement of population in the 1970s, economic and political crises, as well as cycles of violence, are credited with radically changing the religious composition of the country, rendering it one of the most Protestant countries in Latin America. According to Soltero and Saravia, the rise of Protestantism is conventionally claimed to be caused by a Catholic failure to "address the spiritual needs of the poor" and the Protestant "deeper quest for salvation, liberation, and eternal life". However, their research does not support these claims, and showed that Protestantism is not more popular amongst the poor. Their findings do confirm the assumptions of anomie, with Catholic communities of El Salvador enjoying high social cohesion, while the Protestant communities have been associated with poorer social integration and internal migration, and tend to be places deeply affected by the Salvadoran Civil War. Additionally, Soltero and Saravia found that Salvadoran Catholicism is tied to social activism, liberation theology and the political left, as opposed to the "right wing political orientation, or at least a passive, personally inward orientation, expressed by some Protestant churches". They conclude that their research contradicts the theory that Protestantism responds to the spiritual needs of the poor more adequately than Catholicism, while also disproving the claim that Protestantism appeals more to women.
The study by Soltero and Saravia also found a link between Protestantism and lack of access to healthcare.
Synnomie
Freda Adler coined synnomie as the opposite of anomie. Using Émile Durkheim's concept of social solidarity and collective consciousness, Adler defined synnomie as "a congruence of norms to the point of harmonious accommodation".
Adler described societies in a synnomie state as "characterized by norm conformity, cohesion, intact social controls and norm integration". Social institutions such as the family, religion and communities, largely serve as sources of norms and social control to maintain a synnomic society.
In culture
In Albert Camus's existentialist novel The Stranger, Meursault—the bored, alienated protagonist—struggles to construct an individual system of values as he responds to the disappearance of the old. He exists largely in a state of anomie, as seen from the apathy evinced in the opening lines: "Today mum died. Or maybe yesterday, I don't know".
Fyodor Dostoyevsky expresses a similar concern about anomie in his novel The Brothers Karamazov. The Grand Inquisitor remarks that in the absence of God and immortal life, everything would be lawful. In other words, that any act becomes thinkable, that there is no moral compass, which leads to apathy and detachment.
In The Ink Black Heart of the Cormoran Strike novels, written by J. K. Rowling under the pseudonym Robert Galbraith, the main antagonist goes by the online handle of "Anomie".
See also
References
Sources
Durkheim, Émile. 1893. The Division of Labour in Society.
Marra, Realino. 1987. Suicidio, diritto e anomia. Immagini della morte volontaria nella civiltà occidentale. Napoli: Edizioni Scientifiche Italiane.
—— 1989. "Geschichte und aktuelle Problematik des Anomiebegriffs." Zeitschrift für Rechtssoziologie 11(1):67–80.
Orru, Marco. 1983. "The Ethics of Anomie: Jean Marie Guyau and Émile Durkheim." British Journal of Sociology 34(4):499–518.
Riba, Jordi. 1999. La Morale Anomique de Jean-Marie Guyau. L'Harmattan. .
External links
Deflem, Mathieu. 2015. "Anomie: History of the Concept." pp. 718–721 in International Encyclopedia of Social and Behavioral Sciences, Second Edition (Volume 1), edited by James D. Wright. Oxford, UK: Elsevier.
"Anomie" discussed at the Émile Durkheim Archive.
Featherstone, Richard, and Mathieu Deflem. 2003. "Anomie and Strain: Context and Consequences of Merton's Two Theories." Sociological Inquiry 73(4):471–489, 2003.
Deviance (sociology)
Émile Durkheim
Social philosophy
Sociological terminology
Sociological theories
Socio-ecological system
A social-ecological system consists of a 'bio-geo-physical' unit and its associated social actors and institutions. Social-ecological systems are complex and adaptive and delimited by spatial or functional boundaries surrounding particular ecosystems and their context problems.
Definitions
A social-ecological system (SES) can be defined as: (p. 163)
A coherent system of biophysical and social factors that regularly interact in a resilient, sustained manner;
A system that is defined at several spatial, temporal, and organisational scales, which may be hierarchically linked;
A set of critical resources (natural, socio-economic, and cultural) whose flow and use is regulated by a combination of ecological and social systems; and
A perpetually dynamic, complex system with continuous adaptation.
Scholars have used the concept of social-ecological systems to emphasise humans as part of nature and to stress that the delineation between social systems and ecological systems is artificial and arbitrary. While resilience has somewhat different meaning in social and ecological context, the SES approach holds that social and ecological systems are linked through feedback mechanisms, and that both display resilience and complexity.
Theoretical foundations
Social-ecological systems are based on the concept that humans are a part of—not separate from—nature. This concept, which holds that the delineation between social systems and natural systems is arbitrary and artificial, was first put forth by Berkes and Folke, and its theory was further developed by Berkes et al. More recent research into social-ecological system theory has pointed to social-ecological keystones as critical to the structure and function of these systems, and to biocultural diversity as essential to the resilience of these systems.
Integrative approaches
Through to the final decades of the twentieth century, the point of contact between social sciences and natural sciences was very limited in dealing with social-ecological systems. Just as mainstream ecology had tried to exclude humans from the study of ecology, many social science disciplines had ignored the environment altogether and limited their scope to humans. Although some scholars (e.g. Bateson 1979) had tried to bridge the nature-culture divide, the majority of studies focused on investigating processes within the social domain only, treating the ecosystem largely as a "black box" and assuming that if the social system performs adaptively or is well organised institutionally it will also manage the environmental resource base in a sustainable fashion.
This changed through the 1970s and 1980s with the rise of several subfields associated with the social sciences but explicitly including the environment in the framing of the issues. These subfields are:
Environmental ethics, which arose from the need to develop a philosophy of relations between humans and their environment, because conventional ethics only applied to relations among people.
Political ecology, which expands ecological concerns to respond to the inclusion of cultural and political activity within an analysis of ecosystems that are significantly but not always entirely socially constructed.
Environmental history which arose from the rich accumulation of material documenting relationships between societies and their environment.
Ecological economics which examines the link between ecology and economics by bridging the two disciplines to promote an integrated view of economics within the ecosystem.
Common property which examines the linkages between resource management and social organisation, analysing how institutions and property rights systems deal with the dilemma of the "tragedy of the commons".
Traditional ecological knowledge, which refers to ecological understanding built, not by experts, but by people who live and use the resources of a place.
Each of the six areas summarised is a bridge spanning different combinations of natural science and social science thinking.
Conceptual foundations and origins
Elinor Ostrom and her many co-researchers developed a comprehensive "Social-Ecological Systems (SES) framework", which includes much of the theory of common-pool resources and collective self-governance. It draws heavily on systems ecology and complexity theory. The studies of SES include some central societal concerns (e.g. equity and human wellbeing) that have traditionally received little attention in complex adaptive systems theory, and there are areas of complexity theory (e.g. quantum physics) that have little direct relevance for understanding SES.
SES theory incorporates ideas from theories relating to the study of resilience, robustness, sustainability, and vulnerability (e.g. Levin 1999, Berkes et al. 2003, Gunderson and Holling 2002, Norberg and Cumming 2008), but it is also concerned with a wider range of SES dynamics and attributes than any one of these terms implies. While SES theory draws on a range of discipline-specific theories, such as island biogeography, optimal foraging theory, and microeconomic theory, it is much broader than any of these individual theories alone.
SES theory emerged from a combination of disciplines and the notion of complexity developed through the work of many scholars, including the Santa Fe Institute (2002). Due to the social context in which SES research was placed, and the possibility of SES research translating into recommendations that may affect real people, SES research was seen as more "self-conscious" and "pluralistic" in its perspectives than complexity theory.
Studying SESs from a complex system perspective attempts to link different disciplines into a body of knowledge that is applicable to serious environmental problems. Management processes in the complex systems can be improved by making them adaptive and flexible, able to deal with uncertainty and surprise, and by building capacity to adapt to change. SESs are both complex and adaptive, meaning that they require continuous testing, learning about, and developing knowledge and understanding in order to cope with change and uncertainty.
A complex system differs from a simple system in that it has a number of attributes that cannot be observed in simple systems, such as nonlinearity, uncertainty, emergence, scale, and self-organisation.
Nonlinearity
Nonlinearity is related to fundamental uncertainty. It generates path dependency, which refers to local rules of interaction that change as the system evolves and develops. A consequence of path dependency is the existence of multiple basins of attraction in ecosystem development and the potential for threshold behaviour and qualitative shifts in system dynamics under changing environmental influences. An example of non-linearity in socio-ecological systems is illustrated by the figure "Conceptual Model of Socioecological Drivers of Change".
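The threshold behaviour and multiple basins of attraction mentioned above can be illustrated with a deliberately simple toy model that is not drawn from the SES literature cited here: the bistable system dx/dt = x(1 - x)(x - a) has stable states at 0 and 1 separated by a threshold at a, so trajectories starting on either side of the threshold settle into different states (path dependency).

```python
def simulate(x0: float, a: float = 0.4, dt: float = 0.01, steps: int = 5000) -> float:
    """Euler-integrate the bistable toy model dx/dt = x * (1 - x) * (x - a).
    States near 0 and 1 are stable attractors; x = a is the threshold
    separating their basins of attraction."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1.0 - x) * (x - a)
    return x

# Initial conditions just below and just above the threshold a = 0.4
# end up in different stable states, illustrating threshold behaviour.
print(simulate(0.39))  # approaches 0
print(simulate(0.41))  # approaches 1
```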
Emergence
Emergence is the appearance of behaviour that could not be anticipated from knowledge of the parts of the system alone.
Scale
Scale is important when dealing with complex systems. In a complex system many subsystems can be distinguished; and since many complex systems are hierarchic, each subsystem is nested in a larger subsystem etc. For example, a small watershed may be considered an ecosystem, but it is a part of a larger watershed that can also be considered an ecosystem and a larger one that encompasses all the smaller watersheds. Phenomena at each level of the scale tend to have their own emergent properties, and different levels may be coupled through feedback relationships. Therefore, complex systems should always be analysed or managed simultaneously at different scales.
Self organisation
Self organisation is one of the defining properties of complex systems. The basic idea is that open systems will reorganise at critical points of instability. Holling's adaptive renewal cycle is an illustration of reorganisation that takes place within cycles of growth and renewal. The self-organisation principle, operationalised through feedback mechanisms, applies to many biological systems, social systems and even to mixtures of simple chemicals. High-speed computers and nonlinear mathematical techniques help simulate self-organisation by yielding complex results and yet strangely ordered effects. The direction of self-organisation will depend on such things as the system's history; it is path dependent and difficult to predict.
Examples of conceptual framework for analysis
There are several conceptual frameworks developed in relation to the resilience approach.
A framework that focuses on knowledge and understanding of ecosystem dynamics, how to navigate it through management practices, institutions, organisations and social networks and how they relate to drivers of change (Picture A).
An alternative conceptual model illustrates how it is meaningful to consider a wide range of socio-ecological system properties potentially influencing agricultural intensification, rather than singling out macro-drivers such as population pressure as the primary metric of agrarian change and intensification (Picture B).
A conceptual model in relation to the robustness of social-ecological systems. Here the resource could be water or a fishery, and the resource users could be farmers irrigating or inshore fishermen. Public infrastructure providers include, for example, local users' associations and government bureaus, and public infrastructure includes institutional rules and engineering works. The numbers refer to links between the entities and are exemplified in the source of the figure (Picture C).
MuSIASEM or Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism. This is a method of accounting used to analyse social-ecosystems and to simulate possible patterns of development.
Role of traditional knowledge
Berkes and colleagues distinguish four sets of elements which can be used to describe social-ecological system characteristics and linkages:
Ecosystems
Local knowledge
People and technology
Property rights institutions
Knowledge acquisition of SESs is an ongoing, dynamic learning process, and such knowledge often emerges with people's institutions and organisations. To remain effective it requires institutional frameworks and social networks to be nested across scales. It is thus the communities which interact with ecosystems on a daily basis and over long periods of time that possess the most relevant knowledge of resource and ecosystem dynamics, together with associated management practices. Some scholars have suggested that management and governance of SESs may benefit from a combination of different knowledge systems; others have attempted to import such knowledge into the scientific knowledge field. There are also those who have argued that it would be difficult to separate these knowledge systems from their institutional and cultural contexts, and those who have questioned the role of traditional and local knowledge systems in the current situation of pervasive environmental change and globalised societies. Other scholars have claimed that valuable lessons can be extracted from such systems for complex system management; lessons that also need to account for interactions across temporal and spatial scales and organisational and institutional levels, and in particular during periods of rapid change, uncertainty and system reorganisation.
Adaptive cycle
The adaptive cycle, originally conceptualised by Holling (1986), interprets the dynamics of complex ecosystems in response to disturbance and change. In terms of its dynamics, the adaptive cycle has been described as moving slowly from exploitation (r) to conservation (K), then very rapidly from K to release (Omega), continuing rapidly to reorganisation (alpha) and back to exploitation (r). Depending on the particular configuration of the system, it can then begin a new adaptive cycle or alternatively it may transform into a new configuration, shown as an exit arrow. The adaptive cycle is one of the five heuristics used to understand social-ecological system behaviour; the other four are resilience, panarchy, transformability, and adaptability. The adaptive cycle is of considerable conceptual appeal, and it is claimed to be generally applicable to ecological and social systems as well as to coupled social-ecological systems. Adaptability is the capacity of a social-ecological system to learn and adjust to both internal and external processes. Transformability is the capacity of a system to transform into a completely new system, when ecological, economic, or social structures make the current system unsustainable. Adaptability and transformability are prerequisites for resilience.
The two main dimensions that determine changes in an adaptive cycle are connectedness and potential. The connectedness dimension is represented by the horizontal axis of the cycle's visual depiction and stands for the system's ability to internally control its own destiny. It "reflects the strength of internal connections that mediate and regulate the influences between inside processes and the outside world" (p. 50). The potential dimension is represented by the vertical axis, and stands for the "inherent potential of a system that is available for change" (p. 393). Social or cultural potential can be characterised by the "accumulated networks of relationships-friendship, mutual respect, and trust among people and between people and institutions of governance" (p. 49). According to the adaptive cycle heuristic, the levels of both dimensions differ during the course of the cycle along the four phases. The adaptive cycle thus predicts that the four phases of the cycle can be distinguished based on distinct combinations of high or low potential and connectedness.
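One common reading of the adaptive-cycle figure assigns each phase a distinct combination of the two dimensions. The mapping in the Python sketch below is an illustrative assumption rather than something stated in the source, and is intended only to make the "distinct combinations" claim concrete.

def adaptive_cycle_phase(high_potential, high_connectedness):
    # Illustrative mapping from combinations of the two dimensions to the four phases.
    phases = {
        (False, False): "r (exploitation): low potential, low connectedness",
        (True, True): "K (conservation): high potential, high connectedness",
        (False, True): "Omega (release): potential is lost while connectedness is still high",
        (True, False): "alpha (reorganisation): high potential, low connectedness",
    }
    return phases[(high_potential, high_connectedness)]

for combination in [(False, False), (True, True), (False, True), (True, False)]:
    print(adaptive_cycle_phase(*combination))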
The notion of panarchy and adaptive cycles has become an important theoretical lens to describe the resilience of ecological systems and, more recently, social-ecological systems. Although panarchy theory originates in ecology, it has found widespread applications in other disciplines. For example, in management, Wieland (2021) describes a panarchy that represents the planetary, political-economic, and supply chain levels. Hereby, the panarchical understanding of the supply chain leads to a social-ecological interpretation of supply chain resilience.
Adaptive governance
The resilience of social-ecological systems is related to the degree of shock that the system can absorb while remaining within a given state. The concept of resilience is a promising tool for analysing adaptive change towards sustainability because it provides a way of analysing how to manipulate stability in the face of change.
In order to emphasise the key requirements of a social-ecological system for successful adaptive governance, Folke and colleagues contrasted case studies from the Florida Everglades and the Grand Canyon. Both are complex social-ecological systems that have experienced unwanted degradation of their ecosystem services, but they differ substantially in terms of their institutional make-up.
The governance structure in the Everglades is dominated by the interests of agriculture and environmentalists, who have been in conflict throughout its history over the need to conserve the habitat at the expense of agricultural productivity. Here, few feedbacks between the ecological system and the social system exist, and the SES is unable to innovate and adapt (the α-phase of reorganisation and growth).
In contrast, different stakeholders have formed an adaptive management workgroup in the case of the Grand Canyon, using planned management interventions and monitoring to learn about changes occurring in the ecosystem, including the best ways to subsequently manage them. Such an arrangement in governance creates the opportunity for institutional learning to take place, allowing for a successful period of reorganisation and growth. Such an approach to institutional learning is becoming more common as NGOs, scientists and communities collaborate to manage ecosystems.
Links to sustainable development
The concept of social-ecological systems has been developed in order to provide both a promising scientific gain as well as impact on problems of sustainable development. A close conceptual and methodological relation exists between the analysis of social-ecological systems, complexity research, and transdisciplinarity. These three research concepts are based on similar ideas and models of reasoning. Moreover, research on social-ecological systems almost always uses a transdisciplinary mode of operation in order to achieve an adequate problem orientation and to ensure integrative results. Problems of sustainable development are intrinsically tied to the social-ecological system defined to tackle them. This means that scientists from the relevant scientific disciplines or fields of research, as well as the involved societal stakeholders, have to be regarded as elements of the social-ecological system in question.
See also
Relational mobility
References
Further reading
Aravindakshan, S., Krupnik, T.J., Groot, J.C., Speelman, E.N., Amjath-Babu, T.S. and Tittonell, P., 2020. Multi-level socioecological drivers of agrarian change: Longitudinal evidence from mixed rice-livestock-aquaculture farming systems of Bangladesh. Agricultural Systems, 177, p. 102695.
Ecology Info Center, 2022. What is Panarchy? http://environment-ecology.com/general-systems-theory/535-panarchy.html.
Gunderson, L. and Holling, C.S. (2002). Panarchy: understanding transformations in human and natural systems. Island Press, Washington, D.C., USA.
Maclean K, Ross H, Cuthill M, Rist P. 2013. Healthy country, healthy people: An Australian Aboriginal organisation's adaptive governance to enhance its social-ecological system. Geoforum. 45:94–105.
Ecosystems
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants.
Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy, and stomatal function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.
Aims
The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.
First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while still others are used to attract pollinators or herbivores to spread ripe seeds.
Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from cells of animals, and which lead to major differences in the way that plant life behaves and responds compared with animal life. For example, plant cells have a cell wall which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.
Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.
Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.
Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
Biochemistry of plants
The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary.
Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.
Constituent elements
Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey.
Nutrients essential to plants include the macronutrients nitrogen, phosphorus, potassium, calcium, magnesium and sulfur, and micronutrients such as iron, manganese, zinc, copper, boron, molybdenum, chlorine and nickel. Uses within plants are generalized.
Pigments
Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.
Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light.
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occur in plants with anthocyanins. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in plants which possess them, but there is some preliminary evidence that they may have fungicidal properties.
Signals and regulators
Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.
Plant hormones
Plant hormones, known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, that occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant and production is not limited to specific locations.
Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth; affecting processes in plants from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death.
The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.
Photomorphogenesis
While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis). The use of light to control structural development is called photomorphogenesis, and is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light.
Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.
The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.
Photoperiodism
Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.
Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (long night) does not flower if a flash of phytochrome-activating light is used on the plant during the night.
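A simplified, hypothetical model of this rule can be written down explicitly. The Python sketch below is not from the source: it treats flowering in a short-day (long-night) plant as depending only on whether the longest uninterrupted dark period exceeds a critical night length, with a night-break of light splitting the dark period in two. The 12-hour threshold is an arbitrary illustrative value; real critical night lengths vary by species.

def short_day_plant_flowers(dark_hours, critical_night_length=12.0, night_break_hour=None):
    # With no night-break the whole dark period is uninterrupted; a flash of light
    # at night_break_hour splits it into two shorter periods.
    if night_break_hour is None:
        longest_dark = dark_hours
    else:
        longest_dark = max(night_break_hour, dark_hours - night_break_hour)
    return longest_dark >= critical_night_length

print(short_day_plant_flowers(14))                      # long uninterrupted night: flowers
print(short_day_plant_flowers(14, night_break_hour=7))  # night-break: does not flower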
Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia (Euphorbia pulcherrima).
Environmental physiology
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.
Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with a pressure bomb) and the stress of drought or inundation, exchange of gases with the atmosphere, as well as the cycling of nutrients such as nitrogen and carbon.
Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.
While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack pain receptors, nerves, and a brain, and, by extension, consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants such as the venus flytrap or touch-me-not are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding its abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since it lacks any nervous system. The primary reason for this is that, unlike the members of the animal kingdom whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death.
Tropisms and nastic movements
Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sun light, is called a tropism. A response to a nondirectional stimulus, such as temperature or humidity, is a nastic movement.
Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and production of one or more plant hormones.
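The geometry behind this bending can be sketched with a back-of-the-envelope relation; this is standard mechanics of differential growth, not a formula given in the source. If the two sides of an organ of width $w$ and initial length $L$ elongate by strains $\varepsilon_1 > \varepsilon_2$, the organ curves toward the slower-growing side with curvature approximately

\kappa \approx \frac{\varepsilon_1 - \varepsilon_2}{w}

For example, a 1% difference in elongation across a stem 1 mm wide gives a radius of curvature of roughly $w/(\varepsilon_1 - \varepsilon_2) = 0.001\,\mathrm{m} / 0.01 = 0.1$ m, which is why even small growth asymmetries produce visible bending.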
Nastic movements result from differential cell growth (e.g. epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g. nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus flytrap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
Plant disease
Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms.
Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.
One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.
History
Early history
Francis Bacon published one of the first plant physiology experiments in 1627 in the book, Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water.
Stephen Hales is considered the Father of Plant Physiology for the many experiments in the 1727 book, Vegetable Staticks; though Julius von Sachs unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.
Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb them readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production and as a hobby.
Economic applications
Food production
In horticulture and agriculture, along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include climatic requirements, fruit drop, nutrition, ripening and fruit set. The production of food crops also hinges on the study of plant physiology, covering such topics as optimal planting and harvesting times, post-harvest storage of plant products for human consumption, and the production of secondary products like drugs and cosmetics.
Crop physiology steps back and looks at a field of plants as a whole, rather than looking at each plant individually. Crop physiology looks at how plants respond to each other and how to maximize results like food production through determining things like optimal planting density.
See also
Biomechanics
Hyperaccumulator
Phytochemistry
Plant anatomy
Plant morphology
Plant secondary metabolism
Branches of botany
References
Further reading
Lincoln Taiz, Eduardo Zeiger, Ian Max Møller, Angus Murphy: Fundamentals of Plant Physiology. Sinauer, 2018.
Branches of botany
Postpositivism
Postpositivism or postempiricism is a metatheoretical stance that critiques and amends positivism and has impacted theories and practices across philosophy, social sciences, and various models of scientific inquiry. While positivists emphasize independence between the researcher and the researched person (or object), postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
Philosophy
Epistemology
Postpositivists believe that human knowledge is based not on a priori assessments from an objective individual, but rather upon human conjectures. As human knowledge is thus unavoidably conjectural, the assertion of these conjectures is warranted, or more specifically justified, by a set of warrants, which can be modified or withdrawn in the light of further investigation. However, postpositivism is not a form of relativism, and generally retains the idea of objective truth.
Ontology
Postpositivists believe that a reality exists, but, unlike positivists, they believe reality can be known only imperfectly. Postpositivists also draw from social constructionism in forming their understanding and definition of reality.
Axiology
While positivists believe that research is or can be value-free or value-neutral, postpositivists take the position that bias is undesired but inevitable, and therefore the investigator must work to detect and try to correct it. Postpositivists work to understand how their axiology (i.e. values and beliefs) may have influenced their research, including through their choice of measures, populations, questions, and definitions, as well as through their interpretation and analysis of their work.
History
Historians identify two types of positivism: classical positivism, an empirical tradition first described by Henri de Saint-Simon and Auguste Comte in the first half of the 19th century, and logical positivism, which is most strongly associated with the Vienna Circle, which met near Vienna, Austria, in the 1920s and 1930s. Postpositivism is the name D.C. Phillips gave to a group of critiques and amendments which apply to both forms of positivism.
One of the first thinkers to criticize logical positivism was Karl Popper. He advanced falsification in lieu of the logical positivist idea of verificationism. Falsificationism argues that it is impossible to verify that beliefs about universals or unobservables are true, though it is possible to reject false beliefs if they are phrased in a way amenable to falsification.
In 1965, Karl Popper and Thomas Kuhn debated the issue, as Kuhn's theory did not incorporate this idea of falsification. The debate has influenced contemporary research methodologies.
Thomas Kuhn is credited with having popularized and at least in part originated the post-empiricist philosophy of science. Kuhn's idea of paradigm shifts offers a broader critique of logical positivism, arguing that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. The work of philosophers Nancy Cartwright and Ian Hacking is representative of these ideas. Postpositivism of this type is described in social science guides to research methods.
Structure of a postpositivist theory
Robert Dubin describes the basic components of a postpositivist theory as being composed of basic "units" or ideas and topics of interest, "laws of interactions" among the units, and a description of the "boundaries" for the theory. A postpositivist theory also includes "empirical indicators" to connect the theory to observable phenomena, and hypotheses that are testable using the scientific method.
According to Thomas Kuhn, a postpositivist theory can be assessed on the basis of whether it is "accurate", "consistent", "has broad scope", "parsimonious", and "fruitful".
Main publications
Karl Popper (1934) Logik der Forschung, rewritten in English as The Logic of Scientific Discovery (1959)
Thomas Kuhn (1962) The Structure of Scientific Revolutions
Karl Popper (1963) Conjectures and Refutations
Ian Hacking (1983) Representing and Intervening
Andrew Pickering (1984) Constructing Quarks
Peter Galison (1987) How Experiments End
Nancy Cartwright (1989) Nature's Capacities and Their Measurement
See also
Antipositivism
Philosophy of science
Scientism
Sociology of scientific knowledge
Notes
References
Alexander, J.C. (1995), Fin De Siecle Social Theory: Relativism, Reductionism and The Problem of Reason, London; Verso.
Phillips, D.C. & Nicholas C. Burbules (2000): Postpositivism and Educational Research. Lanham & Boulder: Rowman & Littlefield Publishers.
Zammito, John H. (2004): A Nice Derangement of Epistemes. Post-positivism in the study of Science from Quine to Latour. Chicago & London: The University of Chicago Press.
Popper, K. (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, London; Routledge.
Moore, R. (2009), Towards the Sociology of Truth, London; Continuum.
External links
Positivism and Post-positivism
Positivism
Metatheory of science
Epistemological theories
Syntrophy
In biology, syntrophy, syntrophism, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the cooperative interaction between at least two microbial species to degrade a single substrate. This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients, growth factors, or substrates provided by the other(s).
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that a syntrophic relationship is based primarily on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tracts of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation of these organic compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer. The interspecies electron transfer can be carried out via three ways: interspecies hydrogen transfer, interspecies formate transfer and interspecies direct electron transfer. Reverse electron transport is prominent in syntrophic metabolism.
The metabolic reactions and the energy involved in syntrophic degradation with H2 consumption are illustrated by the following example:
A classical syntrophic relationship can be illustrated by the activity of ‘Methanobacillus omelianskii’. It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon, "organism M.o.H.", and a Gram-negative bacterium, "organism S"; together they oxidize ethanol to acetate and methane via interspecies hydrogen transfer. Organism S is an obligate anaerobic bacterium that uses ethanol as an electron donor, whereas M.o.H. is a methanogen that oxidizes hydrogen gas to produce methane.
Organism S: 2 Ethanol + 2 H2O → 2 Acetate− + 2 H+ + 4 H2 (ΔG°' = +9.6 kJ per reaction)
Strain M.o.H.: 4 H2 + CO2 → Methane + 2 H2O (ΔG°' = -131 kJ per reaction)
Co-culture: 2 Ethanol + CO2 → 2 Acetate− + 2 H+ + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by organism S is made possible by the methanogen M.o.H., which consumes the hydrogen produced by organism S and thereby turns the positive Gibbs free energy change into a negative one. This situation favors growth of organism S and also provides energy for the methanogen, which consumes the hydrogen. Down the line, acetate accumulation is also prevented by a similar syntrophic relationship. Syntrophic degradation of substrates like butyrate and benzoate can also happen without hydrogen consumption.
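The energetics of this hand-off can be made explicit. The relations below are standard thermodynamics rather than equations quoted from the source: the free-energy change actually experienced in the culture depends on concentrations and the hydrogen partial pressure through the reaction quotient, and the standard free energy of the coupled reaction is simply the sum of the two partial reactions.

\Delta G = \Delta G^{\circ\prime} + RT \ln Q,
\qquad Q = \frac{[\mathrm{Acetate^-}]^{2}\,[\mathrm{H^+}]^{2}\,p_{\mathrm{H_2}}^{4}}{[\mathrm{Ethanol}]^{2}},
\qquad \Delta G^{\circ\prime}_{\text{co-culture}} = \Delta G^{\circ\prime}_{\text{organism S}} + \Delta G^{\circ\prime}_{\text{M.o.H.}}

Because $p_{\mathrm{H_2}}$ enters $Q$ raised to the fourth power, keeping the hydrogen partial pressure very low (on the order of $10^{-4}$ atm or below) makes $RT \ln Q$ strongly negative, so an ethanol oxidation that is endergonic under standard conditions becomes exergonic in situ.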
An example of propionate and butyrate degradation with interspecies formate transfer carried out by the mutual system of Syntrophomonas wolfei and Methanobacterium formicicum:
Propionate + 2 H2O + 2 CO2 → Acetate− + 3 Formate− + 3 H+ (ΔG°' = +65.3 kJ/mol)
Butyrate + 2 H2O + 2 CO2 → 2 Acetate− + 3 Formate− + 3 H+ (ΔG°' = +38.5 kJ/mol)
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H2 or formate, has been reported in co-culture systems of Geobacter metallireducens with Methanosaeta or Methanosarcina.
Examples
In ruminants
The defining feature of ruminants, such as cows and goats, is a stomach called a rumen. The rumen contains billions of microbes, many of which are syntrophic. Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short-chain fatty acids and hydrogen. The accumulating hydrogen inhibits the microbes' ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. In addition, fermentative bacteria gain maximum energy yield when protons are used as the electron acceptor with concurrent H2 production. Hydrogen-consuming organisms include methanogens, sulfate-reducers, acetogens, and others.
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched-chain and aromatic fatty acids, cannot directly be used in methanogenesis. In acetogenesis processes, these products are oxidized to acetate and H2 by obligate proton-reducing bacteria in syntrophic relationship with methanogenic archaea, as a low H2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0).
Biodegradation of pollutants
Syntrophic microbial food webs play an integral role in bioremediation, especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological concern and can be effectively mediated through syntrophic degradation by complete mineralization of alkane, aliphatic and hydrocarbon chains. The hydrocarbons of the oil are broken down after activation by fumarate, a chemical compound that is regenerated by other microorganisms. Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling.
Syntrophic microbial communities are key players in the breakdown of aromatic compounds, which are common pollutants. The degradation of aromatic benzoate to methane produces intermediate compounds such as formate, acetate, and H2. The buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens, which makes the degradation process thermodynamically favorable.
Degradation of amino acids
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through the process of syntrophy. Microbes growing poorly on the amino acid substrates alanine, aspartate, serine, leucine, valine, and glycine can have their rate of growth dramatically increased by syntrophic H2 scavengers. These scavengers, like Methanospirillum and Acetobacterium, metabolize the H2 waste produced during amino acid breakdown, preventing a toxic build-up. Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate. Species like Desulfovibrio employ this method. Amino acid-fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway.
Anaerobic digestion
Effective syntrophic cooperation between propionate-oxidizing bacteria, acetate-oxidizing bacteria and H2/acetate-consuming methanogens is necessary to successfully carry out anaerobic digestion to produce biomethane.
Examples of syntrophic organisms
Syntrophomonas wolfei
Syntrophobacter fumaroxidans
Pelotomaculum thermopropionicum
Syntrophus aciditrophicus
Syntrophus buswellii
Syntrophus gentianae
References
Biological interactions
Food chains
Socratic questioning
Socratic questioning (or Socratic maieutics) is an educational method named after Socrates that focuses on discovering answers by asking questions of students. According to Plato, Socrates believed that "the disciplined practice of thoughtful questioning enables the scholar/student to examine ideas and be able to determine the validity of those ideas". Plato explains how, in this method of teaching, the teacher assumes an ignorant mindset in order to compel the student to assume the highest level of knowledge. Thus, a student is expected to develop the ability to acknowledge contradictions, recreate inaccurate or unfinished ideas, and critically determine necessary thought.
Socratic questioning is a form of disciplined questioning that can be used to pursue thought in many directions and for many purposes, including: to explore complex ideas, to get to the truth of things, to open up issues and problems, to uncover assumptions, to analyze concepts, to distinguish what we know from what we do not know, to follow out logical consequences of thought or to control discussions. Socratic questioning is based on the foundation that thinking has structured logic, and allows underlying thoughts to be questioned. The key to distinguishing Socratic questioning from questioning per se is that the former is systematic, disciplined, deep and usually focuses on fundamental concepts, principles, theories, issues or problems.
Pedagogy
When teachers use Socratic questioning in teaching, their purpose may be to probe student thinking, to determine the extent of student knowledge on a given topic, issue or subject, to model Socratic questioning for students or to help students analyze a concept or line of reasoning. It is suggested that students should learn the discipline of Socratic questioning so that they begin to use it in reasoning through complex issues, in understanding and assessing the thinking of others and in following-out the implications of what they and others think. In fact, Socrates himself thought that questioning was the only defensible form of teaching.
In teaching, teachers can use Socratic questioning for at least two purposes:
To deeply probe student thinking, to help students begin to distinguish what they know or understand from what they do not know or understand (and to help them develop intellectual humility in the process).
To foster students' abilities to ask Socratic questions, to help students acquire the powerful tools of Socratic dialogue, so that they can use these tools in everyday life (in questioning themselves and others). To this end, teachers can model the questioning strategies they want students to emulate and employ. Moreover, teachers need to directly teach students how to construct and ask deep questions. Beyond that, students need practice to improve their questioning abilities.
Socratic questioning illuminates the importance of questioning in learning. This includes differentiating between systematic and fragmented thinking, while forcing individuals to understand the root of their knowledge and ideas. Educators who support the use of Socratic questioning in educational settings argue that it helps students become active and independent learners. Examples of Socratic questions that are used for students in educational settings:
Getting students to clarify their thinking and explore the origin of their thinking
e.g., 'Why do you say that?', 'Could you explain further?'
Challenging students about assumptions
e.g., 'Is this always the case?', 'Why do you think that this assumption holds here?'
Providing evidence as a basis for arguments
e.g., 'Why do you say that?', 'Is there reason to doubt this evidence?'
Discovering alternative viewpoints and perspectives and conflicts between contentions
e.g., 'What is the counter-argument?', 'Can/did anyone see this another way?'
Exploring implications and consequences
e.g., 'But if...happened, what else would result?', 'How does...affect...?'
Questioning the question
e.g., 'Why do you think that I asked that question?', 'Why was that question important?', 'Which of your questions turned out to be the most useful?'
Socratic questioning and critical thinking
The art of Socratic questioning is intimately connected with critical thinking because the art of questioning is important to excellence of thought. Socrates argued for the necessity of probing individual knowledge, and acknowledging what one may not know or understand. Critical thinking has the goal of reflective thinking that focuses on what should be believed or done about a topic. Socratic questioning adds another level of thought to critical thinking, by focusing on extracting depth, interest and assessing the truth or plausibility of thought. Socrates argued that a lack of knowledge is not bad, but that students must strive to make known what they do not know through a disciplined form of critical thinking.
Critical thinking and Socratic questioning both seek meaning and truth. Critical thinking provides the rational tools to monitor, assess, and perhaps reconstitute or re-direct our thinking and action. This is what educational reformer John Dewey described as reflective inquiry: "in which the thinker turns a subject over in the mind, giving it serious and consecutive consideration." Socratic questioning is an explicit focus on framing self-directed, disciplined questions to achieve that goal.
The technique of questioning or leading discussion is spontaneous, exploratory, and issue-specific. The Socratic educator listens to the viewpoints of the student and considers the alternative points of view. It is necessary to teach students to sift through all the information, form a connection to prior knowledge, and transform the data to new knowledge in a thoughtful way. Some qualitative research shows that the use of the Socratic questioning within a traditional Yeshiva education setting helps students succeed in law school, although it remains an open question as to whether that relationship is causal or merely correlative.
It has been proposed in different studies that the "level of thinking that occurs is influenced by the level of questions asked". Thus, utilizing the knowledge that students don't know stimulates their ability to ask more complex questions. This requires educators to create conducive learning environments that promote and value the role of critical thinking, mobilising their ability to form complex thoughts and questions.
Psychology
Socratic questioning has also been used in psychotherapy, most notably as a cognitive restructuring technique in classical Adlerian psychotherapy, logotherapy, rational emotive behavior therapy, cognitive therapy, and logic-based therapy. The purpose is to help uncover the assumptions and evidence that underpin people's thoughts in respect of problems. A set of Socratic questions in cognitive therapy aim to deal with automatic thoughts that distress the patient:
Revealing the issue: 'What evidence supports this idea? And what evidence is against its being true?'
Conceiving reasonable alternatives: 'What might be another explanation or viewpoint of the situation? Why else did it happen?'
Examining various potential consequences: 'What are worst, best, bearable and most realistic outcomes?'
Evaluate those consequences: 'What's the effect of thinking or believing this? What could be the effect of thinking differently and no longer holding onto this belief?'
Distancing: 'Imagine a specific friend/family member in the same situation or if they viewed the situation this way, what would I tell them?'
Careful use of Socratic questioning enables a therapist to challenge recurring or isolated instances of a person's illogical thinking while maintaining an open position that respects the internal logic to even the most seemingly illogical thoughts.
See also
Argument map
Argumentation theory
Cross-examination
Inquiry
Intellectual virtue
Interrogation
Issue map
Socratic method
References
Questioning
Learning
Problem solving methods
Educational psychology
School qualifications
Education reform
Critical thinking skills
Philosophical methodology
Legal reasoning
Precautionary principle
The precautionary principle (or precautionary approach) is a broad epistemological, philosophical and legal approach to innovations with potential for causing harm when extensive scientific knowledge on the matter is lacking. It emphasizes caution, pausing and review before leaping into new innovations that may prove disastrous. Critics argue that it is vague, self-cancelling, unscientific and an obstacle to progress.
In an engineering context, the precautionary principle manifests itself as the factor of safety, discussed in detail in the monograph of Elishakoff. It was apparently suggested, in civil engineering, by Belidor in 1729. Interrelation between safety factor and reliability is extensively studied by engineers and philosophers.
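As a rough illustration of that engineering usage (standard practice rather than a definition taken from the source), the factor of safety compares what a design can withstand with what it is expected to face:

\mathrm{FS} = \frac{\text{capacity (e.g. failure load)}}{\text{demand (e.g. expected service load)}}

For example, a beam that fails at 60 kN but is expected to carry at most 20 kN has FS = 60/20 = 3; the extra margin is precaution expressed numerically, covering loads, defects and uncertainties that were not foreseen at design time.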
The principle is often used by policy makers in situations where there is the possibility of harm from making a certain decision (e.g. taking a particular course of action) and conclusive evidence is not yet available. For example, a government may decide to limit or restrict the widespread release of a medicine or new technology until it has been thoroughly tested. The principle acknowledges that while the progress of science and technology has often brought great benefit to humanity, it has also contributed to the creation of new threats and risks. It implies that there is a social responsibility to protect the public from exposure to such harm, when scientific investigation has found a plausible risk. These protections should be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.
The principle has become an underlying rationale for a large and increasing number of international treaties and declarations in the fields of sustainable development, environmental protection, health, trade, and food safety, although at times it has attracted debate over how to accurately define it and apply it to complex scenarios with multiple risks. In some legal systems, as in law of the European Union, the application of the precautionary principle has been made a statutory requirement in some areas of law.
Origins and theory
The concept "precautionary principle" is generally considered to have arisen in English from a translation of the German term Vorsorgeprinzip in the 1970s in response to forest degradation and sea pollution, where German lawmakers adopted clean air act banning use of certain substances suspected in causing the environmental damage even though evidence of their impact was inconclusive at that time. The concept was introduced into environmental legislation along with other innovative (at that time) mechanisms such as "polluter pays", principle of pollution prevention and responsibility for survival of future ecosystems.
The precautionary principle was promulgated in philosophy by Hans Jonas in his 1979 text, The Imperative of Responsibility, wherein Jonas argued that technology had altered the range of the impact of human action and, as such, ethics must be modified so that the far distant effects of one's actions should now be considered. His maxim is designed to embody the precautionary principle in its prescription that one should "Act so that the effects of your action are compatible with the permanence of genuine human life" or, stated conversely, "Do not compromise the conditions for an indefinite continuation of humanity on earth." To achieve this Jonas argued for the cultivation of a cautious attitude toward actions that may endanger the future of humanity or the biosphere that supported it.
In 1988, Konrad von Moltke described the German concept for a British audience, which he translated into English as the precautionary principle.
In economics, the Precautionary Principle has been analyzed in terms of "the effect on rational decision-making", of "the interaction of irreversibility" and "uncertainty". Authors such as Epstein (1980) and Arrow and Fischer (1974) show that "irreversibility of possible future consequences" creates a "quasi-option effect" which should induce a "risk-neutral" society to favour current decisions that allow for more flexibility in the future. Gollier et al. conclude that "more scientific uncertainty as to the distribution of a future risk – that is, a larger variability of beliefs – should induce society to take stronger prevention measures today."
The principle was also derived from religious beliefs that particular areas of science and technology should be restricted as they "belong to the realm of God", as postulated by Prince Charles and Pope Benedict XVI.
Formulations
Many definitions of the precautionary principle exist: "precaution" may be defined as "caution in advance", "caution practiced in the context of uncertainty", or informed prudence. Two ideas lie at the core of the principle:
An expression of a need by decision-makers to anticipate harm before it occurs. Within this element lies an implicit reversal of the onus of proof: under the precautionary principle it is the responsibility of an activity-proponent to establish that the proposed activity will not (or is very unlikely to) result in significant harm.
The concept of proportionality of the risk and the cost and feasibility of a proposed action.
One of the primary foundations of the precautionary principle, and globally accepted definitions, results from the work of the Rio Conference, or "Earth Summit", in 1992. Principle 15 of the Rio Declaration notes: "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."
In 1998, the Wingspread Conference on the Precautionary Principle was convened by the Science and Environmental Health Network and concluded with the following formulation, described by Stewart Brand as "the clearest and most frequently cited":
In February 2000, the Commission of the European Communities noted in a Communication from the Commission on the Precautionary Principle that "The precautionary principle is not defined in the Treaties of the European Union, which prescribes it [the precautionary principle] only once – to protect the environment. But in practice, its scope is much wider, and specifically where preliminary objective scientific evaluation indicates that there are reasonable grounds for concern that potentially dangerous effects on the environment, human, animal or plant health may be inconsistent with the high level of protection chosen for the Community."
The January 2000 Cartagena Protocol on Biosafety says, in regard to controversies over GMOs: "Lack of scientific certainty due to insufficient relevant scientific information ... shall not prevent the Party of [I]mport, in order to avoid or minimize such potential adverse effects, from taking a decision, as appropriate, with regard to the import of the living modified organism in question."
Pope Francis makes reference to the principle and the Rio Declaration in his 2015 encyclical letter, Laudato si', noting that alongside its environmental significance, the precautionary principle "makes it possible to protect those who are most vulnerable and whose ability to defend their interests and to assemble incontrovertible evidence is limited".
Application
Because the principle has been proposed by various groups representing various interests, its formulation varies greatly: one study identified 14 different formulations of the principle in treaties and non-treaty declarations. R.B. Stewart (2002) reduced the precautionary principle to four basic versions:
Scientific uncertainty should not automatically preclude regulation of activities that pose a potential risk of significant harm (non-preclusion).
Regulatory controls should incorporate a margin of safety; activities should be limited below the level at which no adverse effect has been observed or predicted (margin of safety).
Activities that present an uncertain potential for significant harm should be subject to best technology available requirements to minimize the risk of harm unless the proponent of the activity shows that they present no appreciable risk of harm (BAT).
Activities that present an uncertain potential for significant harm should be prohibited unless the proponent of the activity shows that it presents no appreciable risk of harm (prohibitory).
Carolyn Raffensperger of the Wingspread convention placed the principle in opposition to approaches based on risk management and cost-benefit analysis. Dave Brower (Friends of the Earth) concluded that "all technology should be assumed guilty until proven innocent". Freeman Dyson described the application of the precautionary principle as "deliberately one-sided", for example when used as justification to destroy genetic engineering research plantations and threaten researchers in spite of scientific evidence demonstrating lack of harm:
As noted by Rupert and O'Riordan, the challenge in applying the principle is "in making it clear that absence of certainty, or there being insufficient evidence-based analysis, were not impediments to innovation, so long as there was no reasonable likelihood of serious harm". Without this nuance the principle becomes "self-cancelling", according to Stewart Brand, because "nothing is fully established" in science, starting from the precautionary principle itself and including "gravity or Darwinian evolution". A balanced application should ensure that "precautionary measures should be" taken only "during early stages" and that, as "relevant scientific evidence becomes established", regulatory measures respond only to that evidence.
Strong vs. weak
Strong precaution holds that regulation is required whenever there is a possible risk to health, safety, or the environment, even if the supporting evidence is speculative and even if the economic costs of regulation are high. In 1982, the United Nations World Charter for Nature gave the first international recognition to the strong version of the principle, suggesting that when "potential adverse effects are not fully understood, the activities should not proceed". The widely publicised Wingspread Declaration, from a meeting of environmentalists in 1998, is another example of the strong version. Strong precaution can also be termed a "no-regrets" principle, where costs are not considered in preventative action.
Weak precaution holds that lack of scientific evidence does not preclude action if damage would otherwise be serious and irreversible. Humans practice weak precaution every day, and often incur costs, to avoid hazards that are far from certain: we do not walk in moderately dangerous areas at night, we exercise, we buy smoke detectors, we buckle our seatbelts.
According to a publication by the New Zealand Treasury Department:
International agreements and declarations
"Principle" vs. "approach"
No introduction to the precautionary principle would be complete without brief reference to the difference between the precautionary principle and the precautionary approach. Principle 15 of the Rio Declaration 1992 states that: "in order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." As Garcia (1995) pointed out, "the wording, largely similar to that of the principle, is subtly different in that: it recognizes that there may be differences in local capabilities to apply the approach, and it calls for cost-effectiveness in applying the approach, e.g., taking economic and social costs into account." The "approach" is generally considered a softening of the "principle":
European Union
On 2 February 2000, the European Commission issued a Communication on the precautionary principle, in which it adopted a procedure for the application of this concept, but without giving a detailed definition of it. Paragraph 2 of article 191 of the Lisbon Treaty states that:
After the adoption of the European Commission's communication on the precautionary principle, the principle has come to inform much EU policy, including areas beyond environmental policy. As of 2006 it had been integrated into EU laws "in matters such as general product safety, the use of additives for use in animal nutrition, the incineration of waste, and the regulation of genetically modified organisms". Through its application in case law, it has become a "general principle of EU law".
In Case T-74/00 Artegodan, the General Court (then Court of First Instance) appeared willing to extrapolate from the limited provision for the precautionary principle in environmental policy in article 191(2) TFEU to a general principle of EU law.
France
In France, the Charter for the Environment contains a formulation of the precautionary principle (article 5):
United States
On 18 July 2005, the City of San Francisco passed a precautionary principle purchasing ordinance, which requires the city to weigh the environmental and health costs of its $600 million in annual purchases – for everything from cleaning supplies to computers. Members of the Bay Area Working Group on the Precautionary Principle contributed to drafting the Ordinance.
Australia
The most important Australian court case so far, due to its exceptionally detailed consideration of the precautionary principle, is Telstra Corporation Limited v Hornsby Shire Council.
The principle was summarised by reference to the NSW Protection of the Environment Administration Act 1991, which itself provides a good definition of the principle:
"If there are threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reasoning for postponing measures to prevent environmental degradation. In the application of the principle... decisions should be guided by:
(i) careful evaluation to avoid, wherever practicable, serious or irreversible damage to the environment; and
(ii) an assessment of risk-weighted consequence of various options".
The most significant points of Justice Preston's decision are the following findings:
The principle and accompanying need to take precautionary measures is "triggered" when two prior conditions exist: a threat of serious or irreversible damage, and scientific uncertainty as to the extent of possible damage.
Once both are satisfied, "a precautionary measure may be taken to avert the anticipated threat of environmental damage, but it should be proportionate."
The threat of serious or irreversible damage should invoke consideration of five factors: the scale of threat (local, regional etc.); the perceived value of the threatened environment; whether the possible impacts are manageable; the level of public concern, and whether there is a rational or scientific basis for the concern.
The consideration of the level of scientific uncertainty should involve factors which may include: what would constitute sufficient evidence; the level and kind of uncertainty; and the potential to reduce uncertainty.
The principle shifts the burden of proof. If the principle applies, the burden shifts: "a decision maker must assume the threat of serious or irreversible environmental damage is... a reality [and] the burden of showing this threat... is negligible reverts to the proponent..."
The precautionary principle invokes preventative action: "the principle permits the taking of preventative measures without having to wait until the reality and seriousness of the threat become fully known".
"The precautionary principle should not be used to try to avoid all risks."
The precautionary measures appropriate will depend on the combined effect of "the degree of seriousness and irreversibility of the threat and the degree of uncertainty... the more significant and uncertain the threat, the greater...the precaution required". "...measures should be adopted... proportionate to the potential threats".
Philippines
A petition filed 17 May 2013 by environmental group Greenpeace Southeast Asia and farmer-scientist coalition Masipag (Magsasaka at Siyentipiko sa Pagpapaunlad ng Agrikultura) asked the appellate court to stop the planting of Bt eggplant in test fields, saying the impacts of such an undertaking on the environment, native crops and human health are still unknown. The Court of Appeals granted the petition, citing the precautionary principle and stating that "when human activities may lead to threats of serious and irreversible damage to the environment that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish the threat."
Respondents filed a motion for reconsideration in June 2013, and on 20 September 2013 the Court of Appeals upheld its May decision, saying the Bt talong field trials violated the people's constitutional right to a "balanced and healthful ecology." On 8 December 2015 the Supreme Court permanently stopped the field testing of Bt (Bacillus thuringiensis) talong (eggplant), upholding the decision of the Court of Appeals. The court was the first in the world to adopt the precautionary principle regarding GMO products in its decision. The Supreme Court decision was later reversed following an appeal by researchers at the University of the Philippines Los Baños.
Corporate
Body Shop International, a UK-based cosmetics company, included the precautionary principle in their 2006 chemicals strategy.
Environment and health
Fields typically concerned by the precautionary principle are the possibility of:
Global warming or abrupt climate change in general
Extinction of species
Introduction of new products into the environment, with potential impact on biodiversity (e.g., genetically modified organisms)
Threats to public health, due to new diseases and techniques (e.g., HIV transmitted through blood transfusion)
Long-term effects of new technologies (e.g. health concerns regarding radiation from cell phones and other electronic communications devices)
Persistent or acute pollution (e.g., asbestos, endocrine disruptors)
Food safety (e.g., Creutzfeldt–Jakob disease)
Other new biosafety issues (e.g., artificial life, new molecules)
The precautionary principle is often applied to biological fields because changes cannot be easily contained and have the potential of being global. The principle has less relevance to contained fields such as aeronautics, where the few people undergoing risk have given informed consent (e.g., a test pilot). In the case of technological innovation, containment of impact tends to be more difficult if that technology can self-replicate. Bill Joy emphasised the dangers of replicating genetic technology, nanotechnology, and robotic technology in his article in Wired, "Why the future doesn't need us", though he does not specifically cite the precautionary principle. The application of the principle can be seen in the public policy of requiring pharmaceutical companies to carry out clinical trials to show that new medications are safe.
Oxford-based philosopher Nick Bostrom discusses the idea of a future powerful superintelligence, and the risks should it attempt to gain atomic-level control of matter.
Application of the principle modifies the status of innovation and risk assessment: it is not the risk that must be avoided or amended, but a potential risk that must be prevented. Thus, in the case of regulation of scientific research, there is a third party beyond the scientist and the regulator: the consumer.
In an analysis concerning application of the precautionary principle to nanotechnology, Chris Phoenix and Mike Treder posit that there are two forms of the principle, which they call the "strict form" and the "active form". The former "requires inaction when action might pose a risk", while the latter means "choosing less risky alternatives when they are available, and [...] taking responsibility for potential risks." Thomas Alured Faunce has argued for stronger application of the precautionary principle by chemical and health technology regulators, particularly in relation to TiO2 and ZnO nanoparticles in sunscreens, biocidal nanosilver in waterways, and products whose manufacture, handling or recycling exposes humans to the risk of inhaling multi-walled carbon nanotubes.
Animal sentience precautionary principle
Appeals to the precautionary principle have often characterized the debates concerning animal sentience – that is, the question of whether animals are able to feel "subjective experiences with an attractive or aversive quality", such as pain, pleasure, happiness, or joy – in relation to the question of whether we should legally protect sentient animals. A version of the precautionary principle suitable for the problem of animal sentience has been proposed by LSE philosopher Jonathan Birch: "The idea is that when the evidence of sentience is inconclusive, we should 'give the animal the benefit of doubt' or 'err on the side of caution' in formulating animal protection legislation." Since we cannot reach absolute certainty as to whether particular animals are sentient, the precautionary principle has been invoked in order to grant potentially sentient animals "basic legal protections". Birch's formulation of the animal sentience precautionary principle runs as follows:

This version of the precautionary principle consists of an epistemic rule and a decision rule. The former concerns the "evidential bar" that should be required for animal sentience: how much evidence of sentience is necessary before one decides to apply precautionary measures? According to Birch, only some evidence is required, which means that the evidential bar should be set at a low level. Birch proposes to consider the evidence that certain animals are sentient sufficient whenever "statistically significant evidence ... of the presence of at least one credible indicator of sentience in at least one species of that order" has been obtained. For practical reasons, Birch says, the evidence of sentience should concern the order, so that if one species meets the conditions of sentience, then all the species of the same order should be considered sentient and should thus be legally protected. This is because, on the one hand, it is feasible "to investigate sentience separately in different orders", whereas, on the other hand, since some orders include thousands of species, it would be unfeasible to study their sentience species by species.
What is more, the evidential bar should be so low that just one indicator of sentience in a species of a specific order is sufficient for the precautionary principle to be applied. Such an indicator should be "an observable phenomenon that experiments can be designed to detect, and it must be credible that the presence of this indicator is explained by sentience". Lists of such criteria already exist for detecting animal pain. The aim is to create analogous lists for other criteria of sentience, such as happiness, fear, or joy. The presence of one of these criteria should be demonstrated by means of experiments which must meet "the normal scientific standards".
Regarding the second part of the animal sentience precautionary principle, the decision rule concerns the requirement that we have to act once there is sufficient evidence of a seriously bad outcome. According to Birch, "we should aim to include within the scope of animal protection legislation all animals for which the evidence of sentience is sufficient, according to the standard of sufficiency outlined [above]". In other words, the decision rule states that once the aforementioned low evidential bar is met, then we should act in a precautionary way. Birch's proposal also "deliberately leaves open the question of how, and to what extent, the treatment of these animals should be regulated", thus also leaving open the content of the regulations, as this will largely depend on the animal in question.
Criticisms
Critics of the principle use arguments similar to those against other formulations of technological conservatism.
Internal inconsistency: applying strong PP risks causing harm
Applying a strong formulation of the precautionary principle to the principle itself as a policy decision, without regard to its most basic provisions (i.e., that it is to be applied only where risks are potentially catastrophic and not easily calculable), defeats its own purpose of reducing risk. The reason suggested is that preventing innovation from coming to market means that only current technology may be used, and current technology itself may cause harm or leave needs unmet; there is a risk of causing harm by blocking innovation. As Michael Crichton wrote in his novel State of Fear: "The 'precautionary principle', properly applied, forbids the precautionary principle."
For example, forbidding nuclear power plants based on concerns about low-probability high-impact risks means continuing to rely on power plants that burn fossil fuels, which continue to release greenhouse gases and cause thousands of certain deaths from air pollution.
In 2021, in response to early reports of rare blood clots seen in 25 patients out of 20 million people vaccinated with the AstraZeneca COVID-19 vaccine, a number of European Union member states suspended use of the vaccine, citing the "precautionary principle". This was criticized by other EU states that refused to suspend their vaccination programmes, arguing that such "precautionary" decisions focus on the wrong risk, as delaying a vaccination programme results in a larger number of certain deaths than any as-yet-unconfirmed complications.
The Hazardous Air Pollutant provisions in the 1990 amendments to the US Clean Air Act are another example of the precautionary principle, in that the onus is now on showing that a listed compound is harmless. Under this rule no distinction is made between air pollutants that pose a higher or lower risk, so operators tend to choose less-examined agents that are not on the existing list.
Blocking innovation and progress generally
Because strong formulations of the precautionary principle can be used to block innovation, a technology that brings advantages may be banned because of its potential for negative impacts, leaving its positive benefits unrealised.
The precautionary principle has been ethically questioned on the basis that its application could block progress in developing countries.
Vagueness and plausibility
The precautionary principle calls for action in the face of scientific uncertainty, but some formulations do not specify the minimal threshold of plausibility of risk that acts as a "triggering" condition, so that any indication that a proposed product or activity might harm health or the environment is sufficient to invoke the principle. In Sancho vs. DOE, Helen Gillmor, Senior District Judge, wrote in a dismissal of Wagner's lawsuit which included a popular worry that the LHC could cause "destruction of the earth" by a black hole:
The precautionary dilemma
The most commonly pressed objection to the precautionary principle ties together two of the above objections into the form of a dilemma. This maintains that, of the two available interpretations of the principle, neither is plausible: weak formulations (which hold that precaution in the face of uncertain harms is permissible) are trivial, while strong formulations (which hold that precaution in the face of uncertain harms is required) are incoherent. On the first horn of the dilemma, Cass Sunstein states: if all that the (weak) principle states is that it is permissible to act in a precautionary manner where there is a possible risk of harm, then it constitutes a trivial truism and thus fails to be useful.
If we formulate the principle in the stronger sense however, it looks like it rules out all courses of action, including the precautionary measures it is intended to advocate. This is because, if we stipulate that precaution is required in the face of uncertain harms, and precautionary measures also carry a risk of harm, the precautionary principle can both demand and prohibit action at the same time. The risk of a policy resulting in catastrophic harm is always possible. For example: prohibiting genetically modified crops risks significantly reduced food production; placing a moratorium on nuclear power risks an over-reliance on coal that could lead to more air pollution; implementing extreme measures to slow global warming risks impoverishment and bad health outcomes for some people. The strong version of the precautionary principle, in that "[i]t bans the very steps that it requires", thus fails to be coherent. As Sunstein states, it is not protective, it is "paralyzing".
See also
Argument from ignorance
Benefit of the doubt (similar concept)
Best available technology
Biosecurity
Centre for the Study of Existential Risk
Chesterton's fence
Clinical equipoise
Complex systems
Diffusion of innovations
Ecologically sustainable development
Environmental law
Environmental Principles and Policies
Health impact assessment
Maximin principle
Micromort
Possible carcinogen
Postcautionary principle
Prevention of disasters principle
Proactionary principle
Risk aversion
Scientific skepticism
Substitution principle (sustainability)
Superconducting Super Collider
Sustainability
Tombstone mentality
Vaccine controversies
References
Further reading
Kai Purnhagen, "The Behavioural Law and Economics of the Precautionary Principle in the EU and its Impact on Internal Market Regulation", Wageningen Working Papers in Law and Governance 2013–04,
Communication from the European Commission on the precautionary principle, Brussels (2000)
European Union (2002), European Union consolidated versions of the treaty on European Union and of the treaty establishing the European community, Official Journal of the European Union, C325, 24 December 2002, Title XIX, article 174, paragraph 2 and 3.
Greenpeace, "Safe trade in the 21st Century, Greenpeace comprehensive proposals and recommendations for the 4th Ministerial Conference of the World Trade Organisation" pp. 8–9
O'Riordan, T. and Cameron, J. (1995), Interpreting the Precautionary Principle, London: Earthscan Publications
Raffensperger, C., and Tickner, J. (eds.) (1999) Protecting Public Health and the Environment: Implementing the Precautionary Principle. Island Press, Washington, DC.
Rees, Martin. Our Final Hour (2003).
Recuerda Girela, M.A., (2006), Seguridad Alimentaria y Nuevos Alimentos, Régimen jurídico-administrativo. Thomson-Aranzadi, Cizur Menor.
Recuerda Girela, M.A., (2006), "Risk and Reason in the European Union Law", European Food and Feed Law Review, 5.
Sandin, P. "Better Safe than Sorry: Applying Philosophical Methods to the Debate on Risk and the Precautionary Principle," (2004).
Stewart, R.B. "Environmental Regulatory Decision making under Uncertainty". In An Introduction to the Law and Economics of Environmental Policy: Issues in Institutional Design, Volume 20: 71–126 (2002).
Sunstein, Cass R. (2005), Laws of Fear: Beyond the Precautionary Principle. New York: Cambridge University Press
External links
Report by the UK Interdepartmental Liaison Group on Risk Assessment, 2002. "The Precautionary Principle: Policy and Application"
David Appell, Scientific American, January 2001: "The New Uncertainty Principle"
The Times, 27 July 2007, Only a reckless mind could believe in safety first
The Times, 15 January 2005, "What is . . . the Precautionary Principle?"
Bill Durodié, Spiked, 16 March 2004: The precautionary principle assumes that prevention is better than cure
European Environment Agency (2001), Late lessons from early warnings: the precautionary principle 1896–2000
Applying the Precautionary Principle to Nanotechnology, Center for Responsible Nanotechnology 2004
1998 Wingspread Statement on the Precautionary Principle
Science and Environmental Health Network, The Precautionary Principle in Action – a Handbook
Gary E. Marchant, Kenneth L. Mossman: Arbitrary and Capricious: The Precautionary Principle in the European Union Courts. American Enterprise Institute Press 2004; free online PDF
Umberto Izzo, La precauzione nella responsabilità civile. Analisi di un concetto sul tema del danno da contagio per via trasfusionale (e-book reprint) [The Idea of Precaution in Tort Law. Analysis of a Concept against the Backdrop of the Tainted-Blood Litigation], UNITN e-prints, 2007, first edition Padua, Cedam 2004. Free online PDF
Better Safe than Sorry: Applying Philosophical Methods to the Debate on Risk and the Precautionary Principle
Communication from the European Commission on the precautionary principle
UK Interdepartmental Liaison Group on Risk Assessment (ILGRA): The Precautionary Principle: Policy and Application
Report of UNESCO's group of experts on the Precautionary Principle (2005)
Max More (2010), The Perils Of Precaution
Doubt
European Union law
Legal doctrines and principles
Public health
Risk management
Safety
Environmental policy
United Nations Framework Convention on Climate Change | 0.777603 | 0.992832 | 0.772029 |
Effective accelerationism | Effective accelerationism, often abbreviated as "e/acc", is a 21st-century philosophical movement that advocates for an explicitly pro-technology stance. Its proponents believe that unrestricted technological progress (especially driven by artificial intelligence) is a solution to universal human problems like poverty, war and climate change. They see themselves as a counterweight to more cautious views on technological innovation, often giving their opponents the derogatory labels of "doomers" or "decels" (short for deceleration).
The movement carries utopian undertones and argues that humans need to develop and build faster to ensure their survival and propagate consciousness throughout the universe. Its founders Guillaume Verdon and the pseudonymous Bayeslord see it as a way to "usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms."
Although effective accelerationism has been described as a fringe movement and as cult-like, it gained mainstream visibility in 2023. A number of high-profile Silicon Valley figures, including investors Marc Andreessen and Garry Tan, explicitly endorsed it by adding "e/acc" to their public social media profiles.
Etymology and central beliefs
Effective accelerationism, a portmanteau of "effective altruism" and "accelerationism", is a fundamentally techno-optimist movement. According to Guillaume Verdon, one of the movement's founders, its aim is for human civilization to "clim[b] the Kardashev gradient", that is, to rise to higher levels on the Kardashev scale by maximizing energy usage.
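For readers unfamiliar with the scale, "climbing the Kardashev gradient" can be made concrete with Carl Sagan's interpolation formula, which assigns a continuous rating from a civilization's total energy use. The sketch below is illustrative only; the power figures are rough, commonly cited order-of-magnitude values, not claims made by the movement itself.

```python
from math import log10

# Carl Sagan's interpolation formula for the Kardashev scale:
# K = (log10(P) - 6) / 10, where P is the civilization's power use in watts.
def kardashev(power_watts: float) -> float:
    return (log10(power_watts) - 6) / 10

# Rough, commonly cited order-of-magnitude power levels (assumed for illustration).
print(f"humanity today (~2e13 W)      : K ~ {kardashev(2e13):.2f}")
print(f"Type I  (planetary, ~1e16 W)  : K = {kardashev(1e16):.1f}")
print(f"Type II (stellar,   ~1e26 W)  : K = {kardashev(1e26):.1f}")
print(f"Type III (galactic, ~1e36 W)  : K = {kardashev(1e36):.1f}")
```

On this reckoning humanity currently sits at roughly K = 0.73, which is why proponents frame "climbing the gradient" as a matter of maximizing energy capture and usage.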
To achieve this goal, effective accelerationism wants to accelerate technological progress. It is strongly focused on artificial general intelligence (AGI), because it sees AGI as fundamental for climbing the Kardashev scale. The movement therefore advocates for unrestricted development and deployment of artificial intelligence, and it opposes regulation of artificial intelligence and, more generally, government intervention in markets. Many of its proponents have libertarian views and think that AGI will be most aligned if many AGIs compete against each other in the marketplace.
The founders of the movement see it as rooted in Jeremy England's theory on the origin of life, which is focused on entropy and thermodynamics. According to them, the universe aims to increase entropy, and life is a way of increasing it. By spreading life throughout the universe and making life use up ever increasing amounts of energy, the universe's purpose would thus be fulfilled.
History
Intellectual origins
While Nick Land is seen as the intellectual originator of contemporary accelerationism in general, the precise origins of effective accelerationism remain unclear. The earliest known reference to the movement can be traced back to a May 2022 newsletter published by four pseudonymous authors known by their X (formerly Twitter) usernames @BasedBeffJezos, @bayeslord, @zestular and @creatine_cycle.
Effective accelerationism incorporates elements of older Silicon Valley subcultures such as transhumanism and extropianism, which similarly emphasized the value of progress and resisted efforts to restrain the development of technology, as well as the work of the Cybernetic Culture Research Unit.
Disclosure of the identity of BasedBeffJezos
Forbes disclosed in December 2023 that the @BasedBeffJezos persona is maintained by Guillaume Verdon, a Canadian former Google quantum computing engineer and theoretical physicist. The revelation was supported by a voice analysis conducted by the National Center for Media Forensics of the University of Colorado Denver, which further confirmed the match between Jezos and Verdon. The magazine justified its decision to disclose Verdon's identity on the grounds of it being "in the public interest".
On 29 December 2023 Guillaume Verdon was interviewed by Lex Fridman on the Lex Fridman Podcast and introduced as the "founder of [the] e/acc (effective accelerationism) movement".
Relation to other movements
Traditional accelerationism
Traditional accelerationism, as developed by the British philosopher Nick Land, sees the acceleration of technological change as a way to bring about a fundamental transformation of current culture, society, and the political economy. In his earlier writings he saw the acceleration of capitalism as a way to overcome this economic system itself. In contrast, effective accelerationism does not seek to overcome capitalism or to introduce radical societal change but tries to maximize the probability of a technocapital singularity, triggering an intelligence explosion throughout the universe and maximizing energy usage.
Effective altruism
Effective accelerationism also diverges from the principles of effective altruism, which prioritizes using evidence and reasoning to identify the most effective ways to altruistically improve the world. This divergence comes primarily from one of the causes effective altruists focus on – AI existential risk. Effective altruists argue that AI companies should be cautious and strive to develop safe AI systems, as they fear that any misaligned AGI could eventually lead to human extinction. Proponents of effective accelerationism generally consider that existential risks from AGI are negligible, and that even if they were not, decentralized free markets would mitigate the risk much better than centralized governmental regulation.
d/acc
Introduced by Vitalik Buterin in November 2023, d/acc is pro-technology like e/acc. But it assumes that maximizing profit does not automatically lead to the best outcome. The "d" in d/acc primarily means "defensive", but can also refer to "decentralization" or "differential". d/acc acknowledges existential risks and seeks a more targeted approach to technological development than e/acc, intentionally prioritizing technologies that are expected to make the world better or safer.
Degrowth
Effective accelerationism also stands in stark contrast with the degrowth movement, sometimes described by it as "decelerationism" or "decels". The degrowth movement advocates for reducing economic activity and consumption to address ecological and social issues. Effective accelerationism on the contrary embraces technological progress, energy consumption and the dynamics of capitalism, rather than advocating for a reduction in economic activity.
Reception
The "Techno-Optimist Manifesto", a 2023 essay by Marc Andreessen, has been described by the Financial Times and the German Süddeutsche Zeitung as espousing the views of effective accelerationism.
David Swan of The Sydney Morning Herald has criticized effective accelerationism due to its opposition to government and industry self-regulation. He argues that "innovations like AI needs thoughtful regulations and guardrails [...] to avoid the myriad mistakes Silicon Valley has already made". During the 2023 Reagan National Defense Forum, U.S. Secretary of Commerce Gina Raimondo cautioned against embracing the "move fast and break things" mentality associated with "effective acceleration". She emphasized the need to exercise caution in dealing with AI, stating "that's too dangerous. You can't break things when you are talking about AI". In a similar vein, Ellen Huet argued on Bloomberg News that some of the ideas of the movement were "deeply unsettling", focusing especially on Guillaume Verdon's "post-humanism" and the view that "natural selection could lead AI to replace us [humans] as the dominant species."
See also
Technological utopianism
Transhumanism
References
External links
Computational neuroscience
Concepts in ethics
Cybernetics
Doomsday scenarios
Effective altruism
Ethics of science and technology
Existential risk from artificial general intelligence
Future problems
Human extinction
Philosophy of artificial intelligence
Singularitarianism
Technology hazards
Effective accelerationism | 0.774761 | 0.996471 | 0.772027 |
Eugenics | Eugenics ( ; ) is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have altered various human gene frequencies by inhibiting the fertility of people and groups they considered inferior, or promoting that of those considered superior.
The contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g. Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock.
Although it originated as a progressive social movement in the 19th century, in contemporary usage in the 21st century, the term is closely associated with scientific racism.
Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, British-Indian scientist J. B. S. Haldane wrote in 1940 that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Early eugenicists were mostly concerned with factors of measured intelligence that often correlated strongly with social class.
Common distinctions
Eugenic programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction.
In other words, positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the eminently intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit.
As opposed to "euthenics"
Historical eugenics
Ancient and medieval origins
Academic origins
The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, directly drawing on the recent work delineating natural selection by his half-cousin Charles Darwin. He published his observations and conclusions chiefly in his influential book Inquiries into Human Faculty and Its Development. Galton himself defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations". Galton was the first to systematically apply Darwinian theory to human relations, and he believed that various desirable human qualities were also hereditary ones, although Darwin strongly disagreed with this elaboration of his theory. Notably, many of the early geneticists were not themselves Darwinians.
Eugenics became an academic discipline at many colleges and universities and received funding from various sources. Organizations were formed to win public support for and to sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.
Three International Eugenics Conferences presented a global venue for eugenicists, with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies in the United States were first implemented by state-level legislators in the early 1900s. Eugenic policies also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium, Brazil, Canada, Japan and Sweden.
Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed eugenics as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics").
In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races.
Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty.
As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics"; which focuses on individual freedom and allegedly pulls away from racism, sexism or a focus on intelligence.
Early opposition
Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Franz Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement.
Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Even biologists who were themselves eugenicists, such as J. B. S. Haldane and R. A. Fisher, expressed skepticism about the belief that sterilization of "defectives" (i.e. a purely negative eugenics) would lead to the disappearance of undesirable genetic traits.
Among institutions, the Catholic Church was an opponent of state-enforced sterilizations, but accepted isolating people with hereditary diseases so as not to let them reproduce. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason."
In fact, more generally, "[m]uch of the opposition to eugenics during that era, at least in Europe, came from the right." The eugenicists' political successes in Germany and Scandinavia were not at all matched in such countries as Poland and Czechoslovakia, even though measures had been proposed there, largely because of the Catholic church's moderating influence.
Concerns over human devolution
The Lamarckian backdrop
Dysgenics
Compulsory sterilization
Eugenic feminism
North American eugenics
Eugenics in Mexico
Nazism and the decline of eugenics
The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and, once he took power, emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit", and therefore led to segregation, institutionalization, sterilization, and even mass murder. The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust.
By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons".
Modern eugenics
Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a back door to eugenics. This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products".
In a similar spirit, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology. Before any of these technological breakthroughs, however, prenatal screening has long been called by some a contemporary and highly prevalent form of eugenics because it may lead to selective abortions of fetuses with undesirable traits.
In Singapore
Lee Kuan Yew, the founding father of Singapore, actively promoted eugenics as late as 1983. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. For this purpose the government introduced the "Graduate Mother Scheme", which offered incentives for graduate women to marry at the same rate as the rest of the populace. The incentives were extremely unpopular and regarded as eugenic, and were seen as discriminatory towards Singapore's non-Chinese ethnic population. In 1985, the incentives were partly abandoned as ineffective, while the government matchmaking agency, the Social Development Network, remains active.
Contested scientific status
One general concern is that the reduced genetic diversity that some argue would be a likely feature of long-term, species-wide eugenics plans could eventually result in inbreeding depression, increased spread of infectious disease, and decreased resilience to changes in the environment.
Arguments for scientific validity
In his original lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson claimed that everything concerning eugenics fell into the field of medicine. Similarly apologetic, Czech-American Aleš Hrdlička, head of the American Anthropological Association from 1925 to 1926 and "perhaps the leading physical anthropologist in the country at the time" posited that its ultimate aim "is that it may, on the basis of accumulated knowledge and together with other branches of research, show the tendencies of the actual and future evolution of man, and aid in its possible regulation or improvement. The growing science of eugenics will essentially become applied anthropology."
More recently, prominent evolutionary biologist Richard Dawkins stated of the matter:

The spectre of Hitler has led some scientists to stray from "ought" to "is" and deny that breeding for human qualities is even possible. But if you can breed cattle for milk yield, horses for running speed, and dogs for herding skill, why on Earth should it be impossible to breed humans for mathematical, musical or athletic ability? Objections such as "these are not one-dimensional abilities" apply equally to cows, horses and dogs and never stopped anybody in practice. I wonder whether, some 60 years after Hitler's death, we might at least venture to ask what the moral difference is between breeding for musical ability and forcing a child to take music lessons.
Scientifically possible and already well-established, heterozygote carrier testing is used in the prevention of autosomal recessive disorders, allowing couples to determine if they are at risk of passing various hereditary defects onto a future child. There are various examples of eugenic acts that managed to lower the prevalence of recessive diseases, while not negatively affecting the heterozygote carriers of those diseases themselves. The elevated prevalence of various genetically transmitted diseases among Ashkenazi Jewish populations (e.g. Tay–Sachs, cystic fibrosis, Canavan's disease and Gaucher's disease) has been markedly decreased in more recent cohorts by the widespread adoption of genetic screening (cf. also Dor Yeshorim).
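The arithmetic behind such carrier screening follows from the Hardy–Weinberg relation. The following is a minimal sketch; the incidence figure is an illustrative round number of roughly the order reported for pre-screening Tay–Sachs among Ashkenazi Jews, not a clinical value.

```python
from math import sqrt

# Hardy-Weinberg sketch of carrier screening for an autosomal recessive disorder.
# Assumed, illustrative incidence: about 1 affected birth in 3,600.
incidence = 1 / 3600          # frequency of affected (aa) births, i.e. q^2
q = sqrt(incidence)           # frequency of the recessive allele a
p = 1 - q                     # frequency of the common allele A
carrier_freq = 2 * p * q      # frequency of unaffected carriers (Aa)

# Probability that both members of a randomly mating couple are carriers,
# and the resulting chance of an affected child (1/4 under Mendelian inheritance).
both_carriers = carrier_freq ** 2
affected_birth = both_carriers * 0.25

print(f"recessive allele frequency q : ~1 in {1 / q:.0f}")
print(f"carrier frequency 2pq        : ~1 in {1 / carrier_freq:.0f}")
print(f"couple both carriers         : ~1 in {1 / both_carriers:.0f}")
print(f"affected birth (unscreened)  : ~1 in {1 / affected_birth:.0f}")
```

The point of screening is visible in these numbers: although only about 1 birth in 3,600 is affected, roughly 1 person in 30 is a carrier, so identifying the roughly 1-in-900 carrier couples allows prevalence to fall without any effect on the carriers' own health.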
Objections to scientific validity
Amanda Caleb, Professor of Medical Humanities at Geisinger Commonwealth School of Medicine, says "Eugenic laws and policies are now understood as part of a specious devotion to a pseudoscience that actively dehumanizes to support political agendas and not true science or medicine."
The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. With the hatching of a white-eyed fruit fly (Drosophila melanogaster) from a red-eyed line, he demonstrated that major genetic changes (mutations) occur outside of inheritance. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary because these traits were subjective.
Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect. Andrzej Pękalski, from the University of Wroclaw, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together.
While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, at this point there is no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual, so eliminating these genes is undesirable in places where such diseases are common. In such cases, even the considerable suffering or death of the roughly 25 percent of offspring born homozygous under a Mendelian pattern of inheritance, a fraction that natural selection cannot eliminate, may be argued to be offset by the greater good of the so-called heterozygote advantage that their carrier conspecifics enjoy in turn.
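Why such an allele cannot simply be purged can be shown with the standard one-locus selection recursion for heterozygote advantage. The fitness values in the sketch below are illustrative assumptions, not empirical estimates for sickle-cell or any other condition.

```python
# Deterministic one-locus model of heterozygote advantage (overdominance).
# Illustrative genotype fitnesses (assumed):
w_AA = 0.85   # common homozygote, e.g. susceptible to malaria
w_Aa = 1.00   # heterozygote carrier, e.g. protected against malaria
w_aa = 0.20   # recessive homozygote, e.g. suffering the severe disease

q = 0.01      # starting frequency of the recessive allele a
for _ in range(200):
    p = 1 - q
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa   # mean fitness
    q = (p * q * w_Aa + q * q * w_aa) / w_bar                # frequency of a next generation

s, t = 1 - w_AA, 1 - w_aa
print(f"simulated equilibrium frequency of a : {q:.3f}")
print(f"analytic equilibrium s/(s+t)         : {s / (s + t):.3f}")
```

Selection against the recessive homozygote is balanced by the advantage enjoyed by heterozygotes, so the allele settles at an intermediate equilibrium rather than disappearing; a program aiming to "eliminate" such genes would be working against the selective forces that maintain them.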
Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. Indeed, the most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics is often considered to be tainted with scientific racism and pseudoscience.
Regarding the lasting controversy above, himself citing recent scholarship, historian of science Aaron Gillette notes that:

Others take a more nuanced view. They recognize that there was a wide variety of eugenic theories, some of which were much less race- or class-based than others. Eugenicists might also give greater or lesser acknowledgment to the role that environment played in shaping human behavior. In some cases, eugenics was almost imperceptibly intertwined with health care, child care, birth control, and sex education issues. In this sense, eugenics has been called, "a 'modern' way of talking about social problems in biologizing terms".
Indeed, granting that the historical phenomenon of eugenics was that of a pseudoscience, Gillette further notes that this derived chiefly from its being "an epiphenomenon of a number of sciences, which all intersected at the claim that it was possible to consciously guide human evolution."
Contested ethical status
Contemporary ethical opposition
In a book directly addressed to the socialist eugenicist J. B. S. Haldane and his once-influential Daedalus, Bertrand Russell had one serious objection of his own: eugenic policies might simply end up being used to reproduce existing power relations "rather than to make men happy."
Environmental ethicist Bill McKibben argued against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, he argues, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using Ming China, Tokugawa Japan and the contemporary Amish as examples.
The threat of perfection
Contemporary ethical advocacy
Some, for example Nathaniel C. Comfort of Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making process from the state to patients and their families. Comfort suggests that "the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise." Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral.
In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.
In his book A Theory of Justice (1971), American philosopher John Rawls argued that "[o]ver time a society is to take steps to preserve the general level of natural abilities and to prevent the diffusion of serious defects". The original position, a hypothetical situation developed by Rawls, has been used as an argument for negative eugenics. Accordingly, some morally support germline editing precisely because of its capacity to (re)distribute such Rawlsian primary goods.
Status quo bias and the reversal test
The utilitarian perspective of Procreative Beneficence
Transhuman perspectives
Problematizing the therapy-enhancement distinction
In science fiction
Brave New World (1931), by the English author Aldous Huxley, is a dystopian social science fiction novel set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy.
Various works by the author Robert A. Heinlein mention the Howard Foundation, a group which attempts to improve human longevity through selective breeding.
Frank Herbert's Dune series, starting with the eponymous 1965 novel, describes selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach.
The Star Trek franchise features a race of genetically engineered humans known as "Augments", the most notable of whom is Khan Noonien Singh. These "supermen" were the cause of the Eugenics Wars, a dark period in Earth's fictional history, before they were deposed and exiled. They appear in many of the franchise's story arcs, most frequently as villains.
The film Gattaca (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. The title alludes to the letters G, A, T and C, the four nucleobases of DNA, and the film depicts the possible consequences of genetic discrimination in the present societal framework. Relegated to the role of a cleaner owing to his genetically projected death at age 32 from a heart condition (he is told: "The only way you'll see the inside of a spaceship is if you were cleaning it"), the protagonist observes enhanced astronauts demonstrating their superhuman athleticism. Against the reading that mere uniformity is the film's key theme, it may be noted that it also includes a twelve-fingered concert pianist who is nonetheless held in high esteem. Even though it was not a box office success, the film was critically acclaimed and is said to have crystallized the debate over human genetic engineering in the public consciousness. As to its accuracy, its production company, Sony Pictures, consulted a gene therapy researcher and prominent critic of eugenics, W. French Anderson, known to have stated that "[w]e should not step over the line that delineates treatment from enhancement", to ensure that the portrayal of science was realistic. Disputing their success in this mission, Philip Yam of Scientific American called the film "science bashing" and Nature's Kevin Davies called it a "surprisingly pedestrian affair", while molecular biologist Lee Silver described its extreme determinism as "a straw man".
In an even more pointed critique, in his 2018 book Blueprint, the behavioral geneticist Robert Plomin writes that while Gattaca warned of the dangers of genetic information being used by a totalitarian state, genetic testing could also favor meritocracy in democratic societies, which already administer a variety of standardized tests to select people for education and employment. He suggests that polygenic scores might supplement such testing in a manner essentially free of biases. Along similar lines, in the 2004 book Citizen Cyborg, the democratic transhumanist James Hughes had already argued against what he considers to be "professional fearmongers", stating of the movie's premises:
Astronaut training programs are entirely justified in attempting to screen out people with heart problems for safety reasons;
In the United States, people are already being screened by insurance companies on the basis of their propensities to disease, for actuarial purposes;
Rather than banning genetic testing or genetic enhancement, society should simply develop genetic information privacy laws, such as the U.S. Genetic Information Nondiscrimination Act, that allow justified forms of genetic testing and data aggregation, but forbid those that are judged to result in genetic discrimination. Enforcing these would not be very hard once a system for reporting and penalties is in place.
See also
References
Notes
Further reading
Anomaly, Jonathan (2018). "Defending eugenics: From cryptic choice to conscious selection." Monash Bioethics Review 35 (1–4):24-35. doi:10.1007/s40592-018-0081-2
Anomaly, Jonathan (2024). Creating Future People: The Science and Ethics of Genetic Enhancement. Routledge, 2nd edition.
Paul, Diane B.; Spencer, Hamish G. (1998). "Did Eugenics Rest on an Elementary Mistake?" (PDF). In: The Politics of Heredity: Essays on Eugenics, Biomedicine, and the Nature-Nurture Debate, SUNY Press (pp. 102–118)
Gantsho, Luvuyo (2022). "The principle of procreative beneficence and its implications for genetic engineering." Theoretical Medicine and Bioethics 43 (5):307-328. doi:10.1007/s11017-022-09585-0
Harris, John (2009). "Enhancements are a Moral Obligation." In J. Savulescu & N. Bostrom (Eds.), Human Enhancement, Oxford University Press, pp. 131–154
Kamm, Frances (2010). "What Is And Is Not Wrong With Enhancement?" In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press.
Kamm, Frances (2005). "Is There a Problem with Enhancement?", The American Journal of Bioethics, 5(3), 5–14. PMID 16006376 doi:10.1080/15265160590945101
Ranisch, Robert (2022). "Procreative Beneficence and Genome Editing", The American Journal of Bioethics, 22(9), 20–22. doi:10.1080/15265161.2022.2105435
Robertson, John (2021). Children of Choice: Freedom and the New Reproductive Technologies. Princeton University Press, doi:10.2307/j.ctv1h9dhsh.
Saunders, Ben (2015). "Why Procreative Preferences May be Moral – And Why it May not Matter if They Aren't." Bioethics, 29(7), 499–506. doi:10.1111/bioe.12147
Savulescu, Julian (2001). Procreative beneficence: why we should select the best children. Bioethics. 15(5–6): pp. 413–26
Singer, Peter (2010). "Parental Choice and Human Improvement." In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press.
Wikler, Daniel (1999). "Can we learn from eugenics?" (PDF). J Med Ethics. 25(2):183-94. doi: 10.1136/jme.25.2.183. PMID 10226926; PMCID: PMC479205.
External links
Embryo Editing for Intelligence: A cost-benefit analysis of CRISPR-based editing for intelligence with 2015-2016 state-of-the-art
Embryo Selection For Intelligence: A cost-benefit analysis of the marginal cost of IVF-based embryo selection for intelligence and other traits with 2016-2017 state-of-the-art
Eugenics: Its Origin and Development (1883–Present) by the National Human Genome Research Institute (30 November 2021)
Eugenics and Scientific Racism Fact Sheet by the National Human Genome Research Institute (3 November 2021)
Ableism
Applied genetics
Bioethics
Nazism
Pseudo-scholarship
Pseudoscience
Racism
Technological utopianism
White supremacy
Olericulture
Olericulture is the science of vegetable growing, dealing with the culture of non-woody (herbaceous) plants for food.
Olericulture is the production of plants for use of the edible parts. Vegetable crops can be classified into nine major categories:
Potherbs and greens – spinach and collards
Salad crops – lettuce, celery
Cole crops – cabbage and cauliflower
Root crops (tubers) – potatoes, beets, carrots, radishes
Bulb crops – onions, leeks
Legumes – beans, peas
Cucurbits – melons, squash, cucumber
Solanaceous crops – tomatoes, peppers, potatoes
Sweet corn
Olericulture deals with the production, storage, processing and marketing of vegetables. It encompasses crop establishment, including cultivar selection, seedbed preparation and establishment of vegetable crops by seed and transplants.
It also includes maintenance and care of vegetable crops as well commercial and non-traditional vegetable crop production including organic gardening and organic farming; sustainable agriculture and horticulture; hydroponics; and biotechnology.
See also
Agriculture – the cultivation of animals, plants, fungi and other life forms for food, fiber, and other products used to sustain life.
Horticulture – the industry and science of plant cultivation including the process of preparing soil for the planting of seeds, tubers, or cuttings.
Pomology – a branch of botany that studies and cultivates pome fruit, and sometimes applied more broadly, to the cultivation of any type of fruit.
Tropical horticulture – a branch of horticulture that studies and cultivates garden plants in the tropics, i.e., the equatorial regions of the world.
References
Introduction to Olericulture by the Department of Horticulture and Landscape Architecture, Purdue University.
Vegetables
Organic farming
Horticulture
Edible plants
2024 in science
The following scientific events occurred or are scheduled to occur in 2024.
Events
January
– The Japan Meteorological Agency (JMA) publishes its JRA-55 dataset, confirming 2023 as the warmest year on record globally relative to the 1850–1900 baseline, surpassing the previous record set in 2016.
– The first functional semiconductor made from graphene is created.
– A review indicates digital rectal examination (DRE) is an outdated routine medical practice, with a lower cancer detection rate compared to prostate-specific antigen (PSA) testing and no benefit from combining DRE and PSA.
Scientists report that newborn galaxies in the very early universe were "banana"-shaped, much to the surprise of researchers.
An analysis of sugar-sweetened beverage (SSB) taxes concludes scaling them could yield substantial public health benefits.
Scientists report studies that seem to support the hypothesis that life may have begun in a shallow lake, perhaps somewhat like the "warm little pond" originally proposed by Charles Darwin.
A group of scientists from around the globe have charted paradigm-shifting restorative pathways to mitigate the worst effects of climate change and biodiversity loss with a strong emphasis on environmental sustainability, human wellbeing and reducing social and economic inequality.
Researchers have discovered a new phase of matter, named a "light-matter hybrid", which may reshape understanding of how light interacts with matter.
A study of proteins in cerebrospinal fluid indicates there are five subtypes of Alzheimer's disease, suggesting it to be likely that subtype-specific treatments are required.
A study finds seaweed farming could be set up as a resilient food solution within roughly a year in abrupt sunlight reduction scenarios such as after a nuclear war or a large volcano eruption.
Chemists report studies finding that long-chain fatty acids were produced in ancient hydrothermal vents. Such fatty acids may have contributed to the formation of the first cell membranes that are fundamental to protocells and the origin of life.
Scientists report the extinction of Gigantopithecus blacki, the largest primate to ever inhabit the Earth, that lived between 2 million and 350,000 years ago, was largely due to the inability of the ape to adapt to a diet better suited to a significantly changed environment.
Biologists report the discovery of the oldest known skin, fossilized about 289 million years ago, and possibly the skin from an ancient reptile.
Scientists report the discovery of Tyrannosaurus mcraeensis, an older species of Tyrannosaurus that lived 5-7 million years before Tyrannosaurus rex, and which may be fundamentally important to the evolution of the species.
A study of the Caatinga region in Brazil finds that its semi-arid biome could lose over 90% of mammal species by 2060, even in a best-case scenario of climate change.
A graphene-based implant on the surface of mouse brains, in combination with a two-photon microscope, is shown to capture high-resolution information on neural activity at depths of 250 micrometers.
A review of genetic data from 21 studies with nearly one million participants finds more than 50 new genetic loci and 205 novel genes associated with depression, opening potential targets for drugs to treat depression.
The Upano Valley sites are reported as the oldest Amazonian cities built over 2500 years ago, with a unique "garden urbanism" city design.
A study presents results of a Riyadh-based trial of eight urban heat mitigation scenarios, finding large cooling effects with combinations that include reflective rooftop materials, irrigated greenery, and retrofitting.
Global warming: 2023 is confirmed as the hottest year on record by several science agencies.
NASA reports a figure of 1.4 degrees Celsius above the late 19th century average, when modern record-keeping began.
NOAA reports a figure of 1.35 degrees Celsius.
Berkeley Earth reports a figure of 1.54 degrees Celsius.
An AI-based study shows for the first time that fingerprints from different fingers of the same person share strong detectable similarities.
– NASA fully opens the recovered container with samples from the Bennu asteroid, after three months of failed attempts.
– The first successful cloning of a rhesus monkey is reported by scientists in China.
– A study in Nature finds that the Greenland ice sheet is melting 20% faster than previously estimated, due to the effects of calving-front retreat. The loss of 30 million tonnes of ice an hour is "sufficient to affect ocean circulation and the distribution of heat energy around the globe."
NASA reports the end of the Ingenuity helicopter's operation, after 72 successful flights on Mars, due to a broken rotor blade.
A potential candidate for the first known radio pulsar-black hole binary is reported by astronomers. The heavier of the two lies in the "mass gap" between neutron stars and black holes. The pair are located in the globular cluster NGC 1851.
Two insect-like robots, a mini-bug and a water strider, are reported as being the smallest, lightest, and fastest fully-functional micro-robots ever created.
Bottom trawling is found to release 340 million tonnes of carbon dioxide (CO2) into the atmosphere each year, nearly 1 percent of all global CO2 emissions in addition to acidifying oceans.
– Japan becomes the fifth country to achieve a soft landing on the Moon, with its SLIM mission.
– Biologists report the discovery of "obelisks", a new class of viroid-like elements, and "oblins", their related group of proteins, in the human microbiome.
– A viable and sustainable approach for gold recovery from e-waste is demonstrated.
The discovery of 85 exoplanet candidates based on data from the TESS observatory is reported. All have orbital periods of between 20 and 700 days, with temperatures similar to those of our own Solar System planets.
A global analysis of groundwater levels reports rapid declines of over 0.5 meters per year are widespread and that declines have accelerated over the past four decades in 30% of the world's regional aquifers. The study also shows cases in which depletion trends have reversed following interventions such as policy changes.
– The Laser Interferometer Space Antenna (LISA) is given the go-ahead by the European Space Agency (ESA). It will launch in 2035.
– Astronomers report the detection of water vapor in the atmosphere of GJ 9827 d, an exoplanet about twice the size of Earth.
Elon Musk's startup Neuralink implants their first microchip into a human brain.
A robotic sensor able to read braille with 87.5% accuracy and at twice the speed of a human is demonstrated.
– NASA reports the discovery of a super-Earth called TOI-715 b, located in the habitable zone of a red dwarf star about 137 light-years away.
: a self-powered solar panel cleaning system using an electrodynamic screen, removing contaminants through high-voltage electric fields, is demonstrated (4 Jan), an atmospheric water generator (WaterCube) for humidity levels above 40% is released (9 Jan).
: mouse-tested novel antibiotics class (including Zosurabalpin) against A. baumannii (3 Jan), small-trialed focused ultrasound for blood–brain barrier opening for better medication (Aducanumab) entry against Alzheimer's disease (3 Jan), a review supports the effectiveness of exercise against depression (15 Jan), an available blood test to detect Alzheimer's disease with high accuracy using p-tau217 (22 Jan), one of two small-trialed gene therapies against DFNB9-deafness (24 Jan), phase 3-trialed dengue vaccine effective against at least two of four dengue types (31 Jan)
: ~240,000 particles of microplastics and nanoplastics (~90% nanoplastics) per liter are found in samples of plastic-bottled water (8 Jan), a study estimates harmful chemicals used in plastic materials have caused $249 U.S. healthcare system costs in 2018 (11 Jan), a study indicates fungal infections may be causing millions more deaths annually than previously thought (12 Jan), a study of European plastic waste exports to Vietnam finds a large fraction is dumped in nature and suggests air pollution from melting plastics and untreated wastewater has a significant impact on health (18 Jan).
February
Scientists report a possible way of solving the three-body problem; a notable problem of particular importance to physics and classical mechanics.
Apple releases the Vision Pro as a virtual reality tool with visionOS.
The proposed name Zoozve for Venus' quasi-moon 2002 VE is approved and announced by the International Astronomical Union's Working Group Small Bodies Nomenclature (WGSBN).
A study based on 300-year-long temperature records preserved in Caribbean sclerosponge carbonate skeletons shows industrial-era warming already began in the mid-1860s and that by 2020, global warming was already 1.7±0.1 °C above pre-industrial levels. However, this reference period differs from the one used by the IPCC for the 1.5 °C climate goal, and the study's authors suggest their results provide a better baseline.
A study reports high life satisfaction in people with low incomes among small-scale societies outside mainstream societies, in contrast with conclusions of a 2023 adversarial collaboration.
Scientists report a new species of mussel named Vadumodiolus teredinicola.
Biologists report a new species of jellyfish named Santjordia pagesi.
Reported science studies suggest that cosmic dust particles may have spread, in a process termed panspermia, life to Earth and elsewhere in the Universe.
A battery based on calcium, able to charge and discharge fully 700 times at room temperature, is presented. It is described as a potential alternative to lithium, being 2,500 times more abundant on Earth.
Saturn's moon Mimas is reported to have a subsurface ocean which formed recently (<25 Mya).
– Google renames AI chatbot Bard to Gemini, and makes it available on mobile.
– An analysis of Outer London's Mini-Hollands active transport infrastructures indicates Low Traffic Neighbourhoods are highly effective and cost-efficient measures in terms of health economic benefits.
– The first detection of water molecules on the surface of asteroids is announced, following spectral analysis of 7 Iris and 20 Massalia, two large main-belt objects.
– A study reviews educational content of 18,400 universities worldwide, finding higher education is not transitioning from fossil fuels to renewable energy curricula, to meet the growing demand for a clean energy workforce. On 26 February, a study analyzing funding sources and activities of two prominent academic centers delineates animal agriculture industry entrenchment in academia through support of industry-supported research and policy advocacy amid potential unfavorable policies.
– A global review of harms from personal car automobility finds cars have killed 60–80 million people since their invention, with automobility causing roughly every 34th death, and summarises interventions that are ready for implementation to reduce these largely crash-linked or pollution-mediated deaths stemming from automobility-centrism and dependency.
Astronomers announce the most luminous object ever discovered, quasar QSO J0529-4351, located 12 billion light-years away in the constellation Pictor.
Researchers with the University of Tennessee and University of Missouri publish an academic study about how survivors from the 2011 Joplin tornado recover from "Tornado Brain", a new term for the PTSD of tornado survivors.
– The northern green anaconda (Eunectes akayima), a new species of the giant snake, is described for the first time.
Researchers use artificial intelligence to forecast plasma instabilities in fusion reactors up to 300 milliseconds in advance.
The first neuroimaging study that shows flow state-related brain activity during a creative production task, jazz improvisation, is published. Its results support a theory that creative flow represents optimized specialized processing enabled by extensive experience, relaxing conscious control.
– American company Intuitive Machines' Nova-C lander, named Odysseus, becomes the first commercial vehicle to land on the Moon in the IM-1 mission. The lander includes a Lunar Library that contains a version of the English Wikipedia, artworks, selections from the Internet Archive, portions of the Project Gutenberg, and more. It is projected to reside on the Moon in a readable state for billions of years.
Researchers report studies that, for the first time, measured gravity at microscopic levels.
Three new moons of the Solar System are discovered, one around Uranus and two around Neptune, bringing their total known satellites to 28 and 16, respectively.
– A small trial suggests prebiotic resistant starch, contained in many foods, can help in weight loss (~2.8 kg in 8 weeks).
A study links ultra-processed foods to 32 negative health impacts, including a higher risk of heart disease, cancer, type 2 diabetes, adverse mental health, and early death.
A study reconstructs the genetic event of tail-loss in human ancestors around 25 million years ago.
: LAION releases a first version of BUD-E, a fully open source voice assistant (8 Feb), Minesto's Dragon 12 underwater tidal kite turbines are demonstrated successfully, connected to the Faroe Island's power grid (11 Feb), rice grains as scaffolds containing cultured animal cells are demonstrated (14 Feb), an automatic waste sorting system (ZenRobotics 4.0) that can distinguish between over 500 waste categories is released (15 Feb), researchers describe an AI ecosystem interface of foundation models connected to many APIs as specialized subtask-solvers (16 Feb), precision fermentation-derived beta-lactoglobulin is released as a substitute for whey protein amid growth of a nascent animal-free dairy industry (19 Feb), researchers describe an approach for an optical disk with petabit capacity (21 Feb).
: phase 3-trialed R21/Matrix-M vaccine against Malaria (1 Feb), phase 3-trialed resmetirom as first medication against nonalcoholic steatohepatitis of the liver (7 Feb), a blood test against heart attacks, the top cause of human deaths (12 Feb), a low-cost saliva test against breast cancer (13 Feb), pigs-tested patient repositioning method for magnetic microbot navigation against liver cancer (14 Feb), antibiotic cresomycin against multiple drug-resistant bacterial strains (15 Feb), small-trialed 15 min exposure to 670 nm red light against blood glucose spikes following meals (20 Feb), small-trialed Omalizumab against food allergies (25 Feb), a donor heart is transplanted after 12 hours of preservation and transport using an airplane, small-trialed headgear for gamma stimulation to recruit the glymphatic system to remove brain amyloid against Alzheimer's disease (28 Feb).
: several dietary habits and products including teabags are linked to PFAS intake (4 Feb), an additional three billion people may face water scarcity by 2050 when river pollution is considered, an aspect neglected by prior assessments (6 Feb), HPV infection linked to higher cardiovascular mortality (7 Feb), researchers use simulations to develop an early-warning signal for a potential collapse of the atlantic meridional overturning circulation (AMOC) and suggest it indicates the AMOC is "on route to tipping" (9 Feb), researchers report the H5N1 bird flu virus may be changing and adapting to infect more mammals (12 Feb), researchers report how compounding disturbances could trigger unexpected ecosystem transitions in the Amazon rainforest (14 Feb), harmful chlormequat is found in ~80% of U.S. adult urine samples, rising during 2023, and in oat-based foods widely thought to be healthy (15 Feb), excess amounts of widely-supplemented niacin (B3) are linked to cardiovascular risk (19 Feb), a review concludes available evidence on the use of puberty blockers and cross-sex hormones in minors with gender dysphoria is very limited and based on only a few studies with small numbers which have problematic methodology and quality, warning about their use outside of clinical studies or research projects after careful risk-benefit evaluation (27 Feb).
March
Astronomers report that the surface of Europa, a moon of the planet Jupiter, may have much less oxygen than previously inferred, suggesting that the moon has a less hospitable environment for the existence of lifeforms than may have been considered earlier.
Biochemists report making an RNA molecule that was able to make accurate copies of a different type of RNA molecule, moving closer to an RNA that could make accurate copies of itself, and, as a result, providing support for an RNA world that may have been an essential way of starting the origin of life.
– The first creation of induced pluripotent stem cells for the Asian elephant is reported by Colossal Biosciences, a key step towards de-extinction of the woolly mammoth.
– Geologists identify a 2.4-million-year cycle in deep-sea sedimentary data, caused by an orbital interaction between Earth and Mars.
The Artificial Intelligence Act, the world's first comprehensive legal and regulatory framework for artificial intelligence, is passed by the European Union.
The largest inventory of methane emissions from U.S. oil and gas production finds them to be largely concentrated and around three times the national government inventory estimate. On 28 March, methane emissions from U.S. landfills are quantified, with super-emitting point-sources accounting for almost 90% thereof.
– SpaceX successfully launches the Starship spacecraft, but loses the rocket upon re-entering the atmosphere.
Scientists demonstrate a wireless network of 78 tiny sensors able to gather data from the brain, with potential to be scaled up to thousands of such devices.
Researchers with the National Severe Storms Laboratory, Storm Prediction Center, CIWRO, and the University of Oklahoma's School of Meteorology publish a paper where they state, ">20% of supercell tornadoes may be capable of producing EF4–EF5 damage" and that "the legacy F-scale wind speed ranges may ultimately provide a better estimate of peak tornado wind speeds at 10–15 m AGL for strong–violent tornadoes and a better damage-based intensity rating for all tornadoes" and also put the general 0–5 ranking scale in question.
– The removal of HIV from infected cells using CRISPR gene-editing technology is reported.
– A study outlines identified ecological pandemic prevention measures for policy frameworks.
The Event Horizon Telescope team confirms that strong magnetic fields are spiralling at the edge of the Milky Way’s central black hole, Sagittarius A*. A new image released by the team, similar to M87*, suggests that strong magnetic fields may be common to all black holes.
A study calculates the production costs of diabetes medications such as insulin and Ozempic and finds them to be much lower than market prices.
– LHS 3844 b is confirmed as the first tidally locked super-Earth exoplanet.
: researchers demonstrate simultaneous radiative cooling and solar power generation from the same area (13 Mar).
: a blood test against colon cancer (13 Mar), mice-tested antibody-mediated depletion of myeloid-biased hematopoietic stem cells against immune system aging (27 Mar).
: a small trial links micro- and nanoplastics in carotid artery plaque to higher risks (6 Mar), U.S. land area of ~1200 km² is threatened by coastal subsidence by 2050 due to sea level rise (6 Mar), an EEA risk assessment finds Europe underprepared for climate risks across five broad clusters (11 Mar), a preprint trial suggests large language models could be used for tailored manipulation, being more persuasive than humans when using personal information (21 Mar).
April
1 April – An entirely new class of antibiotics with potent activity against multi-drug resistant bacteria is discovered. These compounds target a protein called LpxH, and are shown to cure bloodstream infections in mice.
3 April – NASA selects three companies – Intuitive Machines, Lunar Outpost and Venturi Astrolab – to develop its Lunar Terrain Vehicle, for use in crewed Artemis missions from 2030 onwards.
4 April – A study in Nature finds that global CO2 emissions increased by only 0.1% in 2023, suggesting that a plateau may have been reached.
5 April – A numerical toolkit designed for modelling warp drive spacetimes is introduced in Classical and Quantum Gravity.
9 April – A rare genetic variation in a gene that makes fibronectin is shown to reduce the odds of developing Alzheimer's disease by over 70%.
12 April
Biologists report that bonobos behave more aggressively than thought earlier.
Scientists report studies suggesting that tardigrades are protected from massive radiation exposure and damage by unique biochemicals, particularly, the Dsup protein.
15 April – The NOAA confirms a fourth global coral bleaching event.
16 April – Scientists at the Riken institute demonstrate "advanced dual-chirped optical parametric amplification", which provides a 50-fold increase in the energy of single-cycle laser pulses. This new technique may advance the development of attosecond lasers.
23 April – The world's largest 3D printer, dubbed Factory of the Future 1.0 (FoF 1.0), is presented by the University of Maine. Using thermoplastic polymers, the machine can print objects as large as long by wide by high, at a rate of per hour.
24 April – Demonstration of synthetic diamond created at 1 atmosphere of pressure in around 150 minutes without needing seeds.
26 April – mRNA-4157/V940, the first personalised melanoma vaccine based on mRNA, enters a final-stage Phase III trial.
29 April – Timothy A. Coleman, with the University of Alabama in Huntsville, Richard L. Thompson with the NOAA Storm Prediction Center, and Dr. Gregory S. Forbes, a retired meteorologist from The Weather Channel publish an article to the Journal of Applied Meteorology and Climatology stating, "it is apparent that the perceived shift in tornado activity from the traditional tornado alley in the Great Plains to the eastern U.S. is indeed real".
May
1 May – A new brain circuit that may act as a "master regulator" of the immune system is reported by scientists at Columbia University.
3 May – China launches its Chang'e 6 probe, a robotic sample-return mission to the far side of the Moon.
6 May
A new theory states that Venus may have lost its water so quickly due to HCO+ dissociative recombination.
People aged over 65 with two copies of the APOE4 gene variant are found to have a 95% chance of developing Alzheimer's disease.
8 May
Google introduces AlphaFold 3, a new AI model for accurately predicting the structure of proteins, DNA, RNA, ligands and more, and how they interact.
Atmospheric gases surrounding 55 Cancri e, a hot rocky exoplanet 41 light-years from Earth, are detected by researchers using the James Webb Space Telescope. NASA reports this as "the best evidence to date for the existence of any rocky planet atmosphere outside our solar system."
9 May
A record annual increase in atmospheric CO2 is reported from the Mauna Loa Observatory in Hawaii, with a jump of 4.7 parts per million (ppm) compared to a year earlier.
A cubic millimetre of the human brain is mapped at nanoscale resolution by a team at Google. This contains roughly 57,000 cells and 150 million synapses, incorporating 1.4 petabytes of data.
A study in Physical Review Letters concludes that the black hole in VFTS 243 likely formed instantaneously, with energy mainly expelled via neutrinos. This means it would have skipped the supernova stage entirely.
10 May – A series of solar storms and intense solar flares impact the Earth, creating aurorae at more southerly and northerly latitudes than usual.
13 May – OpenAI reveals GPT-4o, its latest AI model, featuring improved multimodal capabilities in real time.
15 May
Astronomers report an overview of preliminary analytical studies on returned samples of asteroid 101955 Bennu by the OSIRIS-REx mission.
SPECULOOS-3 b, an exoplanet nearly identical in size to Earth, is discovered orbiting an ultracool dwarf star as small as Jupiter and located 55 light-years from Earth.
Solar energy is combined with synthetic quartz to generate temperatures of more than 1,000°C. This proof-of-concept method shows the potential of clean energy to replace fossil fuels in heavy manufacturing, according to a research team at ETH Zurich.
16 May – A multimodal algorithm for improved sarcasm detection is revealed by the University of Groningen. Trained on a database known as MUStARD, it can examine multiple aspects of audio recordings and has 75% accuracy.
17 May – The world's smallest quantum light detector on a silicon chip is demonstrated by a team at the University of Bristol, 50 times smaller than their previous version.
20 May – The first measurements of an exoplanet's core mass are obtained by the James Webb Space Telescope. This reveals a surprisingly low amount of methane and a super-sized core within the super-Neptune WASP-107b.
23 May
New images from the Euclid space telescope are published, including a view of the Messier 78 star nursery.
Astronomers using TESS report the discovery of Gliese 12 b, a Venus-sized exoplanet located 40 light-years away, with an equilibrium temperature of 315 K (42 °C; 107 °F). This makes it the nearest, transiting, temperate, Earth-sized world located to date.
A team at Oregon State University shows that iron instead of cobalt and nickel can be used as a cathode material in lithium-ion batteries, improving both safety and sustainability.
30 May – NASA reports that the Webb Telescope has discovered JADES-GS-z14-0, the most distant known galaxy, which existed only 290 million years after the Big Bang. Its redshift of 14.32 shatters the previous record of 13.2, set by JADES-GS-z13-0.
31 May – Biologists report that Tmesipteris oblanceolata, a fern ally plant, was found to contain the largest known genome.
June
2 June – China successfully lands Chang'e 6 on the lunar far side. The robotic probe is set to begin sample collection before returning its 2 kg (4.4 lb) cargo on 4 June.
4 June – The China National Space Administration's Chang'e 6 spacecraft lifts off from the surface of the far side of the Moon carrying samples of lunar soil and rocks back to Earth.
5 June – Astronomers identify ASKAP J1935+2148, the slowest-spinning neutron star ever recorded, which completes a rotation just once every 54 minutes.
11 June – Scientists report that serious kidney disease may be associated with human spaceflight.
12 June
The apparent gap in life expectancy between male and female organisms is explained by a team at Osaka University, Japan, who find that reproductive cells drive sex-dependent differences in lifespan and reveal a role for vitamin D in improving longevity.
The Economist reports that China has become a "scientific superpower", citing numerous examples of its rapid development across a wide range of fields.
20 June – Following a surge in population of the Iberian lynx – from 62 mature individuals in 2001 to 648 in 2022 – the International Union for Conservation of Nature removes the animal from its "endangered" list, classing the animal as "vulnerable" instead.
24 June – The discovery of three Super-Earth candidates around HD 48948, a K-type dwarf star located 55 light-years away, is reported by the University of Exeter and the University of St Andrews. One planet lies within the habitable zone.
25 June – China's Chang'e 6 lunar exploration mission successfully returns to Earth after taking rock and soil samples from the far side of the moon. The orbiter proceeded on a mission to carry out observations at Sun-Earth Lagrange point L2 after dropping the sample off to Earth.
July
2 July
Two new satellite galaxies of the Milky Way are discovered – Sextans II and Virgo III.
The value of the fifth busy beaver number, BB(5) = 47,176,870, is proven.
5 July – The first mouse model with a complete, functional human immune system is demonstrated.
9 July – The first local extinction due to sea level rise in the United States is reported: that of the Key Largo tree cactus (Pilosocereus millspaughii) in Florida.
11 July – Using the Hubble Space Telescope, scientists resolve the 3D velocity dispersion profile of a dwarf galaxy for the first time, helping to uncover its dark matter distribution.
15 July
Scientists announce the discovery of a lunar cave, approximately from Apollo 11's landing site.
China announces a plan to visit the asteroid in 2029. Similar to NASA's Double Asteroid Redirection Test (DART), a probe will impact the body at a speed of 10 kilometres per second, and the resulting changes to its orbit will be studied. This will occur when the asteroid is within seven million kilometres of Earth.
29 July – Scientists at the University of Potsdam publish research on the simulation of gravitational waves from a failing warp drive.
30 July
A study on North Sea oil and gas extraction finds that pollution can spike by more than 10,000% within half a kilometre around offshore drilling sites.
The world's first fully automated dental procedure on a human is reported by Boston company Perceptive.
August
1 August – A study in Nature finds that based on current policies, there is a 45% risk of at least one major tipping point by 2300, even if global warming is brought back to below 1.5 °C. The risk is "strongly accelerated" for peak warming above 2.0 °C. The Atlantic Meridional Overturning Current (AMOC) is identified as being at the most urgent risk of collapse – possibly occurring as early as 2040 – followed by the Amazon rainforest in the 2070s.
7 August – Scientists in Australia publish a new 400-year temperature reconstruction for the Coral Sea, showing that recent ocean heat has led to mass bleaching on the Great Barrier Reef.
8 August – A study on the terraforming of Mars suggests that releasing metal nanorods into the planet's atmosphere could warm it by 30 K, and would be far more efficient than trying to do so with greenhouse gases.
12 August
Liquid water is confirmed to be present at depths of below the surface of Mars, based on a new analysis of data from NASA's InSight lander.
An Earth-sized, ultra-short period exoplanet called TOI-6255b is found to be undergoing extreme tidal distortion, caused by the close proximity of its parent star. This has resulted in an egg-shaped planet, likely to be destroyed within 400 million years.
14 August
The World Health Organization (WHO) declares mpox a public health emergency of international concern for the second time in two years, following the spread of the virus in African countries.
Human aging is found to progress in two accelerated bursts from the ages of 44 and 60, rather than being a gradual and linear process.
16 August – The Planetary Habitability Laboratory publishes a report concluding that the Wow! signal was likely caused by a rare astrophysical event, the sudden brightening of a cold molecular cloud triggered by a stellar emission.
23 August – BNT116, the world’s first mRNA lung cancer vaccine, begins a Phase I clinical trial in seven countries.
September
4 September – The ESA/JAXA BepiColombo mission performs the closest ever flyby of a planet, as it speeds past Mercury at a distance of just 165 km (103 mi).
10 September – Researchers in Sweden demonstrate a battery made of carbon fibre composite as stiff as aluminium and energy-dense enough to be used commercially.
11 September
A study by Osaka University finds that the bluestreak cleaner wrasse (Labroides dimidiatus), a small tropical fish, may possess a form of self-awareness.
The Jülich Supercomputing Centre in Germany announces the start of installation for JUPITER, Europe's first exascale supercomputer.
12 September
OpenAI releases its "o1" series of large language models (LLMs), featuring improved capabilities in coding, math, science and other complex tasks.
Jared Isaacman and Sarah Gillis complete the first commercial spacewalk and test slimmed-down spacesuits designed by SpaceX.
16 September – A study in The Lancet estimates that antimicrobial resistance could cause 39 million deaths worldwide between 2025 and 2050.
18 September – The largest known pair of astrophysical jets is discovered within the radio galaxy Porphyrion, extending 23 million light-years from end to end. This surpasses Alcyoneus, the previous record holder at 16 million light-years.
19 September – A recently discovered near-Earth object called 2024 PT5 is calculated to become a "mini-moon" with a temporary orbit around Earth from September 29 until November 25. It will return in the year 2055.
23 September – Scientists publish the first multi-century, multi-model forecast of Antarctic Ice Sheet loss derived from global climate models, which indicates that the West Antarctic ice sheet may undergo a near-total collapse by 2300.
24 September – Researchers at ETH Zurich demonstrate an image-based AI model able to solve Google's reCAPTCHA v2, one of the world's most powerful tools for determining whether a user is human in order to deter bot attacks and spam.
30 September – Researchers from Hebrew University of Jerusalem, Ludwig-Maximilians University Munich and Technical University Dortmund develop a new method merging confocal fluorescence microscopy with microfluidic laminar flow that can detect nanoparticles and viruses quickly. This is achieved using the 3D-printed microscopy approach Brick-MIC.
October
1 October – The European Southern Observatory (ESO) reports the discovery of a sub-Earth-mass planet orbiting Barnard's star, the closest single star to our Sun at just six light-years away.
2 October
Scientists announce the first ever complete mapping of the entire brain of a fruit fly, Drosophila melanogaster, with a detail of 50 million connections between more than 139,000 neurons.
Scientists detect a new jet of carbon monoxide (CO) and previously unseen jets of carbon dioxide (CO2) gas on Centaur 29P by using the James Webb Telescope's Near-Infrared Spectrograph.
Researchers at McGill University report a significant advance in solid-state batteries, which could improve their safety and efficiency. The new technique involves using a polymer-filled porous membrane, allowing lithium ions to move freely and eliminating the interfacial resistance between the solid electrolyte and the electrodes.
3 October – Google releases a new feature, "Video Search", which will allow people to ask a question while filming video of something, and get search results.
4 October – Scientists at Binghamton University develop artificial plants with leaves using biological solar cells, which can perform respiration, photosynthesis and generate electricity.
8 October – Researchers at REMspace achieve the first ever communication between two individuals in lucid dreams, using specially designed equipment.
9 October
A team of engineers, scientists, and astronauts tests the Handheld Universal Lunar Camera (HULC), a new camera designed for NASA's Artemis missions to the Moon.
Pham Tiep, a professor of mathematics, solves two long-standing problems: the Height Zero Conjecture and a problem in Deligne–Lusztig theory. Mathematicians believe the results may lead to advances in science and technology.
Astronomers confirm that Jupiter's Great Red Spot is wobbling and fluctuating in size, after observing a time-lapse video made from images captured by the Hubble Space Telescope between December 2023 and March 2024.
10 October
Scientists use a high-level machine learning model "SHBoost", to process data and estimate precise stellar properties for 217 million stars observed by the Gaia mission.
NASA's space observatories and ISRO's AstroSat observe that a massive black hole has torn apart one star and is now using the stellar wreckage to batter another star; every strike creates a huge splash of gas and X-rays.
Doctors at Copenhagen University Hospital complete surgery to remove a newly described benign tumor variant, dubbed the "breakdance bulge", from the scalp of a man who had been practicing headspins for several years as a breakdancer.
11 October
Astronomers observe the "inside-out" growth of NGC 1549 by using the James Webb Space Telescope. Researchers assume that it could solve the mystery of how these complex structures are being formed from gas clouds.
Astronomers using the Gaia space telescope observe 55 runaway stars being ejected from R136 at about 62,000 mph (100,000 km/h) in the Large Magellanic Cloud.
12 October – The long-period comet C/2023 A3 (Tsuchinshan–ATLAS) makes its closest approach to Earth.
13 October – SpaceX achieves the first successful return and capture of a Super Heavy booster from Starship, the biggest and most powerful rocket ever to fly.
14 October – NASA launches the Europa Clipper from Kennedy Space Center, which will study the Jovian moon Europa while orbiting around Jupiter.
15 October – Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences achieve the first coherent picture of atomic nuclei made from only quarks and gluons, fusing this picture with the model of nuclei made from protons and neutrons.
16 October
NASA/ESA's Hubble Space Telescope observes a stellar volcano on R Aquarii that blasts out huge filaments of glowing gas.
Researchers at the University of Birmingham Medical School encounter a human cadaver with three penises.
Researchers from the University of Science and Technology of China achieve coherent population trapping in a semiconductor double quantum dot system.
A new world record for wireless transmission is set by a team at University College London, who achieve 938 Gigabits per second (Gb/s) over a frequency range of 5-150 Gigahertz (GHz).
17 October
Scientists observe a black hole corona using NASA's Imaging X-ray Polarimetry Explorer, and determine its shape for the first time.
A study of 25 crowdsourced ideas aimed at reducing political polarisation in the United States is published in the journal Science.
Neuroscientists discover that SUMO proteins trigger the reactivation of neural stem cells and allow them to develop and repair the brain. Scientists believe this mechanism could advance the treatment of common neurodegenerative diseases.
18 October
Researchers discover a fossilized sawfly in Australia and describe it as Baladi warru with the approval of the Mudgee Local Aboriginal Land Council.
Researchers from Hebrew University of Jerusalem introduce a holography-based computational technique that could improve medical optical imaging.
Researchers from the University of Toronto develop an antibiotic that triggers bacterial cells to self-destruct.
Researchers at Kumamoto University achieve reproduction of hematopoietic stem cells in vitro.
19 October
Researchers at Rosario University develop a plant-based food supplement designed to protect bees' brains from neurotoxins.
Researchers from the University of Michigan demonstrate an ultrafast all-optical switch driven by pulses of circularly polarized light, which could advance fiber-optic communication.
Predicted and scheduled events
Upcoming astronomical and space events for 2024 according to The New York Times.
Expected system first light of the Vera C. Rubin Observatory and launch of the NASA-ISRO Synthetic Aperture Radar.
Science-related budgets
: Various requested changes to budgets of science-related US institutions have been described with some information about the respective planned research programs.
Astronomical events
Close approach of asteroid to Earth
Potential collision of lost asteroid with Earth
See also
:Category:Science events
:Category:Science timelines
List of emerging technologies
List of years in science
References
2024 in science
21st century in science
2020s in technology
2024-related lists
Science timelines by year
Possibilism (geography)
Possibilism in cultural geography is the theory that the environment sets certain constraints or limitations, but culture is otherwise determined by social conditions.
In cultural ecology, Marshall Sahlins used this concept in order to develop alternative approaches to the environmental determinism dominant at that time in ecological studies. Strabo (born 64 BC) posited that humans can make things happen by their own intelligence over time. He cautioned against the assumption that nature and the actions of humans were determined by the physical environment they inhabited, and observed that humans were the active elements in a human-environmental partnership.
The controversy between geographical possibilism and determinism might be considered one of (at least) three dominant epistemological controversies of contemporary geography. The other two controversies are:
1) the reason why economic strategies can revive life on Earth
2) the contention between Mackinder and Kropotkin about what is—or should be—geography.
Possibilism in geography is, thus, considered a distinct approach to geographical knowledge, directly opposed to geographical determinism.
References
External links
University of Washington lecture
Valparaiso University on La Blache
Cultural geography
History of geography
Geniocracy
Geniocracy is the framework for a system of government which was first proposed by Raël (leader of the International Raëlian Movement) in 1977 and which advocates a certain minimal criterion of intelligence for political candidates and also the electorate.
Definition
The term geniocracy comes from the word genius, and describes a system that is designed to select for intelligence and compassion as the primary factors for governance. While having a democratic electoral apparatus, it differs from traditional liberal democracy by instead suggesting that candidates for office and the electorate should meet a certain minimal criterion of problem-solving or creative intelligence. The thresholds proposed by the Raëlians are 50% above the mean for an electoral candidate and 10% above the mean for an elector. Notably, if the distribution of intelligence is assumed to be symmetric (as it is for IQ), this would imply that the majority of the population has no right to vote.
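To see why these thresholds would exclude most of the electorate, the sketch below works through the arithmetic under an illustrative assumption not made explicit by the source: that intelligence is measured on an IQ-like scale (normally distributed, mean 100, standard deviation 15) and that "X% above the mean" is read as a score X% above 100.

```python
# Rough illustration of how many people the proposed thresholds would exclude,
# assuming an IQ-like scale: normally distributed with mean 100 and SD 15.
# Both the scale and the reading of "X% above the mean" are assumptions made
# here for illustration; they are not specified in the proposal itself.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

def share_excluded(threshold):
    """Fraction of the population scoring below the given threshold."""
    return iq.cdf(threshold)

voter_threshold = 100 * 1.10      # "10% above the mean" -> a score of 110
candidate_threshold = 100 * 1.50  # "50% above the mean" -> a score of 150

print(f"excluded from voting (below {voter_threshold:.0f}): "
      f"{share_excluded(voter_threshold):.0%}")     # roughly 75%
print(f"excluded from candidacy (below {candidate_threshold:.0f}): "
      f"{share_excluded(candidate_threshold):.0%}")  # well over 99%
```

On this reading, roughly three quarters of the population would fall below the voting threshold, consistent with the observation above that a majority would have no right to vote.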
Justifying the method of selection
This method of selectivity is deliberate so as to address what the concept considers to be flaws in the current systems of democracy. The primary object of criticism is the inability of majoritarian consensus to provide a reasonable platform for intelligent decision-making for the purpose of solving problems permanently. Geniocracy's criticism of this system is that the institutions of democracy become more concerned with appealing to popular consensus through emotive issues than they are in making long-term critical decisions, especially those that may involve issues that are not immediately relevant to the electorate. It asserts that political mandate is something that is far too important to simply leave to popularity, and asserts that the critical decision-making that is required for government, especially in a world of globalization, cannot be based upon criteria of emotive or popular decision-making. In this respect, geniocracy derides liberal democracy as a form of "mediocracy". In a geniocracy, Earth would be ruled by a worldwide geniocratic government.
Agenda
Part of the geniocratic agenda is to promote the idea of a world government system, deriding the current state-system as inadequate for dealing with contemporary global issues that are typical of globalisation, such as environmentalism, social justice, human rights, and the current economic system. In line with this, geniocracy proposes a different economic model (called Humanitarianism in the book Intelligent Design: Message from the Designers).
Response to criticism
As a response to its controversial attitudes about selectivity, one of the more general responses is to point out that universal suffrage, the current system, already discriminates to some degree and varyingly in different countries as to who is allowed to vote. Primarily, this discrimination is against women, minority racial groups, refugees, immigrants, minority religious groups, minority ethnic groups, minors, elderly people, those living in poverty and homelessness, incarcerated and previously incarcerated people, and the mentally or physically incapacitated. This is on the basis that their ability to contribute to the decision-making process is either flawed or invalid for the purpose of the society.
Status
The current difficulty in the ideas of geniocracy is that the means of assessing intelligence are ill-defined. One idea offered by Raël in Geniocracy is to have specialists such as psychologists, neurologists, ethnologists, etc., perfect or choose among existing ones, a series of tests that would define each person's level of intelligence. They should be designed to measure intellectual potential rather than accumulation of knowledge.
Some argue other components deemed necessary for a more rounded understanding of intelligence include concepts like emotional intelligence. As such, geniocracy's validity cannot really be assessed until better and more objective methods of intelligence assessment are made available.
The matter of confronting moral problems that may arise is not addressed in the book Geniocracy; many leaders may be deeply intelligent and charismatic (having both high emotional/social intelligence and IQ) according to current means of measuring such factors, but no current scientific tests are a reliable enough measure for one's ability to make humanitarian choices (although online tests such as those used by retail chains to select job applicants may be relevant).
The lack of the scientific rigour necessary for geniocracy to be included as a properly testable political ideology can be noted in a number of modern and historical dictatorships as well as oligarchies. Because of the controversies surrounding geniocracy, Raël presents the idea as a classic utopia or provocative ideal and not necessarily a model that humanity will follow.
Democratically defined regions
The author of Geniocracy recommends (though does not require) a world government with 12 regions. Inhabitants would vote for which region they want to be part of. After the regions are defined, each is further divided into 12 sectors according to the same democratic principle. While the sectors of a given region are defined as having equal numbers of inhabitants, the regions themselves may differ in population, and each region's voting power would be proportional to its population.
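To illustrate the proportionality rule described above, the following Python sketch computes regional voting weights from population figures. The region names and numbers are invented for the example and are not taken from Geniocracy.

    # Hypothetical illustration of the proportional-voting rule described above:
    # a region's voting power is its share of the total population, and each
    # region is split into 12 sectors of equal population.
    populations = {          # invented example figures, not from the book
        "Region A": 900_000_000,
        "Region B": 600_000_000,
        "Region C": 300_000_000,
    }

    total = sum(populations.values())

    for region, pop in populations.items():
        voting_power = pop / total
        sector_size = pop / 12
        print(f"{region}: voting power {voting_power:.1%}, "
              f"12 sectors of about {sector_size:,.0f} inhabitants each")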
See also
Idiocracy (a dark comedy film) depicts the United States in 2505, where the vast majority of the population is unintelligent (by current standards) despite the widespread use of IQ tests.
Superman: Red Son ends with Lex Luthor establishing a utopian but elitist world government under the philosophy of "Luthorism" which is essentially a geniocracy run by Luthor and other geniuses.
Plato's Republic
Meritocracy
Netocracy
Noocracy
Transhumanism
Technocracy
Notes
References
Rael, La géniocratie. L'Edition du message, 1977.
Rael, Geniocracy: Government of the People, for the People, by the Geniuses. Nova Distribution, 2008.
Further reading
External links
Geniocracy.org
Geniocracy Review on RaelNews
Geniocracy piece on RaelRadio
'Geniocracy is the solution' - article on Raelnews
Built environment | The term built environment refers to human-made conditions and is often used in architecture, landscape architecture, urban planning, public health, sociology, and anthropology, among others. These curated spaces provide the setting for human activity and were created to fulfill human desires and needs. The term can refer to a plethora of components including the traditionally associated buildings, cities, public infrastructure, transportation, open space, as well as more conceptual components like farmlands, dammed rivers, wildlife management, and even domesticated animals.
The built environment is made up of physical features. However, when studied, the built environment often highlights the connection between physical space and social consequences. It impacts the environment and how society physically maneuvers and functions, as well as less tangible aspects of society such as socioeconomic inequity and health. Various aspects of the built environment contribute to scholarship on housing and segregation, physical activity, food access, climate change, and environmental racism.
Features
Multiple components make up the built environment. Below are some prominent examples of what makes up the urban fabric:
Buildings
Buildings are used for a multitude of purposes: residential, commercial, community, institutional, and governmental. Building interiors are often designed to mediate external factors and provide space to conduct activities, whether that is to sleep, eat, work, etc. The structure of the building helps define the space around it, giving form to how individuals move through the space around the building.
Public infrastructure
Public infrastructure covers a variety of things like roads, highways, pedestrian circulation, public transportation, and parks.
Roads and highways are important features of the built environment that enable vehicles to access a wide range of urban and non-urban spaces. They are often compared to veins within a cardiovascular system in that they circulate people and materials throughout a city, similar to how veins distribute energy and materials to cells. Pedestrian circulation is vital for the walkability of a city and for general access on a human scale. The quality of sidewalks and walkways has an impact on safety and accessibility for those using these spaces. Public transportation is essential in urban areas, particularly in cities and areas that have a diverse population and income range.
Agriculture
Agricultural production accounts for roughly 52% of U.S. land use. Not only does population growth cause an expansion of cities, it also necessitates more agriculture to accommodate the growing demand for food.
History
The term "built environment" was coined in the 1980s and became widespread in the 1990s; it places the concept in direct contrast to the supposedly "unbuilt" environment. The term describes a wide range of fields that form an interdisciplinary concept, one that has been accepted as an idea since classical antiquity and potentially before. The study of anthropology has made it possible to examine how the built environment progressed into what it is today. When people are able to travel outside of urban centers and areas where the built environment is already prominent, they push the boundaries of that built environment into new areas. While other factors, such as advancements in architecture or agriculture, have influenced the built environment, transportation allowed for its spread and expansion.
Pre–industrial Revolution
Agriculture, the cultivation of soil to grow crops and the raising of animals to provide food and other products, was first developed about 12,000 years ago. This switch, also called the Neolithic Revolution, was the beginning of a preference for permanent settlements and of altering the land to grow crops and farm animals. It can be thought of as the start of the built environment, the first attempt to make permanent changes to the surrounding environment for human needs. The first cities appeared around 7500 BCE, dotted along where land was fertile and good for agricultural use. In these early communities, the priority was to ensure that basic needs were being met. The built environment, while not as extensive as it is today, was beginning to be cultivated with the implementation of buildings, paths, farmland, and the domestication of animals and plants. Over the next several thousand years, these smaller cities and villages grew into larger ones in which trade, culture, education, and economics were driving factors. As cities began to grow, they needed to accommodate more people and shifted from meeting survival needs to prioritizing comfort and desires (many individuals today still do not have their basic needs met; this idea of a shift is framed within the evolution of society as a whole). This shift caused the built aspect of these cities to grow and expand to meet the growing population's needs.
Industrial Revolution
The pinnacle of city growth occurred during the Industrial Revolution, due to the demand for jobs created by the rise of factories. Within the United States, cities grew rapidly from the 1880s to the early 1900s. This demand led individuals to move from farms to cities, which resulted in the need to expand city infrastructure and created a boom in population size. The rapid growth of urban populations led to problems of noise, sanitation, health, traffic jams, pollution, and compact living quarters. In response to these issues, mass transit systems, including trolleys, cable cars, and subways, were built and prioritized in an effort to improve the quality of the built environment. An example of this during the Industrial Revolution was the City Beautiful movement, which emerged in the 1890s as a result of the disorder and unhealthy living conditions within industrial cities. The movement promoted improved circulation, civic centers, better sanitation, and public spaces. The goal of these improvements was to improve the quality of life of city residents, as well as to make cities more profitable. The City Beautiful movement, though it declined in popularity over the years, provided a range of urban reforms, highlighting city planning, civic education, public transportation, and municipal housekeeping.
Post Industrial Revolution to present
Cars, as well as trains, became more accessible to the general public due to advancements in steel, chemical, and fuel production. In the 1920s, cars became more affordable thanks to Henry Ford's advances in assembly line production. With this new burst of personal transportation, new infrastructure was built to accommodate it. In the United States, freeway construction began in 1956 in an attempt to eliminate unsafe roads, traffic jams, and insufficient routes. The creation of freeways and interstate transportation systems opened up the possibility and ease of travel outside a person's city. This allowed an ease of travel not previously found and changed the fabric of the built environment. New streets were built within cities to accommodate cars as they became increasingly popular, and railway lines were built to connect areas not previously connected, for both public transportation and the transportation of goods. With these changes, the scope of a city began to expand outside its borders. The widespread use of cars and public transportation allowed for the development of suburbs; the working individual was able to commute long distances to work every day. Suburbs blurred the line of city "borders"; day-to-day life that may once have been confined to a pedestrian radius now encompassed a wide range of distances due to the use of cars and public transportation. This increased accessibility allowed for the continued expansion of the built environment.
Currently, the built environment is typically used to describe the interdisciplinary field that encompasses the design, construction, management, and use of human-made physical influence as an interrelated whole. The concept also includes the relationship of these elements of the built environment with human activities over time—rather than a particular element in isolation or at a single moment in time, these aspects act together via the multiplier effect. The field today draws upon areas such as economics, law, public policy, sociology, anthropology, public health, management, geography, design, engineering, technology, and environmental sustainability to create a large umbrella that is the built environment.
There are some in modern academia who regard the built environment as all-encompassing, arguing that there is no natural environment left. This argument comes from the idea that the built environment refers not only to that which is built, arranged, or curated, but also to what is managed, controlled, or allowed to continue. What is referred to as "nature" today can be seen as only a commodity placed into an environment that is constructed to fulfill human will and desire. This commodity allows humans to enjoy the view and experience of nature without it inconveniencing their day-to-day life. It can be argued that the forests and wildlife parks that are held on a pedestal and are seemingly natural are in reality curated and allowed to exist for the enjoyment of the human experience. The planet has been irrevocably changed by human interaction: wildlife has been hunted, harvested, brought to the brink of extinction, and modified to fit human needs. This argument juxtaposes the view that the built environment is only what is built, and that the forests, oceans, wildlife, and other aspects of nature are their own entity.
Impact
The term built environment encompasses a broad range of categories, all of which have potential impacts. Both the environment and people are heavily affected by these impacts.
Health
The built environment can heavily impact the public's health. Historically, unsanitary conditions and overcrowding within cities and urban environments have led to infectious diseases and other health threats. Dating back to Georges-Eugene Haussmann's comprehensive plans for urban Paris in the 1850s, concern for lack of air-flow and sanitary living conditions has inspired many strong city planning efforts. During the 19th century in particular, the connection between the built environment and public health became more apparent as life expectancy decreased and diseases, as well as epidemics, increased. Today, the built environment can expose individuals to pollutants or toxins that cause chronic diseases like asthma, diabetes, and cardiovascular disease, along with many others. There is evidence to suggest that chronic disease can be reduced through healthy behaviors like a proper active lifestyle, good nutrition, and reduced exposure to toxins and pollutants. Yet the built environment is not always designed to facilitate those healthy behaviors. Many urban environments, in particular suburbs, are automobile reliant, making it difficult or unreasonable to walk or bike to places. This condition not only adds to pollution, but can also make it hard to maintain a proper active lifestyle. Public health research has expanded the list of concerns associated with the built environment to include healthy food access, community gardens, mental health, physical health, walkability, and cycling mobility. Designing areas of cities with good public health in mind is linked to creating opportunities for physical activity, community involvement, and equal opportunity within the built environment. Urban forms that encourage physical activity and provide adequate public resources for involvement and upward mobility have been shown to have far healthier populations than those that discourage such uses of the built environment.
Social
Housing and segregation
Features in the built environment present physical barriers which constitute the boundaries between neighborhoods. Roads and railways, for instance, play a large role in how people can feasibly navigate their environment. This can result in the isolation of certain communities from various resources and from each other. The placement of roads, highways, and sidewalks also determines what access people have to jobs and childcare close to home, especially in areas where most people do not own vehicles. Walkability directly influences community, so the way a neighborhood is built affects the outcomes and opportunities of the community that lives there. Even less physically imposing features, such as architectural design, can distinguish the boundaries between communities and decrease movement across neighborhood lines.
The segregation of communities is significant because the qualities of any given space directly impact the wellbeing of the people who live and work there. George Galster and Patrick Sharkey refer to this variation in geographic context as "spatial opportunity structure", and claim that the built environment influences socioeconomic outcomes and general welfare. For instance, the history of redlining and housing segregation means that there is less green space in many Black and Hispanic neighborhoods. Access to parks and green space has been shown to be good for mental health, which puts these communities at a disadvantage. This historical segregation has contributed to environmental injustice, as these neighborhoods suffer from hotter summers since urban asphalt absorbs more heat than trees and grass. The effects of spatial segregation initiatives in the built environment, such as redlining in the 1930s and 1940s, are long lasting. The inability to feasibly move from forcibly economically depressed areas into more prosperous ones creates fiscal disadvantages that are passed down generationally. With proper public education access tied to the economic prosperity of a neighborhood, many formerly redlined areas continue to lack educational opportunities for residents and, thus, job and higher-income opportunities are limited.
Environmental
The built environment has a multitude of impacts on the planet; some of the most prominent are greenhouse gas emissions and the urban heat island effect.
The built environment expands along with factors like population and consumption, which directly impact the output of greenhouse gases. As cities and urban areas grow, the need for transportation and structures grows as well. In 2006, transportation accounted for 28% of total greenhouse gas emissions in the U.S. A building's design, location, orientation, and construction process heavily influence greenhouse gas emissions. Commercial, industrial, and residential buildings account for roughly 43% of U.S. emissions through their energy usage. In 2005, agricultural land use accounted for 10–12% of total human-caused greenhouse gas emissions worldwide.
Urban heat islands are pockets of higher-temperature areas, typically within cities, that affect the environment as well as quality of life. Urban heat islands are caused by the reduction of natural landscape in favor of urban materials like asphalt, concrete, and brick. This change from natural landscape to urban materials epitomizes the built environment and its expansion.
See also
Center for the Built Environment
City planning
Environmental psychology
Environmental sustainability
Healing environments
Healthy building
Indoor air quality
International Association of People-Environment Studies
Microbiomes of the built environment
National Building Museum
Natural environment
Public health
Social environment
Urbanism
Urban planning
Vernacular architecture
Weatherization
References
Further reading
Jeb Brugmann, Welcome to the urban revolution: how cities are changing the world, Bloomsbury Press, 2009
Jane Jacobs, The Death and Life of Great American Cities, Random House, New York, 1961
Andrew Knight & Les Ruddock, Advanced Research Methods in the Built Environment, Wiley-Blackwell 2008
Paul Chynoweth, The Built Environment Interdiscipline: A Theoretical Model for Decision Makers in Research and Teaching, Proceedings of the CIB Working Commission (W089) Building Education and Research Conference, Kowloon Shangri-La Hotel, Hong Kong, 10–13 April 2006.
Richard J. Jackson with Stacy Sinclair, Designing Healthy Communities, Jossey-Bass, San Francisco, 2012
Russell P. Lopez, The Built Environment and Public Health, Jossey-Bass, San Francisco, 2012
External links
Australian Sustainable Built Environment Council (ASBEC)
Faculty of Built Environment, UTM, Skudai, Johor, Malaysia
Designing Healthy Communities, link to nonprofit organization and public television documentary of same name
The Built Environment and Health: 11 Profiles of Neighborhood Transformation
Curriculum vitae | In English, a curriculum vitae (, Latin for 'course of life', often shortened to CV) is a short written summary of a person's career, qualifications, and education. This is the most common usage in British English. In North America, the term résumé (also spelled resume) is used, referring to a short career summary.
The term curriculum vitae and its abbreviation, CV, are also used especially in academia to refer to extensive or even complete summaries of a person's career, qualifications, and education, including publications and other information. This has caused the widespread misconception that it is incorrect to refer to short CVs as CVs in American English and that short CVs should be called résumés, but this is not supported by the usage recorded in American dictionaries. For example, the University of California, Davis notes that "[i]n the United States and Canada, CV and resume are sometimes used interchangeably" while describing the common distinction made in North-American academia between the use of these terms to refer to documents with different contents and lengths.
In many countries, a short CV is typically the first information that a potential employer receives from a job-seeker, and CVs are typically used to screen applicants, often followed by an interview. CVs may also be requested for applicants to postsecondary programs, scholarships, grants, and bursaries. In the 2010s it became popular for applicants to provide an electronic version of their CV to employers by email, through an employment website, or published on a job-oriented social-networking service such as LinkedIn.
Contents
General usage
In general usage in all English-speaking countries, a CV is short (usually a maximum of two sides of A4 paper) and therefore contains only a summary of the job seeker's employment history, qualifications, education, and some personal information. Such a short CV is called a résumé only in North America, where, outside academia, it is nevertheless also often called a CV. CVs are often tailored to change the emphasis of the information according to the particular position for which the job seeker is applying. A CV can also be extended to include an extra page for the jobseeker's publications if these are important for the job.
In academia
In academic and medical careers, a CV is usually a comprehensive document that provides extensive information on education, publications, and other achievements. Such a CV is generally used when applying for a position in academia, while shorter CVs (also called résumés in North America) are generally used when applying for a position in industry, non-profit organizations, and the public sector.
Etymology, spelling, and plural
The term curriculum vitae can be loosely translated as '[the] course of [one's] life'. It is a loanword from Neo-Latin, which is why it was traditionally spelled curriculum vitæ with the ligature æ in English as well, though this spelling is now rare.
In English, the plural of curriculum alone is often curriculums instead of the traditional Latin plural curricula, which is why both forms are recorded in English dictionaries. The English plural of curriculum vitae is however almost always curricula vitae as in Latin, and this is the only form recorded in the Merriam-Webster, American Heritage, and Oxford English dictionaries, for example. (The very rare claim that the Latin plural should be curricula vitarum is in fact an incorrect hypercorrection based on superficial knowledge of Latin.)
See also
Applicant tracking system
Background check
Cover letter
Europass – European Standardised model
Human resources
Résumé fraud
Video résumé
Explanatory notes
References
External links
CV guide – Massachusetts Institute of Technology – Global Education & Career Development, United States
Cover Letter guide – Massachusetts Institute of Technology – Global Education & Career Development, United States
Human ecology | Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The philosophy and study of human ecology has a diffuse history with advancements in ecology, geography, sociology, psychology, anthropology, zoology, epidemiology, public health, and home economics, among others.
Historical development
The roots of ecology as a broader discipline can be traced to the Greeks and to a lengthy list of developments in natural history science. Ecology has also developed notably in other cultures. Traditional knowledge, as it is called, includes the human propensity for intuitive knowledge, intelligent relations, and understanding, and for passing on information about the natural world and the human experience. The term ecology was coined by Ernst Haeckel in 1866 and defined by direct reference to the economy of nature.
Like other contemporary researchers of his time, Haeckel adopted his terminology from Carl Linnaeus, where human ecological connections were more evident. In his 1749 publication, Specimen academicum de oeconomia naturae, Linnaeus developed a science that included the economy and polis of nature. Polis stems from its Greek roots for a political community (originally based on the city-states), sharing its roots with the word police in reference to the promotion of growth and maintenance of good social order in a community. Linnaeus was also the first to write about the close affinity between humans and primates. Linnaeus presented early ideas found in modern aspects of human ecology, including the balance of nature, while highlighting the importance of ecological functions (ecosystem services or natural capital in modern terms): "In exchange for performing its function satisfactorily, nature provided a species with the necessaries of life." The work of Linnaeus influenced Charles Darwin and other scientists of his time, who used Linnaeus' terminology (i.e., the economy and polis of nature) with direct implications for matters of human affairs, ecology, and economics.
Ecology is not just a biological science but a human science as well. An early and influential social scientist in the history of human ecology was Herbert Spencer. Spencer influenced and was influenced by the work of Charles Darwin. He coined the phrase "survival of the fittest", was an early founder of sociology, in which he developed the idea of society as an organism, and he created an early precedent for the socio-ecological approach that was the subsequent aim of, and link between, sociology and human ecology.
The history of human ecology has strong roots in the geography and sociology departments of the late 19th century. In this context, a major historical development or landmark that stimulated research into the ecological relations between humans and their urban environments can be found in George Perkins Marsh's book Man and Nature; or, physical geography as modified by human action, which was published in 1864. Marsh was interested in the active agency of human-nature interactions (an early precursor to urban ecology or human niche construction), with frequent reference to the economy of nature.
In 1894, an influential sociologist at the University of Chicago named Albion W. Small collaborated with sociologist George E. Vincent and published a "'laboratory guide' to studying people in their 'every-day occupations.'" This was a guidebook that trained students of sociology how they could study society in a way that a natural historian would study birds. Their publication "explicitly included the relation of the social world to the material environment."
The first English-language use of the term "ecology" is credited to American chemist and founder of the field of home economics, Ellen Swallow Richards. Richards first introduced the term as "oekology" in 1892, and subsequently developed the term "human ecology".
The term "human ecology" first appeared in Ellen Swallow Richards' 1907 Sanitation in Daily Life, where it was defined as "the study of the surroundings of human beings in the effects they produce on the lives of men". Richards' use of the term recognized humans as part of, rather than separate from, nature. The term made its first formal appearance in the field of sociology in the 1921 book "Introduction to the Science of Sociology", published by Robert E. Park and Ernest W. Burgess (also from the sociology department at the University of Chicago). Their student, Roderick D. McKenzie, helped solidify human ecology as a sub-discipline within the Chicago school. These authors emphasized the difference between human ecology and ecology in general by highlighting cultural evolution in human societies.
Human ecology has a fragmented academic history with developments spread throughout a range of disciplines, including: home economics, geography, anthropology, sociology, zoology, and psychology. Some authors have argued that geography is human ecology. Much historical debate has hinged on the placement of humanity as part of or as separate from nature. In light of the branching debate over what constitutes human ecology, recent interdisciplinary researchers have sought a unifying scientific field they have titled coupled human and natural systems that "builds on but moves beyond previous work (e.g., human ecology, ecological anthropology, environmental geography)." Other fields or branches related to the historical development of human ecology as a discipline include cultural ecology, urban ecology, environmental sociology, and anthropological ecology. Even though the term "human ecology" was popularized in the 1920s and 1930s, studies in this field had been conducted since the early nineteenth century in England and France.
In 1969, College of the Atlantic in Bar Harbor, Maine, was founded as a school of human ecology. Since its first enrolled class of 32 students, the college has grown into a small liberal arts institution with about 350 students and 35 full-time faculty. Every graduate receives a degree in human ecology, an interdisciplinary major which each student designs to fit their own interests and needs.
Biological ecologists have traditionally been reluctant to study human ecology, gravitating instead to the allure of wild nature. Human ecology has a history of focusing attention on humans' impact on the biotic world. Paul Sears was an early proponent of applying human ecology, addressing topics such as the human population explosion, global resource limits, and pollution, and in 1954 he published a comprehensive account of human ecology as a discipline. He saw the vast "explosion" of problems humans were creating for the environment and reminded us that "what is important is the work to be done rather than the label." "When we as a profession learn to diagnose the total landscape, not only as the basis of our culture, but as an expression of it, and to share our special knowledge as widely as we can, we need not fear that our work will be ignored or that our efforts will be unappreciated." Recently, the Ecological Society of America has added a Section on Human Ecology, indicating the increasing openness of biological ecologists to engage with human-dominated systems and the acknowledgement that most contemporary ecosystems have been influenced by human action.
Overview
Human ecology has been defined as a type of analysis, applied to the relations among human beings, that was traditionally applied to plants and animals in ecology. Toward this aim, human ecologists (who can include sociologists) integrate diverse perspectives from a broad spectrum of disciplines covering "wider points of view". In its 1972 premier edition, the editors of Human Ecology: An Interdisciplinary Journal gave an introductory statement on the scope of topics in human ecology. Their statement provides a broad overview of the interdisciplinary nature of the topic:
Genetic, physiological, and social adaptation to the environment and to environmental change;
The role of social, cultural, and psychological factors in the maintenance or disruption of ecosystems;
Effects of population density on health, social organization, or environmental quality;
New adaptive problems in urban environments;
Interrelations of technological and environmental changes;
The development of unifying principles in the study of biological and cultural adaptation;
The genesis of maladaptions in human biological and cultural evolution;
The relation of food quality and quantity to physical and intellectual performance and to demographic change;
The application of computers, remote sensing devices, and other new tools and techniques
Forty years later in the same journal, Daniel G. Bates (2012) notes lines of continuity in the discipline and the way it has changed:
Today there is greater emphasis on the problems facing individuals and how actors deal with them with the consequence that there is much more attention to decision-making at the individual level as people strategize and optimize risk, costs and benefits within specific contexts. Rather than attempting to formulate a cultural ecology or even a specifically "human ecology" model, researchers more often draw on demographic, economic and evolutionary theory as well as upon models derived from field ecology.
While theoretical discussions continue, research published in Human Ecology Review suggests that recent discourse has shifted toward applying principles of human ecology. Some of these applications focus instead on addressing problems that cross disciplinary boundaries or transcend those boundaries altogether. Scholarship has increasingly tended away from Gerald L. Young's idea of a "unified theory" of human ecological knowledge—that human ecology may emerge as its own discipline—and more toward the pluralism best espoused by Paul Shepard: that human ecology is healthiest when "running out in all directions". But human ecology is neither anti-discipline nor anti-theory, rather it is the ongoing attempt to formulate, synthesize, and apply theory to bridge the widening schism between man and nature. This new human ecology emphasizes complexity over reductionism, focuses on changes over stable states, and expands ecological concepts beyond plants and animals to include people.
Application to epidemiology and public health
The application of ecological concepts to epidemiology has similar roots to those of other disciplinary applications, with Carl Linnaeus having played a seminal role. However, the term appears to have come into common use in the medical and public health literature in the mid-twentieth century. This was strengthened in 1971 by the publication of Epidemiology as Medical Ecology, and again in 1987 by the publication of a textbook on Public Health and Human Ecology. An "ecosystem health" perspective has emerged as a thematic movement, integrating research and practice from such fields as environmental management, public health, biodiversity, and economic development. Drawing in turn from the application of concepts such as the social-ecological model of health, human ecology has converged with the mainstream of global public health literature.
Connection to home economics
In addition to its links to other disciplines, human ecology has a strong historical linkage to the field of home economics through the work of Ellen Swallow Richards, among others. However, as early as the 1960s, a number of universities began to rename home economics departments, schools, and colleges as human ecology programs. In part, this name change was a response to perceived difficulties with the term home economics in a modernizing society, and reflects a recognition of human ecology as one of the initial choices for the discipline which was to become home economics. Current human ecology programs include the University of Wisconsin School of Human Ecology, the Cornell University College of Human Ecology, and the University of Alberta's Department of Human Ecology, among others.
Niche of the Anthropocene
Changes to the Earth by human activities have been so great that a new geological epoch named the Anthropocene has been proposed. The human niche or ecological polis of human society, as it was known historically, has created entirely new arrangements of ecosystems as we convert matter into technology. Human ecology has created anthropogenic biomes (called anthromes). The habitats within these anthromes reach out through our road networks to create what has been called technoecosystems containing technosols. Technodiversity exists within these technoecosystems. In direct parallel to the concept of the ecosphere, human civilization has also created a technosphere. The way that the human species engineers or constructs technodiversity into the environment threads back into the processes of cultural and biological evolution, including the human economy.
Ecosystem services
The ecosystems of planet Earth are coupled to human environments. Ecosystems regulate the global geophysical cycles of energy, climate, soil nutrients, and water that in turn support and grow natural capital (including the environmental, physiological, cognitive, cultural, and spiritual dimensions of life). Ultimately, every manufactured product in human environments comes from natural systems. Ecosystems are considered common-pool resources because ecosystems do not exclude beneficiaries and they can be depleted or degraded. For example, green space within communities provides sustainable health services that reduce mortality and regulate the spread of vector-borne disease. Research shows that people who are more engaged with and who have regular access to natural areas benefit from lower rates of diabetes, heart disease and psychological disorders. These ecological health services are regularly depleted through urban development projects that do not factor in the common-pool value of ecosystems.
The ecological commons delivers a diverse supply of community services that sustains the well-being of human society. The Millennium Ecosystem Assessment, an international UN initiative involving more than 1,360 experts worldwide, identifies four main ecosystem service types having 30 sub-categories stemming from natural capital. The ecological commons includes provisioning (e.g., food, raw materials, medicine, water supplies), regulating (e.g., climate, water, soil retention, flood retention), cultural (e.g., science and education, artistic, spiritual), and supporting (e.g., soil formation, nutrient cycling, water cycling) services.
Sixth mass extinction
Global assessments of biodiversity indicate that the current epoch, the Holocene (or Anthropocene), is a sixth mass extinction. Species loss is accelerating at 100–1,000 times faster than average background rates in the fossil record. The field of conservation biology involves ecologists who are researching, confronting, and searching for solutions to sustain the planet's ecosystems for future generations.
"Human activities are associated directly or indirectly with nearly every aspect of the current extinction spasm."
Nature is a resilient system. Ecosystems regenerate, withstand, and are forever adapting to fluctuating environments. Ecological resilience is an important conceptual framework in conservation management and it is defined as the preservation of biological relations in ecosystems that persevere and regenerate in response to disturbance over time.
However, persistent, systematic, large, and non-random disturbance caused by the niche-constructing behavior of human beings, including habitat conversion and land development, has pushed many of the Earth's ecosystems to the limits of their resilience thresholds. Three planetary boundaries have already been crossed: biodiversity loss, climate change, and the nitrogen cycle. These biophysical systems are ecologically interrelated and naturally resilient, but human civilization has transitioned the planet into an Anthropocene epoch, and the ecological state of the Earth is deteriorating rapidly, to the detriment of humanity. The world's fisheries and oceans, for example, are facing dire challenges as the threat of global collapse appears imminent, with serious ramifications for the well-being of humanity.
While the Anthropocene is yet to be classified as an official epoch, current evidence suggests that "an epoch-scale boundary has been crossed within the last two centuries." The ecology of the planet is further threatened by global warming, but investments in nature conservation can provide a regulatory feedback to store and regulate carbon and other greenhouse gases.
Ecological footprint
In 1992, William Rees developed the ecological footprint concept. The ecological footprint and its close analog, the water footprint, have become popular ways of accounting for the level of impact that human society is imparting on the Earth's ecosystems. All indications are that the human enterprise is unsustainable, as the footprint of society is placing too much stress on the ecology of the planet. The WWF's 2008 Living Planet Report and other researchers report that human civilization has exceeded the bio-regenerative capacity of the planet. This means that the footprint of human consumption is extracting more natural resources than can be replenished by ecosystems around the world.
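As a rough sketch of how footprint accounting signals overshoot, the following Python snippet compares a per-capita footprint with a per-capita biocapacity. The figures are placeholders chosen for illustration and are assumptions, not values taken from the WWF report.

    # Minimal sketch of ecological-footprint accounting (illustrative numbers only).
    footprint_per_capita = 2.7    # assumed demand, in global hectares per person
    biocapacity_per_capita = 1.8  # assumed supply, in global hectares per person

    overshoot_ratio = footprint_per_capita / biocapacity_per_capita
    print(f"Overshoot ratio: {overshoot_ratio:.2f}")

    # A ratio above 1 means consumption exceeds the biosphere's regenerative
    # capacity, the condition described in the text as exceeding the planet's
    # bio-regenerative capacity.
    if overshoot_ratio > 1:
        print("Footprint exceeds biocapacity: ecological overshoot.")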
Ecological economics
Ecological economics is an economic science that extends its methods of valuation onto nature in an effort to address the inequity between market growth and biodiversity loss. Natural capital is the stock of materials or information stored in biodiversity that generates services that can enhance the welfare of communities. In the accounting of ecosystem services, population losses are a more sensitive indicator of natural capital than species extinctions. The prospect for recovery in the economic crisis of nature is grim. Populations, such as those of local ponds and patches of forest, are being cleared away and lost at rates that exceed species extinctions. The mainstream growth-based economic system adopted by governments worldwide does not include a price or markets for natural capital. This type of economic system places further ecological debt onto future generations.
Human societies are increasingly being placed under stress as the ecological commons is diminished through an accounting system that has incorrectly assumed "... that nature is a fixed, indestructible capital asset." The current wave of threats, including massive extinction rates and the concurrent loss of natural capital to the detriment of human society, is happening rapidly. This is called a biodiversity crisis, because 50% of the world's species are predicted to go extinct within the next 50 years. Conventional monetary analyses are unable to detect or deal with these sorts of ecological problems. Multiple global ecological economic initiatives are being promoted to solve this problem. For example, governments of the G8 met in 2007 and set forth The Economics of Ecosystems and Biodiversity (TEEB) initiative:
In a global study we will initiate the process of analyzing the global economic benefit of biological diversity, the costs of the loss of biodiversity and the failure to take protective measures versus the costs of effective conservation.
The work of Kenneth E. Boulding is notable for building on the integration between ecology and its economic origins. Boulding drew parallels between ecology and economics, most generally in that they are both studies of individuals as members of a system, and indicated that the "household of man" and the "household of nature" could somehow be integrated to create a perspective of greater value.
Interdisciplinary approaches
Human ecology expands functionalism from ecology to the human mind. People's perception of a complex world is a function of their ability to comprehend beyond the immediate, both in time and in space. This concept manifested in the popular slogan promoting sustainability: "think global, act local." Moreover, people's conception of community stems not only from their physical location but also from their mental and emotional connections, and varies from "community as place, community as way of life, or community of collective action."
In the last century, the world has faced several challenges, including environmental degradation, public health issues, and climate change. Addressing these issues requires interdisciplinary and transdisciplinary interventions, allowing for a comprehensive understanding of the intricate connections between human societies and the environment. In its early years, human ecology was still deeply enmeshed in its respective disciplines: geography, sociology, anthropology, psychology, and economics. Scholars from the 1970s to the present have called for a greater integration among all of the scattered disciplines in which formal ecological research has been established.
In art
While some of the early writers considered how art fit into a human ecology, it was Sears who posed the idea that in the long run human ecology will in fact look more like art. Bill Carpenter (1986) calls human ecology the "possibility of an aesthetic science", renewing dialogue about how art fits into a human ecological perspective. According to Carpenter, human ecology as an aesthetic science counters the disciplinary fragmentation of knowledge by examining human consciousness.
In education
While the reputation of human ecology in institutions of higher learning is growing, human ecology is not taught at the primary or secondary education levels, with one notable exception: Syosset High School, in Long Island, New York. Educational theorist Sir Kenneth Robinson has called for diversification of education to promote creativity in academic and non-academic activities (i.e., educating the "whole being") to implement a "new conception of human ecology".
Bioregionalism and urban ecology
In the late 1960s, ecological concepts started to become integrated into the applied fields, namely architecture, landscape architecture, and planning. Ian McHarg called for a future in which all planning would be "human ecological planning" by default, always bound up in humans' relationships with their environments. He emphasized local, place-based planning that takes into consideration all the "layers" of information from geology to botany to zoology to cultural history. Proponents of the new urbanism movement, like James Howard Kunstler and Andres Duany, have embraced the term human ecology as a way to describe the problem of, and prescribe the solutions for, the landscapes and lifestyles of an automobile-oriented society. Duany has called the human ecology movement "the agenda for the years ahead." While McHargian planning is still widely respected, the landscape urbanism movement seeks a new understanding of human-environment relations. Among these theorists is Frederick Steiner, who published Human Ecology: Following Nature's Lead in 2002, which focuses on the relationships among landscape, culture, and planning. The work highlights the beauty of scientific inquiry by revealing those purely human dimensions which underlie our concepts of ecology. While Steiner discusses specific ecological settings, such as cityscapes and waterscapes, and the relationships between socio-cultural and environmental regions, he also takes a diverse approach to ecology, considering even the unique synthesis between ecology and political geography. Dieter Steiner's 2003 Human Ecology: Fragments of Anti-fragmentary View of the World is an important exposé of recent trends in human ecology. Part literature review, the book is divided into four sections: "human ecology", "the implicit and the explicit", "structuration", and "the regional dimension". Much of the work stresses the need for transdisciplinarity, antidualism, and wholeness of perspective.
Key journals
Ecology and Society
Human Ecology: An Interdisciplinary Journal
Human Ecology Review
Journal of Human Ecology and Sustainability
See also
Agroecology
Collaborative intelligence
College of the Atlantic
Contact zone
Ecological overshoot
Environmental anthropology
Environmental archaeology
Environmental communication
Environmental economics
Environmental racism
Ecology, especially human ecology
Environmental psychology
Environmental sociology
Ecological systems theory
Ecosemiotics
Family and consumer science
Green economy
Home economics
Human behavioral ecology
Human ecosystem
Industrial ecology
Integrated landscape management
Otium
Political ecology
Rural sociology
Sociobiology
Social ecology (theory)
Spome
Urie Bronfenbrenner
Ernest Burgess
John Paul Goode
Robert E. Park
Louis Wirth
Rights of nature
Anthropogenic metabolism
Anthroposphere
Collective consciousness
Scale (analytical tool)
Ecological civilization
References
Further reading
Cohen, J. 1995. How Many People Can the Earth Support? New York: Norton and Co.
Dyball, R. and Newell, B. 2015 Understanding Human Ecology: A Systems Approach to Sustainability London, England: Routledge.
Henderson, Kirsten, and Michel Loreau. "An ecological theory of changing human population dynamics." People and Nature 1.1 (2019): 31–43.
Eisenberg, E. 1998. The Ecology of Eden. New York: Knopf.
Hansson, L.O. and B. Jungen (eds.). 1992. Human Responsibility and Global Change. Göteborg, Sweden: University of Göteborg.
Hens, L., R.J. Borden, S. Suzuki and G. Caravello (eds.). 1998. Research in Human Ecology: An Interdisciplinary Overview. Brussels, Belgium: Vrije Universiteit Brussel (VUB) Press.
Marten, G.G. 2001. Human Ecology: Basic Concepts for Sustainable Development. Sterling, VA: Earthscan.
McDonnell, M.J. and S.T. Pickett. 1993. Humans as Components of Ecosystems: The Ecology of Subtle Human Effects and Populated Areas. New York: Springer-Verlag.
Miller, J.R., R.M. Lerner, L.B. Schiamberg and P.M. Anderson. 2003. Encyclopedia of Human Ecology. Santa Barbara, CA: ABC-CLIO.
Polunin, N. and J.H. Burnett. 1990. Maintenance of the Biosphere. (Proceedings of the 3rd International Conference on Environmental Future — ICEF). Edinburgh: University of Edinburgh Press.
Quinn, J.A. 1950. Human Ecology. New York: Prentice-Hall.
Sargent, F. (ed.). 1974. Human Ecology. New York: American Elsevier.
Suzuki, S., R.J. Borden and L. Hens (eds.). 1991. Human Ecology — Coming of Age: An International Overview. Brussels, Belgium: Vrije Universiteit Brussel (VUB) Press.
Tengstrom, E. 1985. Human Ecology — A New Discipline?: A Short Tentative Description of the Institutional and Intellectual History of Human Ecology. Göteborg, Sweden: Humanekologiska Skrifter.
Theodorson, G.A. 1961. Studies in Human Ecology. Evanston, IL: Row, Peterson and Co.
Wyrostkiewicz, M. 2013. "Human Ecology. An Outline of the Concept and the Relationship between Man and Nature". Lublin, Poland: Wydawnictwo KUL
Young, G.L. (ed.). 1989. Origins of Human Ecology. Stroudsburg, PA: Hutchinson Ross.
External links
Scientific drilling | Scientific drilling into the Earth is a way for scientists to probe the Earth's sediments, crust, and upper mantle. In addition to rock samples, drilling technology can unearth samples of connate fluids and of the subsurface biosphere, mostly microbial life, preserved in drilled samples. Scientific drilling is carried out on land by the International Continental Scientific Drilling Program (ICDP) and at sea by the Integrated Ocean Drilling Program (IODP). Scientific drilling on the continents includes drilling down into solid ground as well as drilling from small boats on lakes. Sampling thick glaciers and ice sheets to obtain ice cores is related but will not be described further here.
Like probes sent into outer space, scientific drilling is a technology used to obtain samples from places that people cannot reach. Human beings have descended as deep as 2,212 m (7,257 ft) in Veryovkina Cave, the world's deepest known cave, located in the Caucasus Mountains of the country of Georgia. Gold miners in South Africa regularly go deeper than 3,400 m, but no human has ever descended to greater depths than this below the Earth's solid surface. As depth increases into the Earth, temperature and pressure rise. Temperatures in the crust increase by about 15 °C per kilometer, making it impossible for humans to exist at depths greater than several kilometers, even if it were somehow possible to keep shafts open in spite of the tremendous pressure.
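The stated gradient of roughly 15 °C per kilometer lends itself to a simple back-of-the-envelope estimate. The Python sketch below assumes a 15 °C surface temperature and a strictly linear gradient, which real boreholes only approximate.

    # Rough estimate of crustal temperature at depth, assuming a linear
    # geothermal gradient of 15 °C/km and an assumed 15 °C surface temperature.
    SURFACE_TEMP_C = 15.0
    GRADIENT_C_PER_KM = 15.0

    def crust_temperature(depth_km: float) -> float:
        """Return the approximate temperature in degrees Celsius at the given depth."""
        return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

    # Depths roughly matching the caves, mines, and boreholes discussed here.
    for depth in (2.2, 3.4, 12.0):
        print(f"{depth:>5.1f} km: about {crust_temperature(depth):.0f} °C")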
Scientific drilling is interdisciplinary and international in scope. Individual scientists cannot generally undertake scientific drilling projects alone. Teamwork between scientists, engineers, and administrators is often required for success in planning and in carrying out a drilling project, analyzing the samples, and interpreting and publishing the results in scientific journals.
Purposes
Scientific drilling is used to address a wide range of problems that cannot be addressed using rocks exposed at the surface or on the seafloor. The Integrated Ocean Drilling Program has a broad set of research objectives, which can be divided into three principal themes:
The nature of the deep biosphere and the oceanic sub-seafloor
Understanding environmental change, processes and effects
Cycles and geodynamics of the solid Earth
ICDP focuses on scientific drilling to address the following questions about the history, chemistry, and physics of Earth and the biosphere:
What are the physical and chemical processes responsible for earthquakes and volcanic eruptions, and what are the best ways to minimize their effects?
How has Earth's climate changed in the recent past and what are the reasons for such changes?
What have been the effects of meteorite impacts (bolides) on climate and mass extinctions of life?
What is the nature of the deep biosphere and its relation to geologic processes such as hydrocarbon maturation, ore deposition and evolution of life on Earth?
What are the ways to safely dispose of radioactive and other toxic waste materials?
How do sedimentary basins and fossil fuel resources originate and evolve?
How do mineral, and metal ore deposits form?
What are the fundamental physics of plate tectonics and heat, mass, and fluid transfer through Earth's crust?
How can people better interpret geophysical data used to determine the structure and properties of Earth's crust?
Deepest drillings
The Kola Superdeep Borehole on the Kola Peninsula of Russia reached 12,262 m (40,230 ft) and is the deepest penetration of the Earth's solid surface. The German Continental Deep Drilling Program, at 9.1 km, has shown the Earth's crust to be mostly porous. Drillings as deep as 2,111 m into the seafloor were achieved at DSDP/ODP/IODP Hole 504B. Because the continental crust is about 45 km thick on average, whereas oceanic crust is 6–7 km thick, deep drillings have penetrated only the upper 25–30% of both crusts.
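The quoted penetration percentages follow from simple ratios of borehole depth to average crustal thickness. The Python sketch below uses the depths given in this section and the stated average thicknesses; the choice of 7 km for the oceanic crust is an assumption taken from the upper end of the quoted 6–7 km range.

    # Fraction of the crust penetrated by the deepest scientific boreholes,
    # using the depths and average crustal thicknesses quoted in this section.
    cases = {
        "continental (Kola Superdeep)": (12.262, 45.0),  # borehole km, crust km
        "oceanic (Hole 504B)": (2.111, 7.0),             # upper end of 6-7 km range
    }

    for name, (borehole_km, crust_km) in cases.items():
        fraction = borehole_km / crust_km
        print(f"{name}: about {fraction:.0%} of the crust penetrated")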
Ocean drilling
The drillship that has been used for more than 20 years, the JOIDES Resolution, drills without a riser. Riser-less drilling uses seawater as its primary drilling fluid, which is pumped down through the drill pipe. This cleans and cools the drill bit and lifts cuttings out of the hole, piling them in a cone around the hole. Japan's new drillship, the Chikyu, uses a riser for drilling. The riser system includes an outer casing that surrounds the drill pipe, to provide return-circulation of drilling fluid for maintaining pressure balance within the borehole. A blowout preventer (BOP) protects the vessel and the environment from any unexpected release of gas and oil. This technology is necessary for drilling several thousand meters into the Earth.
References
External links
Integrated Ocean Drilling Program official website
International Continental Scientific Drilling Program
USA initiative DOSECC (Drilling, Observation, and Sampling of the Earth's Continental Crust)
Ecofeminism | Ecofeminism is a branch of feminism and political ecology. Ecofeminist thinkers draw on the concept of gender to analyse the relationships between humans and the natural world. The term was coined by the French writer Françoise d'Eaubonne in her book (1974). Ecofeminist theory asserts a feminist perspective of Green politics that calls for an egalitarian, collaborative society in which there is no one dominant group. Today, there are several branches of ecofeminism, with varying approaches and analyses, including liberal ecofeminism, spiritual/cultural ecofeminism, and social/socialist ecofeminism (or materialist ecofeminism). Interpretations of ecofeminism and how it might be applied to social thought include ecofeminist art, social justice and political philosophy, religion, contemporary feminism, and poetry.
Ecofeminist analysis explores the connections between women and nature in culture, economy, religion, politics, literature and iconography, and addresses the parallels between the oppression of nature and the oppression of women. These parallels include, but are not limited to, seeing women and nature as property, seeing men as the curators of culture and women as the curators of nature, and how men dominate women and humans dominate nature. Ecofeminism emphasizes that both women and nature must be respected.
Though the scope of ecofeminist analysis is dynamic, American author and ecofeminist Charlene Spretnak has offered one way of categorizing ecofeminist work: 1) through the study of political theory as well as history; 2) through the belief and study of nature-based religions; 3) through environmentalism.
Overview
While diverse ecofeminist perspectives have emerged from female activists and thinkers all over the world, academic studies of ecofeminism have been dominated by North American universities. Thus, in the 1993 essay entitled "Ecofeminism: Toward Global Justice and Planetary Health", authors Greta Gaard and Lori Gruen outline what they call the "ecofeminist framework". The essay provides a wealth of data and statistics in addition to outlining the theoretical aspects of the ecofeminist critique. The framework was intended to establish ways of viewing and understanding our current global situations so that we can better understand how we arrived at this point and what may be done to ameliorate the ills.
Building on the work of North American scholars Rosemary Ruether and Carolyn Merchant, Gaard and Gruen argue that there are four sides to this framework:
The mechanistic materialist model of the universe that resulted from the scientific revolution and the subsequent reduction of all things into mere resources to be optimized, dead inert matter to be used.
The rise of patriarchal religions and their establishment of gender hierarchies along with their denial of immanent divinity.
The self and other dualisms and the inherent power and domination ethic it entails.
Capitalism and its claimed intrinsic need for the exploitation, destruction and instrumentalization of animals, earth and people for the sole purpose of creating wealth.
They hold that these four factors have brought us to what ecofeminists see as a "separation between nature and culture" that is for them the root source of our planetary ills.
Ecofeminism developed out of anarcha-feminist concerns with abolishing all forms of domination, while focusing on the oppressive nature of humanity's relationship to the natural world. According to Françoise d'Eaubonne in her book Le Féminisme ou la Mort (1974), ecofeminism relates the oppression and domination of all marginalized groups (women, people of color, children, the poor) to the oppression and domination of nature (animals, land, water, air, etc.). In the book, the author argues that oppression, domination, exploitation, and colonization from the Western patriarchal society has directly caused irreversible environmental damage. Françoise d'Eaubonne was an activist and organizer, and her writing encouraged the eradication of all social injustice, not just injustice against women and the environment.
This tradition includes a number of influential texts including: Women and Nature (Susan Griffin 1978), The Death of Nature (Carolyn Merchant 1980) and Gyn/Ecology (Mary Daly 1978). These texts helped to propel the association between domination by men of women and the domination of culture over nature. From these texts feminist activism of the 1980s linked ideas of ecology and the environment. Movements such as the National Toxics Campaign, Mothers of East Los Angeles (MELA), and Native Americans for a Clean Environment (NACE) were led by women devoted to issues of human health and environmental justice. Writings in this circle discussed ecofeminism drawing from Green Party politics, peace movements, and direct action movements.
Gendering nature
Ecofeminist theory asserts that capitalism reflects only paternalistic and patriarchal values. This notion implies that the effects of capitalism have not benefited women and have led to a harmful split between nature and culture. In the 1970s, early ecofeminists argued that the split could only be healed by the feminine instinct for nurture and holistic knowledge of nature's processes.
Since then, several ecofeminist scholars have made the distinction that it is not because women are female or "feminine" that they relate to nature, but because of their similar states of oppression by the same male-dominant forces. The marginalization is evident in the gendered language used to describe nature, such as "Mother Earth" or "Mother Nature", and the animalized language used to describe women in derogatory terms. Some discourses link women specifically to the environment because of their traditional social role as a nurturer and caregiver. Ecofeminists following in this line of thought believe that these connections are illustrated through the coherence of socially-labeled values associated with 'femininity' such as nurturing, which are present both among women and in nature.
Alternatively, ecofeminist and activist Vandana Shiva wrote that women have a special connection to the environment through their daily interactions and that this connection has been underestimated. According to Shiva, women in subsistence economies who produce "wealth in partnership with nature, have been experts in their own right of holistic and ecological knowledge of nature's processes". She makes the point that "these alternative modes of knowing, which are oriented to the social benefits and sustenance needs are not recognized by the capitalist reductionist paradigm, because it fails to perceive the interconnectedness of nature, or the connection of women's lives, work and knowledge with the creation of wealth (23)". Shiva blames this failure on the Western patriarchal perceptions of development and progress. According to Shiva, patriarchy has labeled women, nature, and other groups not growing the economy as "unproductive". Similarly, Australian ecofeminist Ariel Salleh deepens this materialist ecofeminist approach in dialogue with green politics, ecosocialism, genetic engineering and climate policy.
Concepts
Modern science and ecofeminism
In Ecofeminism (1993) authors Vandana Shiva and Maria Mies ponder modern science and its acceptance as a universal and value-free system. They view the dominant stream of modern science not as objective science but as a projection of Western men's values. The privilege of determining what is considered scientific knowledge and its usage has been controlled by men, and for the most part of history restricted to men. Many examples exist, including the medicalization of childbirth and the industrialization of plant reproduction.
A common claim within ecofeminist literature is that patriarchal structures justify their dominance through binary opposition; these include but are not limited to: heaven/earth, mind/body, male/female, human/animal, spirit/matter, culture/nature and white/non-white. Oppression, according to them, is reinforced by assuming truth in these binaries, whose factuality they challenge, and by instilling them as 'marvelous to behold' through what they consider to be religious and scientific constructs.
Vegetarian ecofeminism
The application of ecofeminism to animal rights has established vegetarian ecofeminism, which asserts that "omitting the oppression of animals from feminist and ecofeminist analyses … is inconsistent with the activist and philosophical foundations of both feminism (as a "movement to end all forms of oppression") and ecofeminism." It puts into practice "the personal is political", as many ecofeminists believe that "meat-eating is a form of patriarchal domination…that suggests a link between male violence and a meat-based diet." During a 1995 interview with On the Issues, Carol J. Adams stated, "Manhood is constructed in our culture in part by access to meat-eating and control of other bodies, whether it's women or animals". According to Adams, "We cannot work for justice and challenge the oppression of nature without understanding that the most frequent way we interact with nature is by eating animals". Vegetarian ecofeminism combines sympathy with the analysis of culture and politics to refine a system of ethics and action.
Materialist ecofeminism
The key activist-scholars in materialist ecofeminism are Maria Mies and Veronika Bennholdt-Thomsen in Germany; Vandana Shiva in India; Ariel Salleh in Australia; Mary Mellor in the UK; and Ana Isla in Peru. Materialist ecofeminism is not widely known in North America aside from the journal collective at Capitalism Nature Socialism. A materialist view connects institutions such as labor, power, and property as the source of domination over women and nature. There are connections made between these subjects because of the values of production and reproduction. This dimension of ecofeminism may also be referred to as "social feminism", "socialist ecofeminism", or "Marxist ecofeminism". According to Carolyn Merchant, "Social ecofeminism advocates the liberation of women through overturning economic and social hierarchies that turn all aspects of life into a market society that today even invades the womb". Ecofeminism in this sense seeks to eliminate social hierarchies which favor the production of commodities (dominated by men) over biological and social reproduction.
Spiritual and cultural ecofeminism
Spiritual ecofeminism is another branch of ecofeminism, and it is popular among ecofeminist authors such as Starhawk, Riane Eisler, and Carol J. Adams. Starhawk calls this an earth-based spirituality, which recognizes that the Earth is alive, and that we are an interconnected community. Spiritual ecofeminism is not linked to one specific religion, but is centered around values of caring, compassion, and non-violence. Often, ecofeminists refer to more ancient traditions, such as the worship of Gaia, the Goddess of nature and spirituality (also known as Mother Earth). Wicca and Paganism are particularly influential to spiritual ecofeminism. Most Wicca covens demonstrate a deep respect for nature, a feminine outlook, and an aim to establish strong community values.
In her book Radical Ecology, Carolyn Merchant refers to spiritual ecofeminism as "cultural ecofeminism". According to Merchant, cultural ecofeminism, "celebrates the relationship between women and nature through the revival of ancient rituals centered on goddess worship, the moon, animals, and the female reproductive system." In this sense, cultural ecofeminists tend to value intuition, an ethic of caring, and human-nature interrelationships.
Environmental movements
Susan A. Mann, an eco-feminist and professor of sociological and feminist theory, considers the roles women played in early environmental activism to be the starting point for ecofeminism in later decades. Mann associates the beginning of ecofeminism not with feminists but with women of different races and class backgrounds who made connections among gender, race, class, and environmental issues. This ideal is upheld through the notion that in activist and theory circles marginalized groups must be included in the discussion. In early environmental and women's movements, issues of varying races and classes were often separated.
Beginning in the late 20th century, women worked in efforts to protect wildlife, food, air and water. These efforts depended largely on new developments in the environmental movement from influential writers, such as Henry David Thoreau, Aldo Leopold, John Muir, and Rachel Carson. Fundamental examples of women's efforts in the 20th century are the books Silent Spring by Rachel Carson and Refuge by Terry Tempest Williams.
Ecofeminist author Karen Warren lists Aldo Leopold's essay "Land Ethic" (1949) as a fundamental work to the ecofeminist conception, as Leopold was the first to pen an ethic for the land which understands all non-human parts of that community (animals, plants, land, air, water) as equal to and in a relationship with humans. This inclusive understanding of the environment launched the modern preservation movement and illustrated how issues can be viewed through a framework of caring.
Women have participated in environmental movements, specifically preservation and conservation beginning in the late nineteenth century and continuing into the early twentieth century.
Movements of the 1970s and 80s
In India, in the state of Uttarakhand in 1973, women took part in the Chipko movement to protect forests from deforestation. Many men during this time were moving to cities in search of work, and women that stayed in the rural parts of India were reliant on the forests for subsistence. Non-violent protest tactics were used to occupy trees so that loggers could not cut them down.
In Kenya in 1977, the Green Belt Movement was initiated by environmental and political activist Professor Wangari Maathai. It is a rural tree planting program led by women, which Maathai designed to help prevent desertification in the area. The program created a 'green belt' of at least 1,000 trees around villages, and gave participants the ability to take charge in their communities. In later years, the Green Belt Movement was an advocate for informing and empowering citizens through seminars for civic and environmental education, as well as holding national leaders accountable for their actions and instilling agency in citizens. The work of the Green Belt Movement continues today.
In 1978 in New York, mother and environmentalist Lois Gibbs led her community in protest after discovering that their entire neighborhood, Love Canal, was built on top of a toxic dump site. The toxins in the ground were causing illness among children and reproductive issues among women, as well as birth defects in babies born to pregnant women exposed to the toxins. The Love Canal movement eventually led to the evacuation and relocation of nearly 800 families by the federal government.
In 1980 and 1981, women like ecofeminist Ynestra King organized a peaceful protest at the Pentagon. Women stood, hand in hand, demanding equal rights (including social, economic, and reproductive rights) as well as an end to militaristic actions taken by the government and exploitation of the community (people and the environment). This movement is known as the Women's Pentagon Actions.
In 1985, the Akwesasne Mother's Milk Project was launched by Katsi Cook. This study was funded by the government, and investigated how the higher level of contaminants in water near the Mohawk reservation impacted babies. It revealed that through breast milk, Mohawk children were being exposed to 200% more toxins than children not on the reservation. Toxins contaminate water all over the world, but due to environmental racism, certain marginalized groups are exposed to a much higher amount.
The Greening of Harlem Coalition is another example of an ecofeminist movement. In 1989, Bernadette Cozart founded the coalition, which is responsible for many urban gardens around Harlem. Cozart's goal is to turn vacant lots into community gardens. This is economically beneficial, and also provides a way for very urban communities to be in touch with nature and each other. The majority of people interested in this project (as noted in 1990) were women. Through these gardens, they were able to participate in and become leaders of their communities. Urban greening exists in other places as well. Beginning in 1994, a group of African-American women in Detroit have developed city gardens, and call themselves the Gardening Angels. Similar garden movements have occurred globally.
The development of vegetarian ecofeminism can be traced to the mid-1980s and 1990s, when it first appeared in writing. However, the roots of a vegetarian ecofeminist view can be traced back further, to the sympathy for non-humans and the counterculture movements of the 1960s and 1970s. By the end of the decade, ecofeminism had spread to both coasts and articulated an intersectional analysis of women and the environment, eventually challenging ideas of environmental classism and racism and resisting toxic dumping and other threats to the impoverished.
Major critiques
Accused essentialism
In the 1980s and 1990s ecofeminism began to be heavily critiqued as 'essentialist'. Critics believed ecofeminism reinforced patriarchal dominance and norms. Post-structural and third-wave feminists argued that ecofeminism equated women with nature and that this dichotomy grouped all women into one category, enforcing the very societal norms that feminism is trying to break.
The ascribed essentialism appears in two main areas:
Ecofeminism demonstrates an adherence to the strict dichotomy, among others, between men and women. Some critiques of ecofeminism note that the dichotomy between women and men and nature and culture creates a dualism that is too stringent and focused on the differences of women and men. In this sense, ecofeminism too strongly correlates the social status of women with the social status of nature, rather than the non-essentialist view that women along with nature have both feminine and masculine qualities, and that just as feminine qualities have often been seen as less worthy, nature is also seen as having lesser value than culture.
Ecofeminism asserts a divergent view regarding participation in existing social structures. As opposed to radical and liberation-based feminist movements, mainstream feminism is tightly bound with hegemonic social status and strives to promote equality within the existing social and political structure, such as making it possible for women to occupy positions of power in business, industry and politics, using direct involvement as the main tactic for achieving pay equity and influence. In contrast, many ecofeminists oppose active engagement in these areas, as these are the very structures that the movement intends to dismantle.
Ecofeminist and author Noel Sturgeon says in an interview that what anti-essentialists are critiquing is a strategy used to mobilize large and diverse groups of both theorists and activists. Additionally, according to ecofeminist and author Charlene Spretnak, modern ecofeminism is concerned about a variety of issues, including reproductive technology, equal pay and equal rights, toxic pollution, Third World development, and more.
As the field moved into the 21st century, ecofeminists became aware of these criticisms, and in response they began doing research and renaming the topic, e.g. queer ecologies, global feminist environmental justice, and gender and the environment. The essentialism concern was mostly found among North American academics. In Europe and the global South, class, race, gender and species dominations were framed by more grounded materialist understandings.
Socialist feminist critiques
Social ecologist and feminist Janet Biehl has criticized ecofeminism for focusing too much on a mystical connection between women and nature and not enough on the actual conditions of women. She has also stated that rather than being a forward-moving theory, ecofeminism is an anti-progressive movement for women. The ecofeminist believes that women and nature have a strong bond because of their shared history of patriarchal oppression, whereas the socialist feminist focuses on gender roles in the political economy. The socialist feminist may oppose the ecofeminist by arguing that women do not have an intrinsic connection with nature; rather, that is a socially constructed narrative.
Rosemary Radford Ruether also critiqued this focus on mysticism over work that focuses on helping women, but argues that spirituality and activism can be combined effectively in ecofeminism.
(Anti-essentialist) Intersectional ecofeminisms
A. E. Kings as well as Norie Ross Singer theorize that ecofeminism must be approached from an intersectionality perspective, and advance an anti-essentialist critical ecofeminism of difference accounting for how multiple axes of identity such as gender, race, and class variously intermesh in human-nonhuman relationships. Kings argues that the discipline is fundamentally intersectional given that it is built upon the idea that patriarchal violence against women is connected to domination of nature. Simultaneously, Kings warns against the presumption of intersectional thought as a natural component of ecofeminism, so as not to disregard the distinctive academic contributions of intersectional feminists.
Feminist thought surrounding ecofeminism grew in some areas as it was criticized: vegetarian ecofeminism contributed intersectional analysis, and ecofeminisms that analyzed animal rights, labor rights and activisms drew lines among oppressed groups. To some, the inclusion of non-human animals also came to be viewed as essentialist.
Ableism and white saviorism
Environmental movements have often been criticized for their lack of consideration for the participation of people with disabilities. Although environmental justice and feminist care ethics have made political pushes for participation of marginalized groups, people with disabilities face issues of access and representation in policy making. In a paper by author Andrew Charles, Deaf people in Wales show concern about their quality of life when unable to safely access outdoor spaces and engage in political movements. There is also an overt nurturing aspect of essentialist ecofeminism that is potentially both oppressive and patronizing to marginalized groups. Through a bioessentialist and matriarchal lens, ecofeminism can create environments where activists may speak for underrepresented groups they aren't a part of or participate in volunteer tourism. This form of radical white savior complex is not unique to ecofeminism, but in any intersectional space it has potential to disrupt self advocacy of marginalized groups.
Wild animal suffering
Catia Faria argues that the view held by ecofeminists that the largest source of harm to non-human animals in the wild is patriarchal culture and that the conservation of nature and natural processes is the best way to help these individuals is mistaken. She instead contends that natural processes are a source of immense suffering for these animals and that we should work towards alleviating the harms they experience, as well as eliminating patriarchal sources of harm, such as hunting.
Theorists
Judi Bari – Bari was a principal organizer of the Earth First! movement and experienced sexist hostility.
Françoise d'Eaubonne – Called upon women to lead an ecological revolution in order to save the planet. This entailed revolutionizing gender relations and human relations with the natural world.
Greta Gaard – Greta Gaard is an American ecofeminist scholar and activist. Her major contributions to the field connect ideas of queer theory, vegetarianism, and animal liberation. Her major theories include ecocriticism which works to include literary criticism and composition to inform ecofeminism and other feminist theories to address a wider range of social issues within ecofeminism. She is an ecological activist and leader in the U.S. Green Party, and the Green Movement.
Susan Griffin - A radical feminist philosopher, essayist and playwright particularly known for her innovative, hybrid-form ecofeminist works. A Californian, she taught as an adjunct professor at UC Berkeley as well as at Stanford University and California Institute of Integral Studies.
Sallie McFague – A prominent ecofeminist theologian, McFague uses the metaphor of God's body to represent the universe at large. This metaphor values inclusive, mutualistic and interdependent relations amongst all things.
Carolyn Merchant – Historian of science who taught at University of California, Berkeley for many years. Her book The Death of Nature: Women, Ecology and the Scientific Revolution is a classic ecofeminist text.
Mary Mellor – UK sociologist who moved to ecofeminist ideas from an interest in cooperatives. Her books Breaking the Boundaries and Feminism and Ecology are grounded in a materialist analysis.
Maria Mies – Mies is a German social critic who has been involved in feminist work throughout Europe and India. She works particularly on the intersections of patriarchy, poverty, and the environment on a local and global scale.
Adrian Parr – A cultural and environmental theorist. She has published eight books and numerous articles on environmental activism, feminist new materialism, and imagination. Most notable is her trilogy – Hijacking Sustainability, The Wrath of Capital, and Birth of a New Earth.
Val Plumwood – Val Plumwood, formerly Val Routley, was an Australian ecofeminist intellectual and activist, who was prominent in the development of radical ecosophy from the early 1970s through the remainder of the 20th century. In her work Feminism and the Mastery of Nature she describes the relationship of mankind and the environment relating to an eco-feminist ideology.
Alicia Puleo – The author of several books and articles on ecofeminism and gender inequality, Alicia Puleo has been characterized as "arguably Spain's most prominent explicator-philosopher of the worldwide movement or theoretical orientation known as ecofeminism."
Rosemary Radford Ruether – Has written 36 books and over 600 articles exploring the intersections of feminism, theology, and creation care. Ruether was the first person to connect the domination of the earth with the oppression of women.
Ariel Salleh – Australian ecofeminist with a global perspective; a founding editor of the journal Capitalism Nature Socialism; author of three books and some 200 articles examining links with deep and social ecology, green politics and eco-socialism.
Vandana Shiva – Shiva is a scientist by training, prolific author and Indian ecofeminist activist. She was a participant in the Chipko movement of the 1970s, which used non-violent activism to protest and prevent deforestation in the Garhwal Himalayas of Uttarakhand, India, then in Uttar Pradesh. Her fight against genetically modified organisms (GMOs) (together with the fights led by Rachel Carson against DDT and Erin Brockovich against hexavalent chromium) has been described as an example of ecofeminist position.
Charlene Spretnak – Spretnak is an American writer largely known for her writing on ecology, politics and spirituality. Through these writings Spretnak has become a prominent ecofeminist. She has written many books which discuss ecological issues in terms of effects with social criticisms, including feminism. Spretnak's works had a major influence in the development of the Green Party. She has also won awards based on her visions on ecology and social issues as well as feminist thinking.
Starhawk – An American writer and activist, Starhawk is known for her work in spiritualism and ecofeminism. She advocates for social justice in issues surrounding nature and spirit. These social justice issues fall under the scope of feminism and ecofeminism. She believes in fighting oppression through intersectionality and the importance of spirituality, eco consciousness and sexual and gender liberation.
Vanessa Lemgruber – Lemgruber is a Brazilian lawyer, writer, activist, and ecofeminist. She defends the Doce river in Brazil and advocates for water quality and zero waste movements.
Douglas Vakoch – An American ecocritic whose edited volumes include Ecofeminism and Rhetoric: Critical Perspectives on Sex, Technology, and Discourse (2011), Feminist Ecocriticism: Environment, Women, and Literature (2012), Dystopias and Utopias on Earth and Beyond: Feminist Ecocriticism of Science Fiction (2021), Ecofeminist Science Fiction: International Perspectives on Gender, Ecology, and Literature (2021), The Routledge Handbook of Ecofeminism and Literature (2023), (with Nicole Anae) Indian Feminist Ecocriticism (2022), and (with Sam Mickey) Ecofeminism in Dialogue (2018), Literature and Ecofeminism: Intersectional and International Voices (2018), and Women and Nature?: Beyond Dualism in Gender, Body, and Environment (2018).
Karen J. Warren – Warren received her B.A. in philosophy from the University of Minnesota (1970) and her Ph.D. from the University of Massachusetts-Amherst in 1978. Before her long tenure at Macalester College, which began in 1985, Warren was Professor of Philosophy at St. Olaf College in the early 1980s. Warren was the Ecofeminist-Scholar-in-Residence at Murdoch University in Australia. In 2003, she served as an Oxford University Round Table Scholar and as Women's Chair in Humanistic Studies at Marquette University in 2004. She has spoken widely on environmental issues, feminism, critical thinking skills and peace studies in many international locations including Buenos Aires, Gothenburg, Helsinki, Oslo, Manitoba, Melbourne, Moscow, Perth, the U.N. Earth Summit in Rio de Janeiro (1992), and San Jose.
Laura Wright — Wright proposed Vegan studies as an academic discipline.
See also
Chipko movement
Climate change and gender
Cottagecore
Critical animal studies
Cultural feminism
Deep ecology
Deep Green Resistance
Ecofeminist art
Green syndicalism
Intersectionality
List of ecofeminist authors
Queer ecology
Romanticism
Sexecology
Social ecology
Vegan studies
Vegetarian ecofeminism
Women and the environment through history
References
Further reading
Key works
Ancient Futures: Learning from Ladakh, by Helena Norberg-Hodge
The Body of God by Sallie McFague
The Chalice & The Blade: Our History, Our Future, by Riane Eisler
The Death of Nature: Women, Ecology, and the Scientific Revolution by Carolyn Merchant
Ecofeminism by Maria Mies and Vandana Shiva
Ecofeminism in Latin America by Mary Judith Ross
Ecofeminist Philosophy by Karen J. Warren
Environmental Culture by Val Plumwood
Feminism and the Mastery of Nature by Val Plumwood
Gaia & God: An Ecofeminist Theology of Earth Healing by Rosemary Radford Ruether
Integrating Ecofeminism, Globalization, and World Religions by Rosemary Radford Ruether
Neither Man Nor Beast by Carol J. Adams
Refuge: An Unnatural History of Family and Place by Terry Tempest Williams
The Resurgence of the Real: Body, Nature, and Place in a Hypermodern World by Charlene Spretnak
Sacred Longings: Ecofeminist theology and Globalization by Mary Grey
The Sexual Politics of Meat by Carol J. Adams
Silent Spring by Rachel Carson
The Spiral Dance by Starhawk
Staying Alive: Women, Ecology and Development by Vandana Shiva
Thinking Green! Essays on Environmentalism, Feminism, and Nonviolence by Petra Kelly
Tomorrow's Biodiversity by Vandana Shiva
Woman and Nature: The Roaring Inside Her by Susan Griffin
Breaking the Boundaries by Mary Mellor
Feminism and Ecology by Mary Mellor
Ecofeminism as Politics: nature, Marx, and the postmodern by Ariel Salleh
The Greening of Costa Rica by Ana Isla
Anthologies
Animals and Women: Feminist Theoretical Explorations, edited by Carol J. Adams and Josephine Donovan
Dystopias and Utopias on Earth and Beyond: Feminist Ecocriticism of Science Fiction, edited by Douglas A. Vakoch
Ecofeminism: Women, Animals, Nature, edited by Greta Gaard
Ecofeminism: Women, Culture, Nature, edited by Karen J. Warren with editorial assistance from Nisvan Erkal
EcoFeminism & Globalization: exploring culture, context and religion, edited by Heather Eaton & Lois Ann Lorentzen
Ecofeminism and Rhetoric: Critical Perspectives on Sex, Technology, and Discourse, edited by Douglas A. Vakoch
Ecofeminism and the Sacred, edited by Carol J. Adams
Ecofeminism in Dialogue, edited by Douglas A. Vakoch and Sam Mickey
Ecofeminist Science Fiction: International Perspectives on Gender, Ecology, and Literature, edited by Douglas A. Vakoch
Eco-Sufficiency & Global Justice: Women write Political Ecology, edited by Ariel Salleh
Feminist Ecocriticism: Environment, Women, and Literature, edited by Douglas A. Vakoch
Indian Feminist Ecocriticism, edited by Douglas A. Vakoch and Nicole Anae
Literature and Ecofeminism: Intersectional and International Voices, edited by Douglas A. Vakoch and Sam Mickey
The Politics of Women's Spirituality: Essays on the Rise of Spiritual Power within the Feminist Movement, edited by Charlene Spretnak
Readings in Ecology and Feminist Theology, edited by Mary Heather MacKinnon and Moni McIntyre
Reclaim the Earth, edited by Leonie Caldecott & Stephanie Leland
Reweaving the World: The Emergence of Ecofeminism, edited by Irene Diamond and Gloria Feman Orenstein
The Routledge Handbook of Ecofeminism and Literature, edited by Douglas A. Vakoch
Women and Nature?: Beyond Dualism in Gender, Body, and Environment, edited by Douglas A. Vakoch and Sam Mickey
Women Healing Earth: Third World Women on Ecology, Feminism, and Religion, edited by Rosemary Radford Ruether
GUIA ECOFEMINISTA - mulheres, direito, ecologia, written by Vanessa Lemgruber edited by Ape'Ku
Journal articles
Mann, Susan A. 2011. Pioneers of U.S. Ecofeminism and Environmental Justice, "Feminist Formations" 23(2): 1-25.
Salleh, Ariel (1984) 'From Feminism to Ecology', Social Alternatives, Vol. 4, No. 3, 8–12.
Salleh, Ariel (2019) 'Ecofeminist Sociology as a New Class Analysis' in Klaus Dorre and Brigitte Aulenbacher (eds.), Global Dialogue, International Sociological Association Newsletter: Vol. 9, No. 1.
Fiction
A Door Into Ocean by Joan Slonczewski
Always Coming Home by Ursula K. Le Guin
Buffalo Gals, Won't You Come Out Tonight by Ursula K. Le Guin
The Fifth Sacred Thing by Starhawk
The Gate to Women's Country by Sheri S. Tepper
The Holdfast Chronicles by Suzy McKee Charnas
The Madonna Secret by Sophie Strand
Native Tongue by Suzette Haden Elgin
The Parable of the Sower by Octavia Butler
Prodigal Summer by Barbara Kingsolver
Surfacing by Margaret Atwood
The Wanderground by Sally Miller Gearhart
Woman on the Edge of Time by Marge Piercy
The Kin of Ata are Waiting for You by Dorothy Bryant
Bear by Marian Engel
The Temple of My Familiar by Alice Walker
Sultana's Dream by Begum Rokeya Sakhawat Hossain
Poetry
The Sea of Affliction (1987, reprinted 2010) by Rosemarie Rowley
External links
Ecofeminism: Toward global justice and planetary health Feminist Greta Gaard and Lori Gruen's ecofeminist framework
"An Ecology of Knowledge: Feminism, Ecology and the Science and Religion Discourse" Metanexus Institute by Lisa Stenmark
"Ecofeminism and the Democracy of Creation" by Catherine Keller (2005); cf. Carol P. Christ, "Ecofeminism", in Michel Weber and Will Desmond (eds.), Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster, ontos verlag, 2008, pp. 87–98.
"Toward a Queer Ecofeminism" by Greta Gaard
Feminism and ecology: the same struggle? – The shaping of ecofeminism by Marijke Colle
Feminist Environmental Philosophy by Karen Warren
What is Ecofeminism? Perlego Books
1974 neologisms
Articles containing video clips
Environmental humanities
Environmental movements
Environmental social science concepts
Environmentalism
Feminism and health
Feminism and history
Feminist movements and ideologies
Feminist theory
Green politics
Left-wing politics
Liberalism
Political ecology
Progressivism
Relational ethics
Social justice | 0.774993 | 0.995783 | 0.771725 |
Biological system | A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism.
Organ and tissue systems
These specific systems are widely studied in human anatomy and are also present in many other animals.
Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm.
Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus.
Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels.
Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine.
Integumentary system: skin, hair, fat, and nails.
Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons.
Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands.
Exocrine system: various functions including lubrication and protection by exocrine glands such sweat glands, mucous glands, lacrimal glands and mammary glands
Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies.
Immune system: protects the organism from foreign bodies.
Nervous system: collecting, transferring and processing information with brain, spinal cord, peripheral nervous system and sense organs.
Sensory systems: visual system, auditory system, olfactory system, gustatory system, somatosensory system, vestibular system.
Muscular system: allows for manipulation of the environment, provides locomotion, maintains posture, and produces heat. Includes skeletal muscles, smooth muscles and cardiac muscle.
Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles and prostate.
History
The notion of system (or apparatus) relies upon the concept of vital or organic function: a system is a set of organs with a definite function. This idea was already present in Antiquity (Galen, Aristotle), but the application of the term "system" is more recent. For example, the nervous system was named by Monro (1783), but Rufus of Ephesus (c. 90–120) had clearly viewed, for the first time, the brain, spinal cord, and craniospinal nerves as an anatomical unit, although he wrote little about its function and gave no name to this unit.
The enumeration of the principal functions - and consequently of the systems - has remained almost the same since Antiquity, but the classification of them has varied considerably, e.g., compare Aristotle, Bichat, Cuvier.
The notion of physiological division of labor, introduced in the 1820s by the French physiologist Henri Milne-Edwards, allowed scientists to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (called by him appareils).
Cellular organelle systems
The exact components of a cell are determined by whether the cell is a eukaryote or prokaryote.
Nucleus (eukaryotic only): storage of genetic material; control center of the cell.
Cytosol: component of the cytoplasm consisting of the jelly-like fluid in which organelles are suspended
Cell membrane (plasma membrane): selectively permeable boundary that separates the cell interior from its environment and regulates the passage of substances into and out of the cell
Endoplasmic reticulum: outer part of the nuclear envelope forming a continuous channel used for transportation; consists of the rough endoplasmic reticulum and the smooth endoplasmic reticulum
Rough endoplasmic reticulum (RER): considered "rough" due to the ribosomes attached to the channeling; made up of cisternae that allow for protein production
Smooth endoplasmic reticulum (SER): storage and synthesis of lipids and steroid hormones as well as detoxification
Ribosome: site of biological protein synthesis, essential for internal activity; this function cannot be reproduced by other organelles
Mitochondrion (mitochondria): powerhouse of the cell; site of cellular respiration producing ATP (adenosine triphosphate)
Lysosome: center of breakdown for unwanted/unneeded material within the cell
Peroxisome: breaks down toxic materials using its contained digestive enzymes, for example degrading H2O2 (hydrogen peroxide)
Golgi apparatus (eukaryotic only): folded network involved in modification, transport, and secretion
Chloroplast: site of photosynthesis; storage of chlorophyll
See also
Biological network
Artificial life
Biological systems engineering
Evolutionary systems
Organ system
Systems biology
Systems ecology
Systems theory
External links
Systems Biology: An Overview by Mario Jardon: A review from the Science Creative Quarterly, 2005.
Synthesis and Analysis of a Biological System, by Hiroyuki Kurata, 1999.
It from bit and fit from bit. On the origin and impact of information in the average evolution. Includes how life forms and biological systems originate and from there evolve to become more and more complex, including evolution of genes and memes, into the complex memetics from organisations and multinational corporations and a "global brain", (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
Schmidt-Rhaesa, A. 2007. The Evolution of Organ Systems. Oxford University Press, Oxford, .
References
Biological systems | 0.776692 | 0.993594 | 0.771717 |
Biometrics | Biometrics are body measurements and calculations related to human characteristics and features. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance.
Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological characteristics which are related to the shape of the body. Examples include, but are not limited to, fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina, odor/scent, voice, shape of ears and gait. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to mouse movement, typing rhythm, gait, signature, voice, and behavioral profiling. Some researchers have coined the term behaviometrics (behavioral biometrics) to describe the latter class of biometrics.
More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns.
Biometric functionality
Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven such factors to be used when assessing the suitability of any trait for use in biometric authentication. Biometric authentication is based upon biometric recognition, which is an advanced method of recognising biological and behavioural characteristics of an individual.
Universality means that every person using a system should possess the trait.
Uniqueness means the trait should be sufficiently different for individuals in the relevant population such that they can be distinguished from one another.
Permanence relates to the manner in which a trait varies over time. More specifically, a trait with good permanence will be reasonably invariant over time with respect to the specific matching algorithm.
Measurability (collectability) relates to the ease of acquisition or measurement of the trait. In addition, acquired data should be in a form that permits subsequent processing and extraction of the relevant feature sets.
Performance relates to the accuracy, speed, and robustness of technology used (see performance section for more details).
Acceptability relates to how well individuals in the relevant population accept the technology such that they are willing to have their biometric trait captured and assessed.
Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute.
Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application.
The block diagram illustrates the two basic modes of a biometric system. First, in verification (or authentication) mode the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be. Three steps are involved in the verification of a person. In the first step, reference models for all the users are generated and stored in the model database. In the second step, some samples are matched with reference models to generate the genuine and impostor scores and calculate the threshold. The third step is the testing step. This process may use a smart card, username, or ID number (e.g. PIN) to indicate which template should be used for comparison. Positive recognition is a common use of the verification mode, "where the aim is to prevent multiple people from using the same identity".
Second, in identification mode the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for positive recognition (so that the user does not have to provide any information about the template to be used) or for negative recognition of the person "where the system establishes whether the person is who she (implicitly or explicitly) denies to be". The latter function can only be achieved through biometrics since other methods of personal recognition, such as passwords, PINs, or keys, are ineffective.
The first time an individual uses a biometric system is called enrollment. During enrollment, biometric information from an individual is captured and stored. In subsequent uses, biometric information is detected and compared with the information stored at the time of enrollment. Note that it is crucial that storage and retrieval of such systems themselves be secure if the biometric system is to be robust. The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the time it is an image acquisition system, but it can change according to the characteristics desired. The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, to enhance the input (e.g. removing background noise), to use some kind of normalization, etc. In the third block, necessary features are extracted. This is an important step, as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement that are not used in the comparison algorithm are discarded in the template to reduce the file size and to protect the identity of the enrollee. However, depending on the scope of the biometric system, original biometric image sources may be retained, such as the PIV-cards used in the Federal Information Processing Standard Personal Identity Verification (PIV) of Federal Employees and Contractors (FIPS 201).
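A compressed sketch of this processing chain is given below. Every function body is a hypothetical placeholder standing in for a real sensor driver, pre-processing routine, or feature extractor; only the order of the stages reflects the description above.

```python
# Hypothetical enrollment pipeline: sensor -> pre-processing -> feature extraction -> template.
from typing import List

def acquire_sample() -> List[float]:
    """Stand-in for the sensor block; a real system would capture an image or signal."""
    return [0.42, 0.91, 0.13, 0.77, 0.55]

def preprocess(raw: List[float]) -> List[float]:
    """Stand-in for artifact removal and normalization of the raw input."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]  # rescale values to [0, 1]

def extract_features(clean: List[float]) -> List[float]:
    """Stand-in for feature extraction; keeps only the values used for comparison."""
    return clean[:3]  # discard elements not needed by the matcher, shrinking the template

def enroll() -> List[float]:
    """Run the full chain once and return the template to be stored."""
    return extract_features(preprocess(acquire_sample()))

print(enroll())
```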
During the enrollment phase, the template is simply stored somewhere (on a card or within a database or both). During the matching phase, the obtained template is passed to a matcher that compares it with other existing templates, estimating the distance between them using any algorithm (e.g. Hamming distance). The matching program will analyze the template with the input. This will then be output for a specified use or purpose (e.g. entrance in a restricted area), though it is a fear that the use of biometric data may face mission creep.
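The one-to-one (verification) and one-to-many (identification) comparisons described above can be sketched with toy bit-string templates and a Hamming distance, as mentioned in this paragraph. The templates, the threshold value, and the helper names are hypothetical and chosen only for illustration; real systems compare much richer feature vectors, but the control flow is the same.

```python
# Minimal sketch of template matching with a Hamming distance (toy, hypothetical data).

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

# Enrollment: one reference template stored per identity.
database = {
    "alice": "1011001110100101",
    "bob":   "0110110001011010",
}

THRESHOLD = 3  # maximum distance still accepted as a match (illustrative value)

def verify(claimed_id: str, sample: str) -> bool:
    """Verification mode: one-to-one comparison against the claimed identity's template."""
    template = database.get(claimed_id)
    return template is not None and hamming_distance(sample, template) <= THRESHOLD

def identify(sample: str):
    """Identification mode: one-to-many search for the closest template within threshold."""
    best_id, best_dist = None, None
    for user_id, template in database.items():
        d = hamming_distance(sample, template)
        if best_dist is None or d < best_dist:
            best_id, best_dist = user_id, d
    return best_id if best_dist is not None and best_dist <= THRESHOLD else None

probe = "1011001110100111"     # noisy capture of Alice's trait (one bit flipped)
print(verify("alice", probe))  # True: within threshold of Alice's stored template
print(identify(probe))         # 'alice'
```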
Selection of a biometric in any practical application depends upon the characteristic measurements and user requirements. In selecting a particular biometric, factors to consider include performance, social acceptability, ease of circumvention and/or spoofing, robustness, population coverage, size of equipment needed and identity theft deterrence. The selection of a biometric is based on user requirements and considers sensor and device availability, computational time and reliability, cost, sensor size, and power consumption.
Multimodal biometric system
Multimodal biometric systems use multiple sensors or biometrics to overcome the limitations of unimodal biometric systems. For instance iris recognition systems can be compromised by aging irises and electronic fingerprint recognition can be worsened by worn-out or cut fingerprints. While unimodal biometric systems are limited by the integrity of their identifier, it is unlikely that several unimodal systems will suffer from identical limitations. Multimodal biometric systems can obtain sets of information from the same marker (i.e., multiple images of an iris, or scans of the same finger) or information from different biometrics (requiring fingerprint scans and, using voice recognition, a spoken passcode).
Multimodal biometric systems can fuse these unimodal systems sequentially, simultaneously, a combination thereof, or in series, which refer to sequential, parallel, hierarchical and serial integration modes, respectively.
Fusion of the biometrics information can occur at different stages of a recognition system. In case of feature level fusion, the data itself or the features extracted from multiple biometrics are fused. Matching-score level fusion consolidates the scores generated by multiple classifiers pertaining to different modalities. Finally, in case of decision level fusion the final results of multiple classifiers are combined via techniques such as majority voting. Feature level fusion is believed to be more effective than the other levels of fusion because the feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier. Therefore, fusion at the feature level is expected to provide better recognition results.
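As a rough illustration of the fusion levels just described, the sketch below combines per-modality similarity scores with a weighted sum (score-level fusion) and combines per-modality accept/reject decisions by majority vote (decision-level fusion). The modality names, scores, weights, and thresholds are hypothetical values used only for illustration.

```python
# Illustrative sketch of score-level and decision-level fusion (hypothetical values).

def score_level_fusion(scores: dict, weights: dict) -> float:
    """Weighted sum of normalized per-modality match scores (assumed to lie in [0, 1])."""
    return sum(weights[m] * scores[m] for m in scores)

def decision_level_fusion(decisions: list) -> bool:
    """Majority vote over per-modality accept (True) / reject (False) decisions."""
    return sum(decisions) > len(decisions) / 2

scores = {"fingerprint": 0.82, "iris": 0.64, "voice": 0.41}    # normalized similarity scores
weights = {"fingerprint": 0.5, "iris": 0.3, "voice": 0.2}      # weights sum to 1

fused = score_level_fusion(scores, weights)
print(f"Fused score: {fused:.2f} ->", "accept" if fused >= 0.6 else "reject")

per_modality_decisions = [s >= 0.6 for s in scores.values()]   # threshold each modality separately
print("Majority vote accepts:", decision_level_fusion(per_modality_decisions))
```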
Furthermore, the evolving biometric market trends underscore the importance of technological integration, showcasing a shift towards combining multiple biometric modalities for enhanced security and identity verification, aligning with the advancements in multimodal biometric systems.
Spoof attacks consist in submitting fake biometric traits to biometric systems, and are a major threat that can curtail their security. Multi-modal biometric systems are commonly believed to be intrinsically more robust to spoof attacks, but recent studies have shown that they can be evaded by spoofing even a single biometric trait.
One such proposed system is a multimodal biometric cryptosystem involving the face, fingerprint, and palm vein, proposed by Prasanalakshmi. The cryptosystem integration combines biometrics with cryptography, where the palm vein acts as a cryptographic key, offering a high level of security since palm veins are unique and difficult to forge. The fingerprint component involves minutiae extraction (terminations and bifurcations) and matching techniques; steps include image enhancement, binarization, ROI extraction, and minutiae thinning. The face component uses class-based scatter matrices to calculate features for recognition, and the palm vein acts as an unbreakable cryptographic key, ensuring only the correct user can access the system. The cancelable biometrics concept allows biometric traits to be altered slightly to ensure privacy and avoid theft. If compromised, new variations of biometric data can be issued.
For encryption, the fingerprint template is encrypted using the palm vein key via XOR operations. This encrypted fingerprint is hidden within the face image using steganographic techniques. For enrollment and verification, the biometric data (fingerprint, palm vein, face) are captured, encrypted, and embedded into a face image. The system extracts the biometric data and compares it with stored values for verification. The system was tested with fingerprint databases, achieving 75% verification accuracy at an equal error rate of 25%, with processing times of approximately 50 seconds for enrollment and 22 seconds for verification. The design offers high security due to palm vein encryption, is effective against biometric spoofing, and the multimodal approach ensures reliability if one biometric fails. It has potential for integration with smart cards or on-card systems, enhancing security in personal identification systems.
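The XOR step in the cryptosystem just described can be sketched in a few lines: XORing a template with a key-derived byte stream encrypts it, and applying the same XOR again recovers the original. The byte values below are hypothetical stand-ins for a real fingerprint template and a palm-vein-derived key; the steganographic embedding step is omitted.

```python
# Minimal sketch of XOR-based template protection (toy data, hypothetical key).
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the key, repeating the key as needed."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

fingerprint_template = bytes([12, 200, 55, 97, 33, 180, 7, 64])  # stand-in for minutiae data
palm_vein_key = bytes([81, 17, 230, 44])                         # stand-in for a palm-vein-derived key

encrypted = xor_bytes(fingerprint_template, palm_vein_key)
decrypted = xor_bytes(encrypted, palm_vein_key)  # XOR with the same key inverts the operation

assert decrypted == fingerprint_template
print(encrypted.hex(), decrypted == fingerprint_template)
```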
Performance
The discriminating powers of all biometric technologies depend on the amount of entropy they are able to encode and use in matching.
The following are used as performance metrics for biometric systems (a small worked sketch computing several of them follows the list):
False match rate (FMR, also called FAR = False Accept Rate): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted. On a similarity scale, if a person who is in reality an impostor obtains a matching score higher than the threshold, they are treated as genuine. This increases the FMR, which thus also depends upon the threshold value.
False non-match rate (FNMR, also called FRR = False Reject Rate): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected.
Receiver operating characteristic or relative operating characteristic (ROC): The ROC plot is a visual characterization of the trade-off between the FMR and the FNMR. In general, the matching algorithm performs a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Conversely, a higher threshold will reduce the FMR but increase the FNMR. A common variation is the Detection error trade-off (DET), which is obtained using normal deviation scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors).
Equal error rate or crossover error rate (EER or CER): the rate at which both acceptance and rejection errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate.
Failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input is unsuccessful. This is most commonly caused by low-quality inputs.
Failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly.
Template capacity: the maximum number of sets of data that can be stored in the system.
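The error rates listed above can be computed directly from sets of genuine and impostor comparison scores. The scores below are hypothetical, and the equal error rate is located only approximately by scanning thresholds; this is a sketch of how the metrics relate to a decision threshold, not a production evaluation.

```python
# Sketch: FMR, FNMR and an approximate EER from hypothetical comparison scores.
genuine_scores = [0.91, 0.85, 0.78, 0.66, 0.95, 0.72, 0.88, 0.60]   # same-person comparisons
impostor_scores = [0.30, 0.45, 0.52, 0.25, 0.61, 0.40, 0.35, 0.48]  # different-person comparisons

def fmr(threshold: float) -> float:
    """Fraction of impostor scores wrongly accepted at this threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def fnmr(threshold: float) -> float:
    """Fraction of genuine scores wrongly rejected at this threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Scan thresholds and pick the one where FMR and FNMR are closest (approximate EER).
thresholds = [t / 100 for t in range(0, 101)]
best_t = min(thresholds, key=lambda t: abs(fmr(t) - fnmr(t)))
print(f"threshold={best_t:.2f}  FMR={fmr(best_t):.2f}  FNMR={fnmr(best_t):.2f}")
```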
History
An early cataloguing of fingerprints dates back to 1885 when Juan Vucetich started a collection of fingerprints of criminals in Argentina. Josh Ellenbogen and Nitzan Lebovic argued that Biometrics originated in the identification systems of criminal activity developed by Alphonse Bertillon (1853–1914) and by Francis Galton's theory of fingerprints and physiognomy. According to Lebovic, Galton's work "led to the application of mathematical models to fingerprints, phrenology, and facial characteristics", as part of "absolute identification" and "a key to both inclusion and exclusion" of populations. Accordingly, "the biometric system is the absolute political weapon of our era" and a form of "soft control". The theoretician David Lyon showed that during the past two decades biometric systems have penetrated the civilian market, and blurred the lines between governmental forms of control and private corporate control. Kelly A. Gates identified 9/11 as the turning point for the cultural language of our present: "in the language of cultural studies, the aftermath of 9/11 was a moment of articulation, where objects or events that have no necessary connection come together and a new discourse formation is established: automated facial recognition as a homeland security technology."
Adaptive biometric systems
Adaptive biometric systems aim to auto-update the templates or model to the intra-class variation of the operational data. The two-fold advantages of these systems are solving the problem of limited training data and tracking the temporal variations of the input data through adaptation. Recently, adaptive biometrics have received significant attention from the research community. This research direction is expected to gain momentum because of their key promulgated advantages. First, with an adaptive biometric system, one no longer needs to collect a large number of biometric samples during the enrollment process. Second, it is no longer necessary to enroll again or retrain the system from scratch in order to cope with the changing environment. This convenience can significantly reduce the cost of maintaining a biometric system. Despite these advantages, there are several open issues involved with these systems. For example, a misclassification error (false acceptance) by the biometric system can cause adaptation using an impostor sample. However, continuous research efforts are directed to resolve the open issues associated with the field of adaptive biometrics. More information about adaptive biometric systems can be found in the critical review by Rattani et al.
Recent advances in emerging biometrics
In recent times, biometrics based on brain (electroencephalogram) and heart (electrocardiogram) signals have emerged. Another emerging modality is finger vein recognition, which uses pattern-recognition techniques based on images of human vascular patterns. The advantage of these newer technologies is that they are more fraud resistant than conventional biometrics such as fingerprints. However, such technology is generally more cumbersome and still has issues such as lower accuracy and poor reproducibility over time.
On the portability side of biometric products, more and more vendors are embracing significantly miniaturized biometric authentication systems (BAS), thereby driving substantial cost savings, especially for large-scale deployments.
Operator signatures
An operator signature is a biometric mode in which the manner in which a person uses a device or complex system is recorded as a verification template. One potential use for this type of biometric signature is to distinguish among remote users of telerobotic surgery systems that utilize public networks for communication.
Proposed requirement for certain public networks
John Michael (Mike) McConnell, a former vice admiral in the United States Navy, a former director of U.S. National Intelligence, and senior vice president of Booz Allen Hamilton, promoted the development of a future capability to require biometric authentication to access certain public networks in his keynote speech at the 2009 Biometric Consortium Conference.
A basic premise of the above proposal is that the person who has uniquely authenticated themselves using biometrics with the computer is in fact also the agent performing potentially malicious actions from that computer. However, if control of the computer has been subverted, for example because the computer is part of a botnet controlled by a hacker, then knowledge of the identity of the user at the terminal does not materially improve network security or aid law enforcement activities.
Animal biometrics
Rather than tags or tattoos, biometric techniques may be used to identify individual animals: zebra stripes, blood vessel patterns in rodent ears, muzzle prints, bat wing patterns, primate facial recognition and koala spots have all been tried.
Issues and concerns
Human dignity
Biometrics have also been considered instrumental to the development of state authority (to put it in Foucauldian terms, of discipline and biopower). By turning the human subject into a collection of biometric parameters, biometrics would dehumanize the person, infringe bodily integrity, and, ultimately, offend human dignity.
In a well-known case, Italian philosopher Giorgio Agamben refused to enter the United States in protest at the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program's requirement for visitors to be fingerprinted and photographed. Agamben argued that gathering of biometric data is a form of bio-political tattooing, akin to the tattooing of Jews during the Holocaust. According to Agamben, biometrics turn the human persona into a bare body. Agamben refers to the two words used by the Ancient Greeks for indicating "life": zoe, which is the life common to animals and humans, just life; and bios, which is life in the human context, with meanings and purposes. Agamben envisages the reduction to bare bodies for the whole of humanity. For him, a new bio-political relationship between citizens and the state is turning citizens into pure biological life (zoe), depriving them of their humanity (bios); and biometrics would herald this new world.
In Dark Matters: On the Surveillance of Blackness, surveillance scholar Simone Browne formulates a critique similar to Agamben's, citing a recent study relating to biometrics R&D that found that the gender classification system being researched "is inclined to classify Africans as males and Mongoloids as females." Consequently, Browne argues that the conception of an objective biometric technology is difficult if such systems are subjectively designed and are prone to errors as described in the study above. The stark expansion of biometric technologies in both the public and private sector magnifies this concern. The increasing commodification of biometrics by the private sector adds to this danger of loss of human value. Indeed, corporations value biometric characteristics more than individuals value them. Browne goes on to suggest that modern society should incorporate a "biometric consciousness" that "entails informed public debate around these technologies and their application, and accountability by the state and the private sector, where the ownership of and access to one's own body data and other intellectual property that is generated from one's body data must be understood as a right."
Other scholars have emphasized, however, that the globalized world is confronted with a huge mass of people with weak or absent civil identities. Most developing countries have weak and unreliable documents and the poorer people in these countries do not have even those unreliable documents. Without certified personal identities, there is no certainty of right, no civil liberty. One can claim his rights, including the right to refuse to be identified, only if he is an identifiable subject, if he has a public identity. In such a sense, biometrics could play a pivotal role in supporting and promoting respect for human dignity and fundamental rights.
Privacy and discrimination
It is possible that data obtained during biometric enrollment may be used in ways for which the enrolled individual has not consented. For example, most biometric features could disclose physiological and/or pathological medical conditions (e.g., some fingerprint patterns are related to chromosomal diseases, iris patterns could reveal sex, hand vein patterns could reveal vascular diseases, most behavioral biometrics could reveal neurological diseases, etc.). Moreover, second-generation biometrics, notably behavioral and electro-physiologic biometrics (e.g., based on electrocardiography, electroencephalography, electromyography), could also be used for emotion detection.
There are three categories of privacy concerns:
Unintended functional scope: The authentication process reveals more than is needed for authentication, such as finding a tumor.
Unintended application scope: The authentication process correctly identifies the subject when the subject did not wish to be identified.
Covert identification: The subject is identified without seeking identification or authentication, i.e. a subject's face is identified in a crowd.
Danger to owners of secured items
When thieves cannot get access to secure properties, there is a chance that the thieves will stalk and assault the property owner to gain access. If the item is secured with a biometric device, the damage to the owner could be irreversible, and potentially cost more than the secured property. For example, in 2005, Malaysian car thieves cut off a man's finger when attempting to steal his Mercedes-Benz S-Class.
Attacks at presentation
In the context of biometric systems, presentation attacks may also be called "spoofing attacks".
As per the recent ISO/IEC 30107 standard, presentation attacks are defined as "presentation to the biometric capture subsystem with the goal of interfering with the operation of the biometric system". These attacks can be either impersonation or obfuscation attacks. Impersonation attacks try to gain access by pretending to be someone else. Obfuscation attacks may, for example, try to evade face detection and face recognition systems.
Several methods have been proposed to counteract presentation attacks.
Surveillance humanitarianism in times of crisis
Biometrics are employed by many aid programs in times of crisis in order to prevent fraud and ensure that resources are properly available to those in need. Humanitarian efforts are motivated by promoting the welfare of individuals in need; however, the use of biometrics as a form of surveillance humanitarianism can create conflict due to the varying interests of the groups involved in a particular situation. Disputes over the use of biometrics between aid programs and party officials stall the distribution of resources to the people who need help the most. In July 2019, the United Nations World Food Programme and Houthi rebels were involved in a large dispute over the use of biometrics to ensure resources are provided to the hundreds of thousands of civilians in Yemen whose lives are threatened. The refusal to cooperate with the interests of the United Nations World Food Programme resulted in the suspension of food aid to the Yemeni population. The use of biometrics may provide aid programs with valuable information, but its potential solutions may not be best suited to chaotic times of crisis. For conflicts caused by deep-rooted political problems, the implementation of biometrics may not provide a long-term solution.
Cancelable biometrics
One advantage of passwords over biometrics is that they can be re-issued. If a token or a password is lost or stolen, it can be cancelled and replaced by a newer version. This is not naturally available in biometrics. If someone's face is compromised from a database, they cannot cancel or reissue it. If the electronic biometric identifier is stolen, it is nearly impossible to change a biometric feature. This renders the person's biometric feature questionable for future use in authentication, such as the case with the hacking of security-clearance-related background information from the Office of Personnel Management (OPM) in the United States.
Cancelable biometrics is a way in which to incorporate protection and the replacement features into biometrics to create a more secure system. It was first proposed by Ratha et al.
"Cancelable biometrics refers to the intentional and systematically repeatable distortion of biometric features in order to protect sensitive user-specific data. If a cancelable feature is compromised, the distortion characteristics are changed, and the same biometrics is mapped to a new template, which is used subsequently. Cancelable biometrics is one of the major categories for biometric template protection purpose besides biometric cryptosystem." In biometric cryptosystem, "the error-correcting coding techniques are employed to handle intraclass variations." This ensures a high level of security but has limitations such as specific input format of only small intraclass variations.
Several methods for generating new exclusive biometrics have been proposed. The first fingerprint-based cancelable biometric system was designed and developed by Tulyakov et al. Essentially, cancelable biometrics perform a distortion of the biometric image or features before matching. The variability in the distortion parameters provides the cancelable nature of the scheme. Some of the proposed techniques operate using their own recognition engines, such as Teoh et al. and Savvides et al., whereas other methods, such as Dabbah et al., take advantage of advances in well-established biometric research for their recognition front-end to conduct recognition. Although this increases the restrictions on the protection system, it makes the cancellable templates more accessible for available biometric technologies.
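To make the general idea concrete, the sketch below illustrates a random-projection-style cancelable transform on a fixed-length feature vector; the key derivation, dimensions and matching rule are illustrative assumptions and not a faithful implementation of any of the schemes cited above.

```python
# Minimal sketch: features are transformed with a revocable, hard-to-invert
# mapping before storage, and matching is done in the transformed domain.
# Changing the user-specific key "cancels" the old template.
import numpy as np

def make_key(seed: int, in_dim: int, out_dim: int) -> np.ndarray:
    """User/application-specific random projection matrix derived from a seed."""
    return np.random.default_rng(seed).standard_normal((out_dim, in_dim))

def transform(features: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Project and binarize, discarding information so the mapping is hard to invert."""
    return (key @ features > 0).astype(np.uint8)

def match(stored: np.ndarray, probe: np.ndarray) -> float:
    """Similarity in the transformed domain (1.0 = identical binary codes)."""
    return float((stored == probe).mean())

features = np.random.default_rng(1).standard_normal(128)   # hypothetical biometric feature vector
key = make_key(seed=42, in_dim=128, out_dim=64)
stored_template = transform(features, key)

# If the template is compromised, issue a new key; the old template becomes useless.
new_key = make_key(seed=43, in_dim=128, out_dim=64)
reissued_template = transform(features, new_key)
print(match(stored_template, transform(features, key)))   # ~1.0 with the original key
print(match(stored_template, reissued_template))          # ~0.5, i.e. uncorrelated
```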
Proposed soft biometrics
Soft biometrics are understood as less strict biometric recognition practices, based on characteristics that do not by themselves uniquely identify a person and that can be comparatively easy to fake.
Traits are physical, behavioral or adhered human characteristics that have been derived from the way human beings normally distinguish their peers (e.g. height, gender, hair color). They are used to complement the identity information provided by the primary biometric identifiers. Although soft biometric characteristics lack the distinctiveness and permanence to recognize an individual uniquely and reliably, and can be easily faked, they provide some evidence about the user's identity that could be beneficial. In other words, despite the fact that they are unable to individualize a subject, they are effective in distinguishing between people. Combinations of personal attributes like gender, race, eye color, height and other visible identification marks can be used to improve the performance of traditional biometric systems. Most soft biometrics can be easily collected and are actually collected during enrollment. Two main ethical issues are raised by soft biometrics. First, some soft biometric traits are strongly culturally based; for example, using skin color to determine ethnicity risks supporting racist approaches; biometric sex recognition at best recognizes gender from tertiary sexual characteristics and is unable to determine genetic or chromosomal sex; and soft biometrics for age recognition are often deeply influenced by ageist stereotypes. Second, soft biometrics have strong potential for categorizing and profiling people, thus risking the support of processes of stigmatization and exclusion.
Data protection of biometric data in international law
Many countries, including the United States, are planning to share biometric data with other nations.
In testimony before the US House Appropriations Committee, Subcommittee on Homeland Security on "biometric identification" in 2009, Kathleen Kraninger and Robert A Mocny commented on international cooperation and collaboration with respect to biometric data, as follows:
According to an article written in 2009 by S. Magnuson in the National Defense Magazine entitled "Defense Department Under Pressure to Share Biometric Data" the United States has bilateral agreements with other nations aimed at sharing biometric data. To quote that article:
Likelihood of full governmental disclosure
Certain members of the civilian community are worried about how biometric data is used, but full disclosure may not be forthcoming. In particular, the Unclassified Report of the United States' Defense Science Board Task Force on Defense Biometrics states that it is wise to protect, and sometimes even to disguise, the true and total extent of national capabilities in areas related directly to the conduct of security-related activities. This also potentially applies to biometrics. It goes on to say that this is a classic feature of intelligence and military operations. In short, the goal is to preserve the security of 'sources and methods'.
Countries applying biometrics
Countries using biometrics include Australia, Brazil, Bulgaria, Canada, Cyprus, Greece, China, Gambia, Germany, India, Iraq, Ireland, Israel, Italy, Malaysia, Netherlands, New Zealand, Nigeria, Norway, Pakistan, Poland, South Africa, Saudi Arabia, Tanzania, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States and Venezuela.
Among low to middle income countries, roughly 1.2 billion people have already received identification through a biometric identification program.
There are also numerous countries applying biometrics for voter registration and similar electoral purposes. According to the International IDEA's ICTs in Elections Database, some of the countries using (2017) Biometric Voter Registration (BVR) are Armenia, Angola, Bangladesh, Bhutan, Bolivia, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Colombia, Comoros, Congo (Democratic Republic of), Costa Rica, Ivory Coast, Dominican Republic, Fiji, Gambia, Ghana, Guatemala, India, Iraq, Kenya, Lesotho, Liberia, Malawi, Mali, Mauritania, Mexico, Morocco, Mozambique, Namibia, Nepal, Nicaragua, Nigeria, Panama, Peru, Philippines, Senegal, Sierra Leone, Solomon Islands, Somaliland, Swaziland, Tanzania, Uganda, Uruguay, Venezuela, Yemen, Zambia, and Zimbabwe.
India's national ID program
India's national ID program called Aadhaar is the largest biometric database in the world. It is a biometrics-based digital identity assigned for a person's lifetime, verifiable online instantly in the public domain, at any time, from anywhere, in a paperless way. It is designed to enable government agencies to deliver a retail public service, securely based on biometric data (fingerprint, iris scan and face photo), along with demographic data (name, age, gender, address, parent/spouse name, mobile phone number) of a person. The data is transmitted in encrypted form over the internet for authentication, aiming to free it from the limitations of physical presence of a person at a given place.
About 550 million residents have been enrolled and assigned 480 million Aadhaar national identification numbers as of 7 November 2013. It aims to cover the entire population of 1.2 billion in a few years. However, it is being challenged by critics over privacy concerns and the possible transformation of the state into a surveillance state, or into a banana republic. The project was also met with mistrust regarding the safety of the social protection infrastructures. To address this fear among the people, India's Supreme Court ruled on 24 August 2017 that privacy is henceforth to be regarded as a fundamental right.
Malaysia's MyKad national ID program
The current identity card, known as MyKad, was introduced by the National Registration Department of Malaysia on 5 September 2001 with Malaysia becoming the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on a built-in computer chip embedded in a piece of plastic.
Besides the main purpose of the card as a validation tool and proof of citizenship other than the birth certificate, MyKad also serves as a valid driver's license, an ATM card, an electronic purse, and a public key, among other applications, as part of the Malaysian Government Multipurpose Card (GMPC) initiative, if the bearer chooses to activate the functions.
See also
Access control
AFIS
AssureSign
BioAPI
Biometrics in schools
European Association for Biometrics
Fingerprint recognition
Fuzzy extractor
Gait analysis
Government database
Handwritten biometric recognition
Identity Cards Act 2006
International Identity Federation
Keystroke dynamics
Multiple Biometric Grand Challenge
Private biometrics
Retinal scan
Signature recognition
Smart city
Speaker recognition
Vein matching
Voice analysis
Notes
References
Further reading
Biometrics Glossary – Glossary of Biometric Terms based on information derived from the National Science and Technology Council (NSTC) Subcommittee on Biometrics. Published by Fulcrum Biometrics, LLC, July 2013
Biometrics Institute - Explanatory Dictionary of Biometrics A glossary of biometrics terms, offering detailed definitions to supplement existing resources. Published May 2023.
Delac, K., Grgic, M. (2004). A Survey of Biometric Recognition Methods.
"Fingerprints Pay For School Lunch". (2001). Retrieved 2008-03-02.
"Germany to phase-in biometric passports from November 2005". (2005). E-Government News. Retrieved 2006-06-11.
Oezcan, V. (2003). "Germany Weighs Biometric Registration Options for Visa Applicants", Humboldt University Berlin. Retrieved 2006-06-11.
Ulrich Hottelet: Hidden champion – Biometrics between boom and big brother, German Times, January 2007.
Dunstone, T. and Yager, N., 2008. Biometric system and data analysis. 1st ed. New York: Springer.
External links
Surveillance
Authentication methods
Identification
Operationalization
In research design, especially in psychology, social sciences, life sciences and physics, operationalization or operationalisation is a process of defining the measurement of a phenomenon which is not directly measurable, though its existence is inferred from other phenomena. Operationalization thus defines a fuzzy concept so as to make it clearly distinguishable, measurable, and understandable by empirical observation. In a broader sense, it defines the extension of a concept—describing what is and is not an instance of that concept. For example, in medicine, the phenomenon of health might be operationalized by one or more indicators like body mass index or tobacco smoking. As another example, in visual processing the presence of a certain object in the environment could be inferred by measuring specific features of the light it reflects. In these examples, the phenomena are difficult to directly observe and measure because they are general/abstract (as in the example of health) or they are latent (as in the example of the object). Operationalization helps infer the existence, and some elements of the extension, of the phenomena of interest by means of some observable and measurable effects they have.
Sometimes multiple or competing alternative operationalizations for the same phenomenon are available. Repeating the analysis with one operationalization after the other can determine whether the results are affected by different operationalizations. This is called checking robustness. If the results are (substantially) unchanged, the results are said to be robust against certain alternative operationalizations of the checked variables.
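The following sketch illustrates such a robustness check on synthetic data, operationalizing a latent "health" concept in two different ways; the variable names and data-generating assumptions are purely illustrative.

```python
# Minimal sketch: repeat the same analysis with two alternative
# operationalizations of "health" and compare the conclusions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent_health = rng.normal(size=n)                       # the unobservable concept
bmi = 25 - 2 * latent_health + rng.normal(0, 1.5, n)     # operationalization 1: body-mass index
smoker = (rng.random(n) < 1 / (1 + np.exp(latent_health))).astype(float)  # operationalization 2
outcome = 0.8 * latent_health + rng.normal(0, 1, n)      # e.g. self-reported well-being

candidates = {
    "inverted BMI": -bmi,               # lower BMI treated as better health in this toy example
    "non-smoker indicator": 1 - smoker,
}
for name, measure in candidates.items():
    r = np.corrcoef(measure, outcome)[0, 1]
    print(f"correlation of outcome with health operationalized as {name}: {r:.2f}")
# Similar conclusions across operationalizations indicate a robust result.
```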
The concept of operationalization was first presented by the British physicist N. R. Campbell in his 'Physics: The Elements' (Cambridge, 1920). This concept spread to humanities and social sciences. It remains in use in physics.
Theory
History
Operationalization is the scientific practice of operational definition, where even the most basic concepts are defined through the operations by which we measure them. The practice originated in the field of physics with the philosophy of science book The Logic of Modern Physics (1927), by Percy Williams Bridgman, whose methodological position is called "operationalism".
Bridgman wrote that in the theory of relativity a concept like "duration" can split into multiple different concepts. In refining a physical theory, it may be discovered that what was thought to be one concept is actually two or more distinct concepts. Bridgman proposed that if only operationally defined concepts are used, this will never happen.
Bridgman's theory was criticized because "length" is measured in various ways (e.g. it is impossible to use a measuring rod to measure the distance to the Moon), so "length" logically is not one concept but many, with some concepts requiring knowledge of geometry. Each concept is to be defined by the measuring operation used. The criticism is therefore that there are potentially infinite concepts, each defined by the method that measures it, such as angle of sighting, day of the solar year, or angular subtense of the Moon, drawing on astronomical observations gathered over a period of thousands of years.
In the 1930s, Harvard experimental psychologist Edwin Boring and his students Stanley Smith Stevens and Douglas McGregor, struggling with the methodological and epistemological problems of defining the measurement of psychological phenomena, found a solution in reformulating psychological concepts operationally, as had been proposed in the field of physics by Bridgman, their Harvard colleague. This resulted in a series of articles published by Stevens and McGregor from 1935 onward, which were widely discussed in the field of psychology and led to the Symposium on Operationism in 1945, to which Bridgman also contributed.
Operationalization
The practical 'operational definition' is generally understood as relating to the theoretical definitions that describe reality through the use of theory.
The importance of careful operationalization can perhaps be more clearly seen in the development of General Relativity. Einstein discovered that there were two operational definitions of "mass" being used by scientists: inertial, defined by applying a force and observing the acceleration, from Newton's Second Law of Motion; and gravitational, defined by putting the object on a scale or balance. Previously, no one had paid any attention to the different operations used because they always produced the same results, but the key insight of Einstein was to posit the Principle of Equivalence that the two operations would always produce the same result because they were equivalent at a deep level, and work out the implications of that assumption, which is the General Theory of Relativity. Thus, a breakthrough in science was achieved by disregarding different operational definitions of scientific measurements and realizing that they both described a single theoretical concept. Einstein's disagreement with the operationalist approach was criticized by Bridgman as follows: "Einstein did not carry over into his general relativity theory the lessons and insights he himself has taught us in his special theory." (p. 335).
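In standard textbook notation (not drawn from the sources cited in this article), the two operational definitions and the equivalence Einstein posited can be written as:

```latex
\begin{align}
  F &= m_{\mathrm{i}}\,a   && \text{(inertial mass: apply a known force, observe the acceleration)}\\
  W &= m_{\mathrm{g}}\,g   && \text{(gravitational mass: weigh the object on a scale or balance)}\\
  m_{\mathrm{i}} &= m_{\mathrm{g}} && \text{(principle of equivalence)}
\end{align}
```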
In the social sciences
Operationalization is often used in the social sciences as part of the scientific method and psychometrics. Particular concerns about operationalization arise in cases that deal with complex concepts and complex stimuli (e.g., business research, software engineering) where unique threats to validity of operationalization are believed to exist.
Anger example
For example, a researcher may wish to measure the concept "anger." Its presence, and the depth of the emotion, cannot be directly measured by an outside observer because anger is intangible. Rather, other measures are used by outside observers, such as facial expression, choice of vocabulary, loudness and tone of voice.
If a researcher wants to measure the depth of "anger" in various persons, the most direct operation would be to ask them a question, such as "are you angry", or "how angry are you?". This operation is problematic, however, because it depends upon the definition of the individual. Some people might be subjected to a mild annoyance, and become slightly angry, but describe themselves as "extremely angry," whereas others might be subjected to a severe provocation, and become very angry, but describe themselves as "slightly angry." In addition, in many circumstances it is impractical to ask subjects whether they are angry.
Since one of the measures of anger is loudness, the researcher can operationalize the concept of anger by measuring how loudly the subject speaks compared to his normal tone. However, this must assume that loudness is a uniform measure. Some might respond verbally while others might respond physically.
Economics objections
One of the main critics of operationalism in social science argues that "the original goal was to eliminate the subjective mentalistic concepts that had dominated earlier psychological theory and to replace them with a more operationally meaningful account of human behavior. But, as in economics, the supporters ultimately ended up "turning operationalism inside out". "Instead of replacing 'metaphysical' terms such as 'desire' and 'purpose'" they "used it to legitimize them by giving them operational definitions." Thus in psychology, as in economics, the initial, quite radical operationalist ideas eventually came to serve as little more than a "reassurance fetish" for mainstream methodological practice."
Tying to conceptual frameworks
The above discussion links operationalization to measurement of concepts. Many scholars have worked to operationalize concepts like job satisfaction, prejudice, anger etc. Scale and index construction are forms of operationalization. There is not one perfect way to operationalize. For example, in the United States the concept distance driven would be operationalized as miles, whereas kilometers would be used in Europe.
Operationalization is part of the empirical research process. An example is the empirical research question of if job satisfaction influences job turnover. Both job satisfaction and job turnover need to be measured. The concepts and their relationship are important — operationalization occurs within a larger framework of concepts. When there is a large empirical research question or purpose the conceptual framework that organizes the response to the question must be operationalized before the data collection can begin. If a scholar constructs a questionnaire based on a conceptual framework, they have operationalized the framework. Most serious empirical research should involve operationalization that is transparent and linked to a conceptual framework.
Another example, the hypothesis Job satisfaction reduces job turnover is one way to connect (or frame) two concepts – job satisfaction and job turnover. The process of moving from the idea job satisfaction to the set of questionnaire items that form a job satisfaction scale is operationalization. For example, it is possible to measure job satisfaction using only two simple questions: "All in all, I am satisfied with my job", and, "In general, I like my job."
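As a minimal sketch of this kind of operationalization, the snippet below scores hypothetical 1–5 Likert responses to the two items quoted above; the field names and data are invented for illustration.

```python
# Minimal sketch: operationalize "job satisfaction" as the mean of two Likert items,
# yielding a numeric variable that can later be related to turnover.
responses = [
    {"satisfied_with_job": 4, "like_my_job": 5, "left_within_year": 0},
    {"satisfied_with_job": 2, "like_my_job": 1, "left_within_year": 1},
    {"satisfied_with_job": 3, "like_my_job": 4, "left_within_year": 0},
]

for r in responses:
    r["job_satisfaction"] = (r["satisfied_with_job"] + r["like_my_job"]) / 2

print([r["job_satisfaction"] for r in responses])   # operationalized scores: [4.5, 1.5, 3.5]
```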
Operationalization uses a different logic when testing a formal (quantitative) hypothesis than when testing a working hypothesis (qualitative). For formal hypotheses the concepts are represented empirically (or operationalized) as numeric variables and tested using inferential statistics. Working hypotheses (particularly in the social and administrative sciences), however, are tested through evidence collection and the assessment of the evidence. The evidence is generally collected within the context of a case study. The researcher asks whether the evidence is sufficient to "support" the working hypothesis. Formal operationalization would specify the kinds of evidence needed to support the hypothesis as well as evidence that would "fail" to support it. Robert Yin recommends developing a case study protocol as a way to specify the kinds of evidence needed during the data collection phases. He identifies six sources of evidence: documentation; archival records; interviews; direct observations; participant observation; and physical or cultural artifacts.
In the field of public administration, Shields and Tajalli (2006) have identified five kinds of conceptual frameworks (working hypothesis, descriptive categories, practical ideal type, operations research, and formal hypothesis). They explain and illustrate how each of these conceptual frameworks can be operationalized. They also show how to make conceptualization and operationalization more concrete by demonstrating how to form conceptual framework tables that are tied to the literature and operationalization tables that lay out the specifics of how to operationalize the conceptual framework (measure the concepts).
See also
Proxy (statistics)
Notes
Further reading
A. Cornelius Benjamin (1955) Operationism via HathiTrust
Social sciences
Scientific method
Epistemology of science
Universalization
Universalization is an incipient concept describing the next phase of human development, marking the transition from trans-national to interplanetary relations and much more aggressive exploitation of opportunities that lie beyond the confines of Earth. As both a process and an end state, universalization implies an increasingly pervasive, abiding and singular human focus not only on global issues per se but on social, technological, economic and cultural challenges and opportunities extending into our solar system, our galaxy, and well beyond, where cooperation supersedes conflict negotiation. Its origins are associated with the incipient expansion of social, economic, and political relationships that have emerged in the wake of globalization and that increasingly define the planet, its place within the broader universe and the sustainability of humanity and our diversity.
Overview
The concept was inspired by Kwame Anthony Appiah's work on cosmopolitanism, and particularly his emphasis on the need to develop a transcendent, collaborative model of human interaction that looks beyond the limited confines of current human relationships. Underlying principles and activities associated with universalization have also been discussed in a number of works dealing with prospective human exploitation of natural resources in space.
Evidence of the transition from globalisation to the century of "universalization" is provided by the exponential growth in outer space activity across all sectors of human endeavour, including exploration (global investments by national governments and consortia of $65 billion annually), governance (the United Nations Office for Outer Space Affairs, the International Association for Space Safety), commerce (aerospace industries such as Boeing, Teledyne, MDA), resource exploitation (Moon Express), tourism (Virgin Galactic, XCOR), communications (satellites, probes, inter-planetary internet), education (the International Space University, Singularity University, International Institute of Space Commerce), research (observatories in Hawaii and Chile, the Square Kilometer Array, the Hubble Space Telescope), and settlement (Mars One).
Another reading of "universalization" has been suggested by Gregory Paul Meyjes. Questioning the various processes (economic, political, cultural) by which globalization or globalisation has favored expeditious Anglo-cultural dominance at the expense of a more broadly-based, gradually-emerging world civilization, Meyjes argues for cultural policies that support "ecological" relations between local ethnocultural traditions, to protect cultural specificity in the short term and thus to allow as great a variety of groups as possible to voluntarily and organically contribute to the global whole. Meyjes thus proposes universalization as a process of (largely) unfettered yet non-threatening exchange (such as with the aid of an International Auxiliary Language) between and among the world's state-level and sub-state-level groups and "nations" – i.e. a participatory transnational process that informs the gradual emergence of an optimally-inclusive world civilization.
References
Psychotherapy
Emotional issues
Convergent thinking
Convergent thinking is a term coined by Joy Paul Guilford as the opposite of divergent thinking. It generally means the ability to give the "correct" answer to questions that do not require novel ideas, for instance on standardized multiple-choice tests for intelligence.
Relevance
Convergent thinking is the type of thinking that focuses on coming up with the single, well-established answer to a problem. It is oriented toward deriving the single best, or most often correct answer to a question. Convergent thinking emphasizes speed, accuracy, and logic and focuses on recognizing the familiar, reapplying techniques, and accumulating stored information. It is most effective in situations where an answer readily exists and simply needs to be either recalled or worked out through decision making strategies. A critical aspect of convergent thinking is that it leads to a single best answer, leaving no room for ambiguity. In this view, answers are either right or wrong. The solution that is derived at the end of the convergent thinking process is the best possible answer the majority of the time.
Convergent thinking is also linked to knowledge as it involves manipulating existing knowledge by means of standard procedures. Knowledge is another important aspect of creativity. It is a source of ideas, suggests pathways to solutions, and provides criteria of effectiveness and novelty. Convergent thinking is used as a tool in creative problem solving. When an individual is using critical thinking to solve a problem they consciously use standards or probabilities to make judgments. This contrasts with divergent thinking where judgment is deferred while looking for and accepting many possible solutions.
Convergent thinking is often used in conjunction with divergent thinking. Divergent thinking typically occurs in a spontaneous, free-flowing manner, where many creative ideas are generated and evaluated. Multiple possible solutions are explored in a short amount of time, and unexpected connections are drawn. After the process of divergent thinking has been completed, ideas and information are organized and structured using convergent thinking; decision-making strategies are then applied, leading to a single best, or most often correct, answer. Examples of divergent thinking include using brainstorming, free writing and creative thinking at the beginning of the problem-solving process to generate possible solutions that can be evaluated later. Once a sufficient number of ideas have been explored, convergent thinking can be used. Knowledge, logic, probabilities and other decision-making strategies are taken into consideration as the solutions are evaluated individually in a search for a single best answer, which when reached is unambiguous.
Convergent vs. divergent thinking
Personality
The personality correlates of divergent and convergent thinking have been studied. Results indicate that many personality traits are associated with divergent thinking (e.g., ideational fluency). Two of the most commonly identified correlates are Openness and Extraversion, which have been found to facilitate divergent thinking production. Openness assesses intellectual curiosity, imagination, artistic interests, liberal attitudes, and originality. See Divergent thinking page for further details.
The fact that Openness was found to be one of the strongest personality correlates of divergent thinking is not surprising, as previous studies have suggested that Openness be interpreted as a proxy of creativity. Although Openness conceptualizes individual differences in facets other than creativity, the high correlation between Openness and divergent thinking is indicative of two different ways of measuring the same aspects of creativity. Openness is a self-report of one's preference for thinking "outside the box". Divergent thinking tests represent a performance-based measure of the same.
While some studies have found no personality effects on convergent thinking, large-scale meta-analyses have found numerous personality traits to be related to such reasoning abilities (e.g., corrected r = .31 with openness and -.30 with the volatility aspect of neuroticism).
Brain activity
The changes in brain activity were studied in subjects during both convergent and divergent thinking. To do this, researchers studied electroencephalography (EEG) patterns of subjects during convergent and divergent thinking tasks. Different patterns of change in the EEG parameters were found during each type of thinking. When compared with a resting control group, both convergent and divergent thinking produced significant desynchronization of the Alpha 1 and 2 rhythms. Meanwhile, convergent thinking induced coherence increases in the Theta 1 band that were more caudal and right-sided. On the other hand, divergent thinking demonstrated amplitude decreases in the caudal regions of the cortex in the Theta 1 and 2 bands. The large increase in amplitude and coherence indicates a close synchronization between both hemispheres of the brain.
The successful generation of hypotheses during divergent thinking performance seems to induce positive emotions, which, in part, can be due to the increase in complexity and in performance measures of creative thinking, such as psycho-inter-hemispheric coherence. Finally, the obtained dominance of the right hemisphere and 'the cognitive axis', the coupling of the left occipital – right frontal regions in contrast to the right occipital – left frontal 'axis' characterizing analytic thinking, may reflect the EEG pattern of unconscious mental processing during successful divergent thinking.
Convergent and divergent thinking depend on the locus coeruleus neurotransmission system, which modulates noradrenaline levels in the brain. This system plays important roles in cognitive flexibility and the explore/exploit tradeoff problem (multi-armed bandit problem).
Intellectual ability
A series of standard intelligence tests were used to measure both the convergent and divergent thinking abilities of adolescents. Results indicate that subjects classified as high in divergent thinking had significantly higher word fluency and reading scores than subjects classified as low in divergent thinking. Furthermore, those who were high in divergent thinking also demonstrated higher anxiety and penetration scores. Thus, subjects who are high in divergent thinking can be characterized as having perceptual processes that mature and become adequately controlled in an unconventional way.
Conversely, subjects in the high convergent thinking group showed higher grade averages for the previous school year, less difficulty with homework and also indicated that their parents pressed them towards post-secondary education. These were the only significant relationships regarding the convergent thinking measures. This suggests that these cognitive dimensions are independent of one another. Future investigations into this topic should focus more upon the developmental, cognitive and perceptual aspects of personality among divergent and convergent thinkers, rather than their attitude structures.
Creative ability
Creative ability was measured in a study using convergent tasks, which require a single correct answer, and divergent tasks, which require producing many different answers of varying correctness. Two types of convergent tasks were used. The first was a remote associates task, which gave the subject three words and asked what single word all three are related to. The second type of convergent thinking task was insight problems, which gave the subjects some contextual facts and then asked them a question requiring interpretation.
For the remote associates tasks, the convergent thinkers correctly solved more of the five remote associates problems than did those using divergent thinking. This was shown to be a significant difference by a one-way ANOVA. In addition, when responding to insight problems, participants using convergent thinking solved more insight problems than did the control group; however, there was no significant difference between subjects using convergent or divergent thinking.
For the divergent thinking tasks, although together all of the divergent tasks demonstrated a correlation, they were not significant when examined between conditions.
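The sketch below shows, on invented scores rather than the study's data, how a one-way ANOVA of the kind described above could be run; the group sizes and values are illustrative assumptions.

```python
# Minimal sketch: one-way ANOVA comparing remote-associates problems solved
# by participants primed for convergent vs. divergent thinking (hypothetical data).
from scipy import stats

convergent_group = [4, 5, 3, 5, 4, 4, 5, 3]   # problems solved out of 5
divergent_group  = [2, 3, 2, 4, 3, 2, 3, 2]

f_stat, p_value = stats.f_oneway(convergent_group, divergent_group)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests a group difference
```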
Mood
With increasing evidence suggesting that emotions can affect underlying cognitive processes, recent approaches have also explored the opposite, that cognitive processes can also affect one's mood. Research indicates that preparing for a creative thinking task induces mood swings depending on what type of thinking is used for the task.
The results demonstrate that carrying out a task requiring creative thinking does have an effect on one's mood. This provides considerable support for the idea that mood and cognition are not only related, but also that this relation is reciprocal. Additionally, divergent and convergent thinking impact mood in opposite ways. Divergent thinking led to a more positive mood, whereas convergent thinking had the opposite effect, leading to a more negative mood.
Practical use
Convergent thinking is a fundamental tool in a child's education. Today, most educational opportunities are tied to one's performance on standardized tests that are often multiple choice in nature. When a student contemplates the possible answers available, they use convergent thinking to weigh alternatives within a construct. This allows one to find a single best solution that is measurable.
Examples of convergent questions in teaching in the classroom:
On reflecting over the entirety of the play Hamlet, what were the main reasons why Ophelia went mad?
What is the chemical reaction for photosynthesis?
What are signs of nitrogen deficiency in plants?
Which breeds of livestock would be best adapted for South Texas?
Criticism
The idea of convergent thinking has been critiqued by researchers who claim that not all problems have solutions that can be effectively ranked. Convergent thinking assigns a position to one solution over another. The problem is that when one is dealing with more complex problems, the individual may not be able to appropriately rank the solutions available to them. In these instances, researchers indicate that when dealing with complex problems, other variables such as one's gut feeling or instinctive problem solving abilities also have a role in determining a solution to a given problem.
Furthermore, convergent thinking has also been said to devalue minority arguments. In a study where experimental manipulations were used to motivate subjects to engage in convergent or divergent thinking when presented with either majority or minority support for persuasive arguments, a pattern emerged under the convergent thinking condition in which majority support produced more positive attitudes on the focal issue. Conversely, minority support for the argument had no effect on the subjects. Convergent thinkers may be so focused on selecting the best answer that they fail to appropriately evaluate minority opinion and could end up dismissing accurate solutions.
See also
Divergent thinking
References
Problem solving skills
Environmental impact of mining
Environmental impact of mining can occur at local, regional, and global scales through direct and indirect mining practices. Mining can cause erosion, sinkholes, loss of biodiversity, or the contamination of soil, groundwater, and surface water by chemicals emitted from mining processes. These processes also affect the atmosphere through carbon emissions, which contribute to climate change.
Some mining methods (lithium mining, phosphate mining, coal mining, mountaintop removal mining, and sand mining) may have such significant environmental and public health effects that mining companies in some countries are required to follow strict environmental and rehabilitation codes to ensure that the mined area returns to its original state. Mining can provide various advantages to societies, yet it can also spark conflicts, particularly regarding land use both above and below the surface.
Mining operations remain rigorous and intrusive, often resulting in significant environmental impacts on local ecosystems and broader implications for planetary environmental health. To accommodate mines and associated infrastructure, land is cleared extensively, consuming significant energy and water resources, emitting air pollutants, and producing hazardous waste.
According to The World Counts page "The amount of resources mined from Earth is up from 39.3 billion tons in 2002. A 55 percent increase in less than 20 years. This puts Earth's natural resources under heavy pressure. We are already extracting 75 percent more than Earth can sustain in the long run."
Erosion
Erosion of exposed hillsides, mine dumps, tailings dams and resultant siltation of drainages, creeks and rivers can significantly affect the surrounding areas, a prime example being the giant Ok Tedi Mine in Papua New Guinea. Soil erosion can decrease the water availability for plant growth, resulting in a population decline in the plant ecosystem.
Soil erosion occurs from physical disturbances caused by mining activities (e.g. excavation, blasting, etc.) in wilderness areas. This causes disturbances of tree root systems, a crucial component in stabilizing soil and preventing erosion. Eroded materials can be transported by runoff into nearby surface water, leading to a process known as sedimentation. Moreover, altered drainage patterns redirect water flow, intensifying erosion and sedimentation of nearby water bodies. The cumulative impact results in degraded water quality, loss of habitat, and long-lasting ecological damage.
Sinkholes
A sinkhole at or near a mine site is typically caused from the failure of a mine roof from the extraction of resources, weak overburden or geological discontinuities. The overburden at the mine site can develop cavities in the subsoil or rock, which can infill with sand and soil from the overlying strata. These cavities in the overburden have the potential to eventually cave in, forming a sinkhole at the surface. The sudden failure of earth creates a large depression at the surface without warning, this can be seriously hazardous to life and property. Sinkholes at a mine site can be mitigated with the proper design of infrastructure such as mining supports and better construction of walls to create a barrier around an area prone to sinkholes. Back-filling and grouting can be done to stabilize abandoned underground workings.
Water pollution
Mining can have harmful effects on surrounding surface water and groundwater. If proper precautions are not taken, unnaturally high concentrations of chemicals, such as arsenic, sulphuric acid, and mercury can spread over a significant area of surface or subsurface water. Large amounts of water used for mine drainage, mine cooling, aqueous extraction and other mining processes increase the potential for these chemicals to contaminate ground and surface water. As mining produces copious amounts of waste water, disposal methods are limited due to the contaminants within the waste water. Runoff containing these chemicals can lead to the devastation of the surrounding vegetation. Dumping the runoff into surface waters or forests is the worst option; submarine tailings disposal is regarded as a better option (if the waste is pumped to great depth). Land storage and refilling of the mine after it has been depleted is better still, provided no forests need to be cleared to store the debris. The contamination of watersheds resulting from the leakage of chemicals also affects the health of the local population.
In well-regulated mines, hydrologists and geologists take careful measurements of water to guard against any type of water contamination that could be caused by the mine's operations. The minimization of environmental degradation is enforced in American mining practices by federal and state law, which restrict operators to meeting standards for the protection of surface water and groundwater from contamination. This is best done through the use of non-toxic extraction processes such as bioleaching. Furthermore, protection from water contamination should continue after a mine has been decommissioned, as surrounding water systems can still become contaminated years after active use.
Air pollution
The mining industry contributes between 4 and 7% of global greenhouse gas emissions. The production of greenhouse gases, such as CO2 and CH4, can occur both directly and indirectly throughout the mining process and can have significant impacts on global climate change.
Air pollutants have a negative impact on plant growth, primarily through interfering with resource accumulation. Once leaves are in close contact with the atmosphere, many air pollutants, such as O3 and NOx, affect the metabolic function of the leaves and interfere with net carbon fixation by the plant canopy. Air pollutants that are first deposited on the soil, such as heavy metals, first affect the functioning of roots and interfere with soil resource capture by the plant. These reductions in resource capture (production of carbohydrate through photosynthesis, mineral nutrient uptake and water uptake from the soil) will affect plant growth through changes in resource allocation to the various plant structures. When air pollution stress co-occurs with other stresses, e.g. water stress, the outcome on growth will depend on a complex interaction of processes within the plant. At the ecosystem level, air pollution can shift the competitive balance among the species present and may lead to changes in the composition of the plant community. The impacts of air pollution can vary depending on the type and concentration of pollutant released. In agroecosystems these changes may be manifest in reduced economic yield.
Adaptation and mitigation techniques to reduce air pollution created by mining are often focused on using cleaner energy sources. Switching from coal and diesel to gasoline can reduce the concentration of greenhouse gases. Furthermore, switching to renewable energy sources, such as solar power and hydropower, may reduce greenhouse gas emissions further. Air pollution may also be reduced by maximizing the efficiency of the mine and conducting a life-cycle assessment to minimize the environmental impacts.
Acid rock drainage
Sub-surface mining often progresses below the water table, so water must be constantly pumped out of the mine in order to prevent flooding. When a mine is abandoned, the pumping ceases, and water floods the mine. This introduction of water is the initial step in most acid rock drainage situations. Acid rock drainage occurs naturally within some environments as part of the weathering process but is exacerbated by large-scale earth disturbances characteristic of mining and other large construction activities, usually within rocks containing an abundance of sulfide minerals. Areas where the earth has been disturbed (e.g. construction sites, subdivisions, and transportation corridors) may create acid rock drainage. In many localities, the liquid that drains from coal stocks, coal handling facilities, coal washeries, and coal waste tips can be highly acidic, and in such cases it is treated as acid mine drainage (AMD). The same type of chemical reactions and processes may occur through the disturbance of acid sulfate soils formed under coastal or estuarine conditions after the last major sea level rise, and constitutes a similar environmental hazard.
Acid mine drainage formation occurs when rocks containing sulfide minerals (e.g. pyrite) are exposed to water and air, producing an acidic, sulfate-rich drainage. These acidic waters can leach various heavy metals from the surrounding rocks and soil. The acidic and metal-rich AMD is a major source of environmental pollution, contaminating nearby surface waters and groundwater, harming ecosystems and rendering water unsuitable for drinking. AMD can persist for extended periods, even long after mining activities have ceased, leading to continual environmental degradation.
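The first stage of this process is commonly summarized by the standard pyrite-oxidation reaction below (textbook geochemistry, not drawn from a source specific to this article):

```latex
\begin{equation}
  2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O}
  \;\longrightarrow\;
  2\,\mathrm{Fe^{2+}} + 4\,\mathrm{SO_4^{2-}} + 4\,\mathrm{H^+}
\end{equation}
```

The released hydrogen ions account for the acidity, and the acidic water then mobilizes heavy metals from the surrounding rock, as described above.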
The five principal technologies used to monitor and control water flow at mine sites are diversion systems, containment ponds, groundwater pumping systems, subsurface drainage systems, and subsurface barriers. In the case of AMD, contaminated water is generally pumped to a treatment facility that neutralizes the contaminants. A 2006 review of environmental impact statements found that "water quality predictions made after considering the effects of mitigation largely underestimated actual impacts to groundwater, seeps, and surface water".
Heavy metals
Heavy metals are naturally occurring elements that have a high atomic weight and a density at least 5 times greater than that of water. Heavy metals are not readily degradable and therefore, are subjected to persistence in the environment and bioaccumulation in organisms. Their multiple industrial, domestic, agricultural, medical and technological applications have led to their wide distribution in the environment; raising concerns over their potential effects on human health and the environment.
Naturally occurring heavy metals are present in forms that are not readily available for uptake by plants. They are ordinarily present in insoluble forms, such as mineral structures, or in precipitated or complexed forms that are not readily available for plant uptake. Naturally occurring heavy metals have a high adsorption capacity in soil and are thus not readily available to living organisms. However, the effects of heavy metal transformation and interactions with soil organisms are highly dependent on the physicochemical properties of the soil and the organisms present. The binding energy between naturally occurring heavy metals and soil is very high compared to that of heavy metals from anthropogenic sources.
Dissolution and transport of metals and heavy metals by run-off and ground water is another example of environmental problems with mining, such as the Britannia Mine, a former copper mine near Vancouver, British Columbia. Tar Creek, an abandoned mining area in Picher, Oklahoma that is now an Environmental Protection Agency Superfund site, also suffers from heavy metal contamination. Water in the mine containing dissolved heavy metals such as lead and cadmium leaked into local groundwater, contaminating it. Furthermore, the presence of heavy metals in freshwater may also affect the water chemistry. High concentrations of heavy metals can impact pH, buffering capacity, and dissolved oxygen. Long-term storage of tailings and dust can lead to additional problems, as they can be easily blown off site by wind, as occurred at Skouriotissa, an abandoned copper mine in Cyprus. Environmental changes such as global warming and increased mining activity may increase the content of heavy metals in the stream sediments. These impacts may also be enhanced in areas located downstream from the heavy metal source.
Effect on biodiversity
Mining impacts biodiversity across various spatial scales. Locally, the immediate effects are seen through direct habitat destruction at the mining sites. On a broader scale, mining activities contribute to significant environmental problems such as pollution and climate change, which have regional and global repercussions. Consequently, conservation strategies need to be multifaceted and geographically inclusive, tackling both the direct impacts at specific sites and the more extensive, far-reaching environmental consequences. The establishment of a mine is a major habitat modification, and smaller perturbations occur on a scale larger than the exploitation site itself, for example contamination of the environment by mine-waste residuals. Adverse effects can be observed long after the end of mine activity. Destruction or drastic modification of the original site and the release of anthropogenic substances can have a major impact on biodiversity in the area. Destruction of habitat is the main component of biodiversity losses, but direct poisoning by mine-extracted material, and indirect poisoning through food and water, can also affect animals, vegetation and microorganisms. Habitat modifications such as changes in pH and temperature disturb communities in the surrounding area. Endemic species are especially sensitive, since they require very specific environmental conditions; destruction or even slight modification of their habitat puts them at risk of extinction. Habitats can be damaged when there is not enough terrestrial product, as well as by non-chemical products, such as large rocks from the mines that are discarded in the surrounding landscape with no concern for impacts on natural habitat.
Concentrations of heavy metals are known to decrease with distance from the mine, and effects on biodiversity tend to follow the same pattern. Impacts can vary greatly depending on mobility and bioavailability of the contaminant: less-mobile molecules will stay inert in the environment while highly mobile molecules will easily move into another compartment or be taken up by organisms. For example, speciation of metals in sediments could modify their bioavailability, and thus their toxicity for aquatic organisms.
Biomagnification plays an important role in polluted habitats: assuming that concentration levels are not high enough to directly kill exposed organisms, mining impacts on biodiversity should be greater for species at the top of the food chain because of this phenomenon.
Adverse mining effects on biodiversity depend to a great extent on the nature of the contaminant, the level of concentration at which it can be found in the environment, and the nature of the ecosystem itself. Some species are quite resistant to anthropogenic disturbances, while others will completely disappear from the contaminated zone. Time alone does not seem to allow the habitat to recover completely from the contamination. Remediation practices take time, and in most cases will not enable the recovery of the original diversity present before the mining activity took place.
Aquatic organisms
The mining industry can impact aquatic biodiversity in several ways. One is direct poisoning; the risk of this is higher when contaminants are mobile in the sediment or bioavailable in the water. Mine drainage can modify water pH, making it hard to differentiate direct impacts on organisms from impacts caused by pH changes. Effects can nonetheless be observed and proven to be caused by pH modifications. Contaminants can also affect aquatic organisms through physical effects: streams with high concentrations of suspended sediment limit light, thus diminishing algae biomass. Metal oxide deposition can limit biomass by coating algae or their substrate, thereby preventing colonization.
Factors that impact communities in acid mine drainage sites vary temporally and seasonally: temperature, rainfall, pH, salinisation and metal quantity all display variations over the long term, and can heavily affect communities. Changes in pH or temperature can affect metal solubility, and thereby the bioavailable quantity that directly impacts organisms. Moreover, contamination persists over time: ninety years after a pyrite mine closure, water pH was still very low and microorganism populations consisted mainly of acidophilic bacteria.
One major case of contamination that proved extremely toxic to aquatic organisms occurred in Minamata Bay, Japan. Methylmercury released into wastewater by an industrial chemical company accumulated in fish and shellfish, contaminating surrounding species, many of which died, and poisoning the people who ate the contaminated seafood; the resulting illness, identified in Kumamoto, became known as Minamata disease. Another significant case study illuminates the impact of phosphate mining on coral reef development adjacent to Christmas Island. In this scenario, phosphate-rich runoff was transported from local waterways to coral reefs off the coast, where reef sediment phosphate levels reached some of the highest levels ever recorded in Australian reefs at 54,000 mg/kg. Phosphate contamination has resulted in a noticeable decline in keystone reef-building species, such as crustose coralline algae and branching coral. This decline is likely due to phosphorus serving as a fertilizer for macroalgae, allowing them to outcompete calcareous organisms.
Microorganisms
Algae communities are less diverse in acidic water containing high zinc concentrations, and mine drainage stress decreases their primary production. The diatom community is greatly modified by any chemical change and by pH, and high metal concentrations diminish the abundance of planktonic species in the phytoplankton assemblage. Some diatom species may grow in high-metal-concentration sediments. In sediments close to the surface, cysts suffer from corrosion and heavy coating. In very polluted conditions, total algae biomass is quite low, and the planktonic diatom community is missing. Similarly to phytoplankton, zooplankton communities are heavily altered in cases where the mining impact is severe. In cases of functional complementarity, however, it is possible that the phytoplankton and zooplankton mass remains stable.
When assessing the potential risks of mining to marine microbiomes, it is important to broaden the scope to include other vulnerable communities, such as those found at the seafloor, which are at risk of ecosystem degradation due to deep-sea mining. Microbial life plays a vital role in fulfilling a variety of niches and supporting the productivity of biogeochemical cycles within seafloor ecosystems. Primary zones of deep-sea mining include active hydrothermal vents along spreading centers (e.g., mid-ocean ridges, volcanic arcs) on the ocean floor where sulfide minerals have been deposited. Other extraction zones include inactive hydrothermal vents with similar mineral deposits, polymetallic nodules (mainly manganese) along the ocean floor, and sometimes polymetallic crusts (cobalt crusts) found at seamounts. These mineral deposits are often found in exotic ecosystems capable of surviving under extreme chemical conditions and abnormally high temperatures. Resource extraction has only increased over time, leading to the potential for significant losses of microbial ecosystem services at hydrothermal vents and increased ecosystem service degradation at inactive massive sulfide deposits. Potential drivers of ecosystem degradation via deep-sea mining include acidification, the release of toxic heavy metals, removal of slow-growing benthic fauna, burial and respiration impairment of benthic organisms from the generation of sediment plumes, and disruption of the food supply chain among benthopelagic species. These potential outcomes can alter the chemical balance of these environments, leading to a cascade of declines in benthic and pelagic species that rely on hydrothermal vents as sources of nutrient availability. Ensuring the preservation of hydrothermal microbes and the species that depend on them is critical for retaining the rich biodiversity of seafloor environments and the ecosystem services they provide.
Macro-organisms
Water insect and crustacean communities are modified around a mine, resulting in low trophic completeness, with the community becoming dominated by predators. However, biodiversity of macroinvertebrates can remain high if sensitive species are replaced with tolerant ones. When diversity within the area is reduced, there is sometimes no effect of stream contamination on abundance or biomass, suggesting that tolerant species fulfilling the same function take the place of sensitive species in polluted sites. pH reduction in addition to elevated metal concentrations can also have adverse effects on macroinvertebrates' behaviour, showing that direct toxicity is not the only issue. Fish can also be affected by pH, temperature variations, and chemical concentrations.
Terrestrial organisms
Vegetation
Soil texture and water content can be greatly modified in disturbed sites, leading to plant community changes in the area. Most plants have a low tolerance for metal concentrations in the soil, but sensitivity differs among species. Grass diversity and total coverage are less affected by high contaminant concentrations than forbs and shrubs. Mine waste materials, rejects, or traces from mining activity can be found in the vicinity of the mine, sometimes far away from the source. Established plants cannot move away from perturbations, and will eventually die if their habitat is contaminated by heavy metals or metalloids at a concentration that is too elevated for their physiology. Some species are more resistant and will survive these levels, and some non-native species that can tolerate these concentrations in the soil will migrate into the lands surrounding the mine to occupy the ecological niche. This can also leave the soil vulnerable to erosion, which would make it uninhabitable for plants.
Plants can be affected through direct poisoning; for example, arsenic soil content reduces bryophyte diversity. Vegetation can also be contaminated by other metals such as nickel and copper. Soil acidification through pH reduction by chemical contamination can also lead to a diminished species number. Contaminants can modify or disturb microorganisms, thus modifying nutrient availability and causing a loss of vegetation in the area. Some tree roots divert away from deeper soil layers in order to avoid the contaminated zone, and therefore lack anchorage within the deep soil layers, with the potential of being uprooted by the wind when their height and shoot weight increase. In general, root exploration is reduced in contaminated areas compared to non-polluted ones. Plant species diversity will remain lower in reclaimed habitats than in undisturbed areas. Depending on what specific type of mining is done, all vegetation may be removed from the area before the actual mining is started.
Cultivated crops can be a problem near mines. Most crops can grow on weakly contaminated sites, but yield is generally lower than it would have been in regular growing conditions. Plants also tend to accumulate heavy metals in their aerial organs, possibly leading to human intake through fruits and vegetables. Regular consumption of contaminated crops might lead to health problems caused by long-term metal exposure. Cigarettes made from tobacco grown on contaminated sites might also have adverse effects on human populations, as tobacco tends to accumulate cadmium and zinc in its leaves.
Moreover, plants which have a high tendency to accumulate heavy metals, such as Noccaea caerulescens, may be used for phytoextraction. In the phytoextraction process, plants extract heavy metals present in the soil and store them in portions of the plant which can be easily harvested. Once the plant which has accumulated the heavy metals is harvested, the stored heavy metals are effectively removed from the soil.
Animals
Habitat destruction is one of the main issues of mining activity. Huge areas of natural habitat are destroyed during mine construction and exploitation, forcing animals to leave the site. In addition, desirable minerals exist across all biodiversity-rich areas, and future mineral demands are expected to rise. This indicates a significant risk for animal biodiversity, considering mining is believed to have some of the most profound negative impacts on local fauna, such as reducing the availability of food and shelter, which in turn limits the number of individuals a region can sustain. Moreover, mineral exploitation poses additional threats to wildlife beyond habitat degradation; mining is believed to produce adverse impacts on wildlife in forms such as soil and water contamination, suppression of vegetation, and modifications in landscape structure.
Landscape alterations, in particular, pose a significant threat to medium and large-sized forest-dependent mammals that require large areas to meet their needs. Medium-large mammals vary in their tolerance to anthropogenically driven changes to their ecosystems; this impacts their ability to find food, move, and avoid hunting pressures. These same fauna are responsible for shaping the structure of forested areas via processes such as predation, trampling of low-lying vegetation, and seed consumption/dispersion. Outside of physically altering the structure of local landscapes, mining can also produce large amounts of residual waste reducing the quality of air and water, thereby reducing the amount of accessible land for large mammals. This relationship has been highlighted in iron-rich areas of India where mining's anthropogenic impacts have been reduced by regulations on waste production, mitigating the adverse effects of mineral extraction on local fauna such as elephants. While mining is believed to directly impact fauna near the extraction site, it may also have indirect effects on mammal biodiversity by driving the construction of roads and infrastructure accommodating mining company employees. There remains a glaring gap in studies regarding the indirect impacts of mining on mammals, indicating that we must advocate for incentives to support studies aimed at testing the health of these larger mammals. This will allow for more effective conservation efforts to preserve animal biodiversity.
One case study demonstrating the impacts of mining on animal biodiversity takes place in Western Ghana. Over the past several decades mining activities have rapidly expanded across Africa; this has driven large-scale deforestation and increased human settlement in the mineral-rich eastern and western regions of Brong-Ahafo (forest land in Ghana). Increased settlement has facilitated the migration of loggers, miners, and other workers, creating further stress on forested areas, with many migrants hunting wild animals for bushmeat. This example highlights a significant indirect impact of mining on local fauna in the Brong-Ahafo forest land. In this region, researchers used Sherman collapsible live traps to survey nine small mammal species (e.g. H. alleni, P. tullbergi, H. trivirgatus) to explore whether there were any differences in fauna biodiversity between mining-impacted areas and areas without significant impacts from mining. After recording several captures in both areas, it was concluded that mining-impacted forests had lower levels of fauna biodiversity than their counterparts, indicating that mining had harmed local animal biodiversity. This scenario exemplifies the profound ecological repercussions of mining on fauna biodiversity and highlights the urgent need for implementation of conservation strategies to mitigate the impacts of mineral extraction on local wildlife populations.
Animals can be poisoned directly by mine products and residuals. Bioaccumulation in the plants or the smaller organisms they eat can also lead to poisoning: in certain areas horses, goats and sheep are exposed to potentially toxic concentrations of copper and lead in grass. There are fewer ant species in soil containing high copper levels in the vicinity of a copper mine. If fewer ants are found, chances are higher that other organisms living in the surrounding landscape are strongly affected by the high copper levels as well. Ants are good indicators of whether an area is habitable, as they live directly in the soil and are thus sensitive to environmental disruptions.
Microorganisms
Because of their small size, microorganisms are extremely sensitive to environmental modifications such as changes in pH, temperature, or chemical concentrations. For example, the presence of arsenic and antimony in soils has led to a reduction in total soil bacteria. As with water, a small change in soil pH can provoke the remobilization of contaminants, in addition to the direct impact on pH-sensitive organisms.
Microorganisms have a wide variety of genes among their total population, so there is a greater chance of survival of the species due to the resistance or tolerance genes that some colonies possess, as long as modifications are not too extreme. Nevertheless, survival under these conditions implies a large loss of genetic diversity, resulting in a reduced potential for adaptation to subsequent changes. Undeveloped soil in heavy metal contaminated areas could be a sign of reduced activity by soil microfauna and microflora, indicating a reduced number of individuals or diminished activity. Twenty years after disturbance, even in rehabilitation areas, microbial biomass is still greatly reduced compared to undisturbed habitat.
Arbuscular mycorrhizal fungi are especially sensitive to the presence of chemicals, and the soil is sometimes so disturbed that they are no longer able to associate with root plants. However, some fungi possess contaminant accumulation capacity and a soil-cleaning ability, changing the bioavailability of pollutants; this can protect plants from potential damage that could be caused by the chemicals. Their presence in contaminated sites could prevent loss of biodiversity due to mine-waste contamination, or allow for bioremediation, the removal of undesired chemicals from contaminated soils. Conversely, some microbes can deteriorate the environment, which can lead to elevated SO4 in the water and can also increase microbial production of hydrogen sulfide, a toxin for many aquatic plants and organisms.
Waste materials
Tailings
Mining processes produce an excess of waste materials known as tailings: the materials that are left over after the valuable fraction of the ore has been separated from the uneconomic fraction. These large amounts of waste are a mixture of water, sand, clay, and residual bitumen. Tailings are commonly stored in tailings ponds made from naturally existing valleys or large engineered dams and dyke systems. Tailings ponds can remain part of an active mine operation for 30–40 years. This allows tailings deposits to settle, and provides storage and water recycling.
Tailings have great potential to damage the environment by releasing toxic metals through acid mine drainage or by damaging aquatic wildlife; both require constant monitoring and treatment of water passing through the dam. However, the greatest danger of tailings ponds is dam failure. Tailings ponds are typically formed by locally derived fills (soil, coarse waste, or overburden from mining operations and tailings), and the dam walls are often built up over time to sustain greater amounts of tailings. The lack of regulation of design criteria for tailings ponds is what puts the environment at risk of flooding from the tailings ponds.
Some heavy metals that accumulate in tailings, such as thorium, are linked to increased cancer risk. The tailings around China's Bayan Obo mine contain 70,000 tons of thorium. Contaminated groundwater is moving towards the Yellow River due to the absence of an impermeable lining for the tailings dam.
Spoil tip
A spoil tip is a pile of accumulated overburden that was removed from a mine site during the extraction of coal or ore. These waste materials are composed of ordinary soil and rocks, with the potential to be contaminated with chemical waste. Spoil is distinct from tailings, which is the processed material that remains after the valuable components have been extracted from ore. Spoil tip combustion can happen fairly commonly, as older spoil tips tend to be loose and material tips over the edge of the pile. As spoil often contains carbonaceous material that is highly combustible, it can be accidentally ignited by the lighting of a fire or the tipping of hot ashes. Spoil tips can catch fire and be left burning underground or within the spoil piles for many years.
Effects of mine pollution on humans
Humans are also affected by mining. Many diseases can result from the pollutants that are released into the air and water during the mining process. For example, during smelting operations large quantities of air pollutants, such as suspended particulate matter, SOx, arsenic particles and cadmium, are emitted. Metals are usually emitted into the air as particulates as well. There are also many occupational health hazards that miners face. Many miners suffer from various respiratory and skin diseases such as asbestosis, silicosis, or black lung disease.
Furthermore, one of the biggest impacts of mining on humans comes from the pollutants that end up in water, resulting in poor water quality. About 30% of the world has access to renewable freshwater, which is used by industries that generate large amounts of waste containing chemicals in various concentrations that are deposited into the freshwater. Active chemicals in the water can pose a great risk to human health, as they can accumulate in the water and in fish. A study of the abandoned Dabaoshan mine in China found that, although the mine had been inactive for many years, the accumulation of metals in water and soil remained a major concern for neighboring villages. Due to the lack of proper care of waste materials, a mortality rate of 56% is estimated within the regions around this mining site, and many people have been diagnosed with esophageal cancer and liver cancer. The mine continues to have negative impacts on human health through crops, and it is evident that more clean-up measures are needed in the surrounding areas.
The long-term effects associated with air pollution are numerous, including chronic asthma, pulmonary insufficiency, and cardiovascular mortality. According to a Swedish cohort study, diabetes seems to be induced after long-term air pollution exposure. Furthermore, air pollution seems to have various malign health effects in early human life, such as respiratory, cardiovascular, mental, and perinatal disorders, leading to infant mortality or chronic disease in adult age. Air pollution mainly affects those living in large urban areas, where road emissions contribute the most to the degradation of air quality. There is also a danger from industrial accidents, where the spread of a toxic fog can be lethal to the populations of the surrounding areas. The dispersion of pollutants is determined by many parameters, most notably atmospheric stability and wind.
Deforestation
With open-cast mining the overburden, which may be covered in forest, must be removed before the mining can commence. Although the deforestation due to mining may be small compared to the total amount, it may lead to species extinction if there is a high level of local endemism.
The lifecycle of coal mining is one of the dirtiest cycles that causes deforestation, due to the amount of toxins and heavy metals that are released into the soil and water environment. Although the effects of coal mining take a long time to impact the environment, the burning of coal and coal fires, which can burn for decades, can release fly ash and increase greenhouse gases. Strip mining in particular can destroy landscapes, forests, and wildlife habitats near the sites. Trees, plants and topsoil are cleared from the mining area, and this can lead to destruction of agricultural land. Furthermore, when rainfall occurs, the ash and other materials are washed into streams, which can harm fish. These impacts can still occur after mining at the site has ended, disturbing the land, and restoration of the deforested area takes longer than usual because the quality of the land is degraded. Legal mining, albeit more environmentally controlled than illegal mining, contributes a substantial percentage of the deforestation of tropical countries.
Open-pit nickel mining has led to environmental degradation and pollution in developing countries such as the Philippines and Indonesia. In 2024, nickel mining and processing was one of the main causes of deforestation in Indonesia. Open-pit cobalt mining has led to deforestation and habitat destruction in the Democratic Republic of Congo.
Impacts associated with specific types of mining
Coal mining
The environmental impacts of the coal industry include not only air pollution, water management and land use issues but also severe health effects from the burning of coal. Air pollution from coal includes growing quantities of toxins such as mercury, lead, sulfur dioxide, nitrogen oxides and other heavy metals. This causes health issues involving breathing difficulties and impacts wildlife in the surrounding areas that needs clean air to survive. The future of air pollution remains unclear, as the Environmental Protection Agency has tried to prevent some emissions but does not have control measures in place for all coal plants. Water pollution is another form of damage caused by the coal mining process: ash from coal is usually carried away in rainwater, which streams into larger bodies of water. It can take up to 10 years to clean water sites that have coal waste, and the potential for damaging clean water only makes filtration much more difficult.
Deep sea mining
Deep sea mining for manganese nodules and other resources has led to concerns from marine scientists and environmental groups over the impact on fragile deep sea ecosystems. Knowledge of potential impacts is limited due to limited research on deep sea life.
Lithium mining
Lithium does not occur naturally as a metal since it is highly reactive, but is found combined in small amounts in rocks, soils, and bodies of water. During the extraction of lithium from rock, the material can be exposed to air, water, and soil. Furthermore, lithium-containing batteries are in demand globally for manufacturing, and the toxic chemicals produced in lithium processing can negatively impact humans, soils, and marine species. Lithium production increased by 25% between 2000 and 2007 for use in batteries, and the major sources of lithium are found in brine lake deposits. Lithium is found in and extracted from 150 minerals, clays, numerous brines, and sea water, and although lithium extraction from rock is twice as expensive as lithium extracted from brines, the average brine deposit is larger than the average lithium hard-rock deposit.
Phosphate mining
Phosphate-bearing rocks are mined to produce phosphorus, an essential element used in industry and agriculture. The extraction process includes removal of surface vegetation, thereby exposing phosphorus-bearing rock to the terrestrial ecosystem, damaging the land area and resulting in ground erosion. Phosphate ore mining releases wastes and tailings, resulting in human exposure to particulate matter from contaminated tailings via inhalation; the toxic elements that impact human health include cadmium, chromium, zinc, copper and lead (Cd, Cr, Zn, Cu and Pb).
Oil shale mining
Oil shale is a sedimentary rock containing kerogen from which hydrocarbons can be produced. Mining oil shale impacts the environment; it can damage land and ecosystems. The thermal heating and combustion generate large amounts of material and waste, including carbon dioxide and other greenhouse gases. Many environmentalists are against the production and use of oil shale because it creates large amounts of greenhouse gases. Alongside air pollution, water contamination is a major factor, mainly because oil shale processing involves oxygen and hydrocarbons. Oil shale mining and processing with chemical products also change the landscape at mining sites. Ground movement within areas of underground mining is a long-term problem because it causes non-stabilized areas. Underground mining creates new formations that can be suitable for some plant growth, but rehabilitation could be required.
Mountaintop removal mining
Mountaintop removal mining (MTR) occurs when trees are cut down and coal seams are removed by machines and explosives. As a result, the landscape is more susceptible to flash flooding and to potential pollution from the chemicals used. The critical zone disturbed by mountaintop removal degrades stream water quality flowing towards marine and terrestrial ecosystems, and thus mountaintop removal mining affects the hydrologic response and long-term health of watersheds.
Sand mining
Sand mining and gravel mining create large pits and fissures in the earth's surface. At times, mining can extend so deeply that it affects groundwater, springs, underground wells, and the water table. The major threats of sand mining activities include channel bed degradation, changes to river formation, and erosion. Sand mining has resulted in an increase of water turbidity across the majority of the offshore area of Lake Hongze, the fourth largest freshwater lake in China.
Mitigation
Various mitigation techniques exist to reduce the impacts of mining on the environment; however, the technique deployed is often dependent on the type of environment and severity of the impact. To ensure completion of reclamation, or restoring mine land for future use, many governments and regulatory authorities around the world require that mining companies post a bond to be held in escrow until productivity of reclaimed land has been convincingly demonstrated, although if cleanup procedures are more expensive than the size of the bond, the bond may simply be abandoned. Furthermore, effective mitigation is highly dependent on government policy, economic resources, and the implementation of new technology. Since 1978 the mining industry has reclaimed more than 2 million acres (8,000 km2) of land in the United States alone. This reclaimed land has renewed vegetation and wildlife in previous mining lands and can even be used for farming and ranching.
Specific sites
Tui mine in New Zealand
Stockton mine in New Zealand
Northland Pyrite Mine in Temagami, Ontario, Canada
Sherman Mine in Temagami, Ontario, Canada
Ok Tedi Mine in Western Province, Papua New Guinea
The Berkeley Pit
Wheal Jane Mines
See also
Environmental impact of deep sea mining
Environmental effects of placer mining
Environmental impact of gold mining
Environmental impact of zinc mining
List of environmental issues
Appalachian Voices, a lobby group in the United States
Mining
Natural resource
References
Pollution
Mining and the environment | 0.775234 | 0.995371 | 0.771645 |
Analytic hierarchy process – car example | This is a worked-through example showing the use of the analytic hierarchy process (AHP) in a practical decision situation.
See Analytic hierarchy process#Practical examples for context for this example.
Overview
AHP stands for analytic hierarchy process – a multi-criteria decision-making (MCDM) method. In AHP, values like price, weight, or area, or even subjective opinions such as feelings, preferences, or satisfaction, can be translated into measurable numeric relations. The core of AHP is the comparison of pairs instead of sorting (ranking), voting (e.g. assigning points) or the free assignment of priorities.
Teachers and users of the AHP know that the best way to understand it is to work through an example. The example below shows how a broad range of considerations can be managed through the use of the analytic hierarchy process.
The decision at hand requires a reasonably complex hierarchy to describe. It involves factors from the tangible and precisely measurable (purchase price, passenger capacity, cargo capacity), through the tangible but difficult to measure (maintenance costs, fuel costs, resale value) to the intangible and totally subjective (style).
In the end, there is a clear decision whose development can be seen, traced, and understood by all concerned.
A practical example: choosing an automobile
In an AHP hierarchy for a family buying a vehicle, the goal might be to choose the best car for the Jones family. The family might decide to consider cost, safety, style, and capacity as the criteria for making their decision. They might subdivide the cost criterion into purchase price, fuel costs, maintenance costs, and resale value. They might separate Capacity into cargo capacity and passenger capacity. The family, which for personal reasons always buys Hondas, might decide to consider as alternatives the Accord Sedan, Accord Hybrid Sedan, Pilot SUV, CR-V SUV, Element SUV, and Odyssey Minivan.
Constructing the hierarchy
The Jones' hierarchy could be diagrammed as shown below:
As they build their hierarchy, the buyer should investigate the values or measurements of the different elements that make it up. If there are published safety ratings, for example, or manufacturer's specs for cargo capacity, they should be gathered as part of the process. This information will be needed later, when the criteria and alternatives are evaluated.
Note that the measurements for some criteria, such as purchase price, can be stated with absolute certainty. Others, such as resale value, must be estimated, so must be stated with somewhat less confidence. Still others, such as style, are really in the eye of the beholder and are hard to state quantitatively at all. The AHP can accommodate all these types of criteria, even when they are present in a single problem.
Also note that the structure of the vehicle-buying hierarchy might be different for other families (ones who don't limit themselves to Hondas, or who care nothing about style, or who drive far less in a year, etc.). It would definitely be different for a 25-year-old playboy who doesn't care how much his cars cost, knows he will never wreck one, and is intensely interested in speed, handling, and the numerous aspects of style.
Pairwise comparing the criteria with respect to the goal
To incorporate their judgments about the various elements in the hierarchy, decision makers compare the elements two by two. How they are compared will be shown later on. Right now, let's see which items are compared. Our example will begin with the four criteria in the second row of the hierarchy, though we could begin elsewhere if we wanted to. The criteria will be compared as to how important they are to the decision makers, with respect to the goal.
Each pair of items in this row will be compared; there are a total of six pairs (cost/safety, cost/style, cost/capacity, safety/style, safety/capacity, and style/capacity). You can use the diagram below to see these pairs more clearly.
In the next row, there is a group of four subcriteria under the cost criterion, and a group of two subcriteria under the capacity criterion.
In the Cost subgroup, each pair of subcriteria will be compared regarding their importance with respect to the Cost criterion. (As always, their importance is judged by the decision makers.) Once again, there are six pairs to compare (Purchase Price/Fuel Costs, Purchase Price/Maintenance Costs, Purchase Price/Resale Value, Fuel Costs/Maintenance Costs, Fuel Costs/Resale Value, and Maintenance Costs/Resale Value).
In the Capacity subgroup, there is only one pair of subcriteria. They are compared as to how important they are with respect to the Capacity criterion.
Things change a bit when we get to the alternatives row. Here, the cars in each group of alternatives are compared pair-by-pair with respect to the covering criterion of the group, which is the node directly above them in the hierarchy. What we are doing here is evaluating the models under consideration with respect to Purchase Price, then with respect to fuel costs, then maintenance costs, resale value, safety, style, cargo capacity, and passenger capacity. Because there are six cars in the group of alternatives, there will be fifteen comparisons for each of the eight covering criteria.
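As a quick arithmetic check on those counts, the number of pairwise comparisons among n items is n(n-1)/2. A minimal Python sketch (the variable names are just for illustration):

    # Number of pairwise comparisons among n items is n*(n-1)/2.
    def pair_count(n: int) -> int:
        return n * (n - 1) // 2

    criteria = 4            # cost, safety, style, capacity
    cars = 6                # the six Honda alternatives
    covering_criteria = 8   # purchase price, fuel costs, ..., passenger capacity

    print(pair_count(criteria))                  # 6 comparisons among the criteria
    print(pair_count(cars))                      # 15 comparisons per covering criterion
    print(covering_criteria * pair_count(cars))  # 120 comparisons of alternatives in all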
When the pairwise comparisons are as numerous as those in our example, specialized AHP software can help in making them quickly and efficiently. We will assume that the Jones family has access to such software, and that it allows the opinions of various family members to be combined into an overall opinion for the group.
The family's first pairwise comparison is cost vs. safety. They need to decide which of these is more important in choosing the best car for them all. This can be a difficult decision. On the one hand, "You can't put a price on safety. Nothing is more important than the life of a family member." But on the other hand, the family has a limited amount of money to spend, no member has ever had a major accident, and Hondas are known as very safe cars. In spite of the difficulty in comparing money to potential injury or death, the Jones family needs to determine its judgment about cost vs. safety in the car they are about to buy. They have to say which criterion is more important to them in reaching their goal, and how much more important it is (to them) than the other one. In making this judgment, they should remember that since the AHP is a flexible process, they can change their judgment later on.
You can imagine that there might be heated family discussion about cost vs. safety. It is the nature of the AHP to promote focused discussions about difficult aspects of the decisions to which it is applied. Such discussions encourage the communication of differences, which in turn encourages cooperation, compromise, and agreement among the members of the group.
Let's say that the family decides that in this case, cost is moderately more important to them than safety. The software requires them to express this judgment by entering a number. They can use this table to determine it; in this case they would enter a 3 in favor of cost:
Continuing our example, let's say they make the following judgments about all the comparisons of criteria, entering them into the software as numbers gotten from the table: as stated, cost is moderately important (3) over safety; also, cost is very strongly important (7) over style, and is moderately important (3) over capacity. Safety is extremely more important (9) than style, and of equal importance (1) to capacity. Capacity is very strongly important (7) over style.
We could show those judgments in a table like this:
The AHP software uses mathematical calculations to convert these judgments to priorities for each of the four criteria. The details of the calculations are beyond the scope of this article, but are readily available elsewhere. The software also calculates a consistency ratio that expresses the internal consistency of the judgments that have been entered.
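Although the article leaves the details to the software, the core calculation is compact enough to sketch. The following Python fragment is a minimal illustration, not the routine of any particular AHP package: it builds the judgment matrix from the six comparisons above, derives the priorities as the normalized principal eigenvector, and computes Saaty's consistency ratio using the standard random index of 0.90 for a 4x4 matrix. The figures in the comments are approximate.

    import numpy as np

    # Pairwise comparison matrix for the criteria, ordered [cost, safety, style, capacity],
    # filled in from the family's judgments: cost 3x safety, cost 7x style, cost 3x capacity,
    # safety 9x style, safety equal to capacity, capacity 7x style.
    A = np.array([
        [1,   3,   7, 3],
        [1/3, 1,   9, 1],
        [1/7, 1/9, 1, 1/7],
        [1/3, 1,   7, 1],
    ])

    # Priorities = normalized principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    print(dict(zip(["cost", "safety", "style", "capacity"], w.round(3))))
    # cost ends up with roughly half the weight; style falls to about 0.04

    # Consistency: CI = (lambda_max - n) / (n - 1), CR = CI / RI, with RI = 0.90 for n = 4.
    n = A.shape[0]
    lambda_max = eigvals.real[k]
    CR = (lambda_max - n) / (n - 1) / 0.90
    print(round(CR, 3))  # roughly 0.07, below the usual 0.10 acceptance threshold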
In this case the judgments showed acceptable consistency, and the software used the family's inputs to assign these new priorities to the criteria:
You can duplicate this analysis at this online demonstration site; use the Line by Line Method by clicking its button, and don't forget to enter a negative number if the Criterion on the left is less important than the one on the right. If you are having trouble, click here for help. IMPORTANT: The demo site is designed for convenience, not accuracy. The priorities it returns may differ somewhat from those returned by rigorous AHP calculations. Nevertheless, it is useful in showing the mechanics of the pairwise comparison process. Once you are comfortable with the demo, you can experiment by entering your own judgments for the criteria in question. If your judgments are different from those of the Jones family, your priorities will possibly be quite different from theirs.
Look again at the above diagram and note that the Subcriteria still show their default priorities. This is because the decision makers haven't entered any judgments about them. So next on the family's agenda is to pairwise compare the four Subcriteria under Cost, then the two Subcriteria under Capacity. They will compare them following the same pattern as they did for the Criteria.
We could imagine the result of their comparisons yielding the priorities shown here:
At this point, all the comparisons for Criteria and Subcriteria have been made, and the AHP software has derived the local priorities for each group at each level. One more step can be made here. We know how much the priority of each Criterion contributes to the priority of the Goal. Since we also know how much the priority of each Subcriterion contributes to the priority of its parent, we (and the AHP software) can calculate the global priority of each Subcriterion. That will show us the priority of each Subcriterion with respect to the Goal. The global priorities throughout the hierarchy will add up to 1.000, like this:
Based on the judgments entered by the family, the AHP has derived the priorities for the factors against which each of the six cars will be compared. They are shown, from highest to lowest, in the table below. Notice that Cost and Capacity will not be evaluated directly, but that each of their Subcriteria will be evaluated on its own:
The next step is to evaluate each of the cars with respect to these factors. In the technical language of AHP, we will pairwise compare the alternatives with respect to their covering criteria.
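Before moving on, the global-priority arithmetic described above can be sketched in a few lines: the global priority of each covering criterion is its local priority multiplied by the global priority of its parent, and criteria without subcriteria keep their own weight. The local priorities below are placeholder values chosen only to make the arithmetic visible, not the family's actual figures.

    # Global priority of a node = its local priority x the global priority of its parent.
    # Placeholder local priorities for illustration only.
    criteria = {"cost": 0.50, "safety": 0.25, "style": 0.05, "capacity": 0.20}
    subcriteria_local = {
        "cost": {"purchase price": 0.50, "fuel costs": 0.25,
                 "maintenance costs": 0.10, "resale value": 0.15},
        "capacity": {"cargo capacity": 0.20, "passenger capacity": 0.80},
    }

    global_priorities = {}
    for crit, weight in criteria.items():
        subs = subcriteria_local.get(crit)
        if subs is None:
            global_priorities[crit] = weight          # safety, style: no subcriteria
        else:
            for sub, local in subs.items():
                global_priorities[sub] = local * weight

    print(global_priorities)
    print(round(sum(global_priorities.values()), 3))  # 1.0, the priority of the goal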
Pairwise comparing the Alternatives with respect to the Criteria
The family can evaluate alternatives against their covering criteria in any order they choose. In this case, they choose the order of decreasing priority of the covering criteria. That means Purchase Price first.
Purchase price
The family has established a budget of $25,000 for buying the new car, but they are willing to consider alternatives whose price exceeds their budget. To refresh your mind, here are the six cars they are considering—in AHP terminology, the six alternatives—along with their purchase prices:
Knowing that they will have a lot of pairwise comparisons to make, the family prepared this worksheet to help them. It shows comparative information about the price and budget status of each pair of cars:
Now, what do they do?
First they might compare the purchase price of the Accord Sedan to that of the Accord Hybrid. If they stick purely to arithmetic, they could say that the Sedan is favored by 1.5, since the Hybrid's price is about 1.5 times that of the Sedan, and a lower price is better. They could follow that pattern through all 15 of the comparisons, and it would give a mathematically consistent set of comparisons.
But merely entering the numbers wouldn't take into account things like the $25,000 budget, or the value to the family of saving, say, $5,000 vs. $1,000 on a purchase. Things like that can be highly important in making decisions, and their importance can vary greatly with the situation and the people involved. Some families might never want to exceed their budget. Others might be willing to exceed it by a few dollars or a few per cent, but very unwilling to go further. Still others might not care much if they spend double their budget on the car. Because the AHP allows decision-makers to enter their judgments about the data, rather than just the data themselves, it can deal with all these situations and more.
Let's say that the Jones family is willing to exceed their budget by up to $1,000, but anything more is unacceptable. They "never say never," however—budget-busting cars will score as low as possible on the purchase price, but won't be removed from the list of alternatives. And for cars priced under budget, a $1,000 difference in price doesn't matter much to the Joneses, but a $5,000 difference is strongly important, and a $10,000 difference is extreme. They might enter the following intensities into the AHP software (throughout this example, the judgments of decision-makers are shaded in green):
You can follow the family's thinking by looking at the rationale for each judgment. Whenever a car that is under budget is compared with one that is over budget by more than $1,000, the former is extremely preferred. For cars under budget, a $1,000 less expensive car is slightly preferred, a $5,000 one is strongly preferred, and a $6,000 one is even more strongly preferred. When both cars are well over budget (comparison #6), they are equally preferred, which is to say they are equally undesirable. Because budget status and absolute price difference are enough to make each comparison, the ratio of prices never enters into the judgments.
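One way to picture these rules is as a small judgment function. The sketch below is an assumption-laden reading of the paragraph above, not the family's actual table: the $25,000 budget and $1,000 tolerance come from the text, but the specific Fundamental Scale values assigned to each price gap are guesses for illustration.

    BUDGET = 25_000
    TOLERANCE = 1_000   # the family will go up to $1,000 over budget

    def price_judgment(price_a: int, price_b: int) -> float:
        """AHP intensity in favor of car A; values below 1 favor car B. (Illustrative only.)"""
        over_a = price_a > BUDGET + TOLERANCE
        over_b = price_b > BUDGET + TOLERANCE
        if over_a and over_b:
            return 1.0                          # both budget-busters: equally undesirable
        if over_a != over_b:
            return 1 / 9 if over_a else 9.0     # the under-budget car is extremely preferred
        diff = abs(price_a - price_b)
        if diff >= 10_000:
            intensity = 9.0                     # "extreme"
        elif diff >= 6_000:
            intensity = 7.0                     # "even more strongly" (assumed value)
        elif diff >= 5_000:
            intensity = 5.0                     # "strongly important"
        elif diff >= 1_000:
            intensity = 2.0                     # "slightly preferred" (assumed value)
        else:
            intensity = 1.0                     # under $1,000: doesn't matter much
        return intensity if price_a < price_b else 1 / intensity

    # Illustrative prices: a $20,360 sedan vs. a hybrid at roughly 1.5 times that price.
    print(price_judgment(20_360, 31_000))       # 9.0: the sedan is extremely preferred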
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Purchase Price:
The local priorities show how much the purchase price of each model contributes to the subcriterion of Purchase Price. The global priorities show how much the purchase price of each model contributes to the overall goal of choosing the best car for the Jones family.
Safety
Comparing the alternatives on the basis of Safety is much less objective than comparing them on Purchase Price. Purchase prices are measured in dollars and can be determined to the penny. People can easily agree on the meaning of a $20,360 purchase price, and can rationally compare it to all the other prices, using methods and calculations that are understood and accepted by all.
But "safety" eludes our efforts even to define it in an objective way. Not only that, but the objective measurements of safety are limited and not readily comparable from car to car.
The government conducts objective crash tests, but they are incomplete measures of the "safety" of a given car. Also, the crash tests only compare the members of a single class of cars, such as Midsize Cars or Minivans. Is a midsize car with 100% 5-star safety ratings equally as safe as a minivan with the same ratings? It's not exactly clear. And when evaluating minivans that have 5-star ratings in all categories but one, who can say if the one with four stars for "Frontal Impact, Driver's Side" is safer than the one whose four stars are in "Side Impact, Rear Occupant?" There's really no way to tell.
In spite of these difficulties, the AHP provides a rational way to evaluate the relative safety of different cars.
Let's assume that the Jones family has researched the Safety of the six Hondas they are considering. They will have found that all of them are among the safest cars on the road. All six are "Top Safety Picks" of the IIHS safety standards organization. All of them do very well in the crash testing programs of the National Highway Traffic Safety Administration. But there are differences between them, and the family wants to factor the differences into their decision. "Your car can never be too safe."
The worksheet below includes the data that the family has decided to evaluate. They believe that a heavier car is a safer car, so they've documented the curb weights of their alternatives. They have investigated the results of government crash tests, and they've summarized the results on the worksheet:
The family will consider everything in the worksheet as they compare their alternatives. They are not safety experts, but they can apply their life experience to making decisions about the safety ratings. They all feel safer when driving a car that is significantly heavier than another one. One family member has seen two gruesome rollover accidents, and is terrified of a vehicle rolling over with her inside. She insists that the family car has the highest possible Rollover Rating.
Here are the weights that the Jones family enters for the alternatives regarding Safety (throughout this example, orange shading is used for judgments where A is favored; yellow shading is used for B):
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Safety:
The local priorities show how much the safety of each model contributes to the Criterion of Safety. The global priorities show how much the Safety of each model contributes to the overall goal of choosing the best car for the Jones family.
Passenger capacity
This characteristic is easy to evaluate. The alternatives can carry either four or five or eight passengers. Here are the figures:
The family has decided that four is barely enough, five is perfect for their needs, and eight is just a little bit better than five. Here are their judgments:
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Passenger Capacity:
The local priorities show how much the passenger capacity of each model contributes to the Subcriterion of Passenger Capacity. The global priorities show how much the passenger capacity of each model contributes to the overall goal of choosing the best car for the Jones family.
Fuel costs
After careful consideration, the Jones family believes that no matter which car they buy, they will drive it the same number of miles per year. In other words, there is nothing about any of the alternatives, including the price of fuel or the car's fuel consumption per mile, that would cause it to be driven more or fewer miles than any other alternative. They also believe that the government MPG rating is an accurate basis on which to compare the fuel consumption of the cars. Here is a worksheet showing the government MPG ratings of the Jones family alternatives:
They believe, therefore, that the fuel cost of any alternative vs. any other depends exclusively on the MPG ratings of the two cars. So the pairwise judgments they enter for any two cars will be inversely proportional to their MPG ratings. In other words, if car A has exactly twice the MPG rating of car B, the Fuel Cost for car B will be exactly twice that of car A. This table shows the judgments they will enter for all the comparisons:
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Fuel Cost:
The local priorities show how much the fuel cost of each model contributes to the subcriterion of Fuel Costs. The global priorities show how much the fuel cost of each model contributes to the overall goal of choosing the best car for the Jones family.
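Because the fuel-cost judgments are defined as inversely proportional to fuel cost, that is, proportional to the ratio of MPG ratings, the resulting comparison matrix is perfectly consistent and the derived priorities reduce to the normalized MPG figures themselves. A short sketch follows; the MPG numbers are placeholders, since the worksheet is not reproduced here.

    import numpy as np

    # Placeholder MPG figures for the six alternatives (not the article's worksheet values).
    mpg = {"Accord": 24, "Accord Hybrid": 28, "Pilot": 18,
           "CR-V": 23, "Element": 21, "Odyssey": 19}
    names = list(mpg)
    v = np.array([mpg[n] for n in names], dtype=float)

    # Judgment in favor of car i over car j on Fuel Costs = mpg_i / mpg_j.
    A = v[:, None] / v[None, :]

    # A ratio matrix built this way is perfectly consistent, so the priorities
    # are simply the MPG figures normalized to sum to 1.
    priorities = v / v.sum()
    print(dict(zip(names, priorities.round(3))))

    # Sanity check: the principal eigenvector of A gives the same result.
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    print(np.allclose(w / w.sum(), priorities))   # True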
Resale value
When the family researched Resale Value, they learned that lending institutions keep statistics on the market value of different models after various time periods. These estimated "residual values" are used for leasing, and are typically based on a set limit of miles driven per year. Actual residual values depend on the condition of the car, and can vary with market conditions.
The Joneses are going to buy their car, not lease it, and they expect to drive it more than 12,000 miles per year, but they agree among themselves that the leasing figures are a good basis on which to compare the alternatives under consideration. Their bank gave them this table showing the residual value of each alternative after four years:
As they look at the table of residual values, they see that the residual value of a CR-V is 25% higher than that of a Pilot (0.55 is 125% of 0.44). They reason that such a greatly higher residual value is an indication of a better or more desirable car, so they want to place a premium on cars with relatively high residual value. After some thought and discussion, they decide that, when comparing residual values, they want to look at the higher one as a percentage of the lower, and assign their intensities on that basis. Where one model has a residual value that is less than 105% of another, they consider the residual values as equal for all practical purposes. Where one model has a residual value that is 125% of the residual value of another, they consider the former model as quite strongly more important, desirable, valuable, etc., as indicated by its much higher resale value. With a bit more thought and discussion, they decide to make their judgments on this basis:
They realize that not every family would do it this way, but this way seems best for them. This table shows the judgments they will enter for their Resale Value comparisons:
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Resale Value:
The local priorities show how much the resale value of each model contributes to the Subcriterion of Resale Value. The global priorities show how much the resale value of each model contributes to the overall goal of choosing the best car for the Jones family.
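The ratio-based rule the family settled on can also be sketched as a function. Only two anchor points are stated above (below 105 percent counts as equal, and about 125 percent counts as quite strongly preferred); the intermediate scale values in this sketch are assumptions.

    def resale_intensity(residual_a: float, residual_b: float) -> float:
        """AHP intensity in favor of A based on residual values; values below 1 favor B."""
        hi, lo = max(residual_a, residual_b), min(residual_a, residual_b)
        ratio = round(hi / lo, 3)     # rounding guards against floating-point noise
        if ratio < 1.05:
            intensity = 1.0           # practically equal
        elif ratio < 1.15:
            intensity = 3.0           # moderately preferred (assumed)
        elif ratio < 1.25:
            intensity = 4.0           # between moderate and strong (assumed)
        else:
            intensity = 5.0           # "quite strongly" preferred
        return intensity if residual_a >= residual_b else 1 / intensity

    # Example from the text: CR-V at 0.55 vs. Pilot at 0.44, a ratio of 125%.
    print(resale_intensity(0.55, 0.44))   # 5.0 in favor of the CR-V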
Maintenance costs
The Jones family researched maintenance costs for the cars under consideration, but they didn't find any hard figures. The closest they got was Consumer Reports magazine, which publishes 17 separate maintenance ratings for every car on the market. Their Hondas ranked very well, with all ratings "Much Better Than Average," except for a few on the Pilot and Odyssey. The Pilot got "Better Than Average" for its audio system and the user rating, and "Average" for body integrity. The Odyssey got "Better Than Average" for body hardware and power equipment, and "Average" for brakes, body integrity, and user rating.
The Joneses also asked their favorite mechanic to evaluate the maintenance costs for their six cars. Using tire prices and mileage estimates, he came up with figures for tire costs over an extended period of driving. He didn't have figures for brake costs, but he said they'd be about twice as much for the SUVs and minivans as they would for the sedans. He also cautioned them that the battery in the Accord Hybrid was an expensive repair item, and that the engine placement on the Odyssey made it a more expensive car to work on.
The family created this worksheet to keep track of all their information about maintenance costs:
Even though every column on the worksheet contains a different type of information, the Joneses can use it to make reasonable, rational judgments about Maintenance Costs. Here are the judgments they will enter:
When the judgments shown above are entered, the AHP software returns the following priorities for the six alternatives with respect to Maintenance Costs:
The local priorities show how much the projected maintenance cost of each model contributes to the subcriterion of Maintenance Costs. The global priorities show how much the maintenance cost of each model contributes to the overall goal of choosing the best car for the Jones family.
Style
The family decided that Style is important to them, but how can they determine the "style" of each of the six alternatives? "Style" is a pretty subjective concept—it can truly be said that "style is in the eye of the beholder." Yet through the method of pairwise comparison, the AHP gives the Jones family a way to evaluate the "style" of the cars they are considering.
Honda's web site provides photos of each of the alternatives. It also has videos, commercials, rotatable 360° views, color chips, and more, all available to help family members evaluate the Style of each car. The family can compare their alternatives two-by-two on Style, using the tools on the web site to help them make their judgments. They did just that, and here is the record of their judgments:
When the judgments shown above are entered, the AHP software returns the following local priorities for the six alternatives with respect to Style:
The local priorities show how much the style of each model contributes to the Style Criterion. The global priorities show how much the Style of each model contributes to the overall goal of choosing the best car for the Jones family.
Cargo capacity
The Cargo Capacity of each alternative, measured in cubic feet, is listed in the manufacturer's specifications for each vehicle. The Joneses don't really know how it is calculated, but they trust that it's a good indication of how much cargo can be packed into a vehicle. This worksheet shows the cargo capacities of the Jones' alternatives:
Cargo capacities for the alternatives vary from 14 cubic feet up to more than ten times that figure. If they wanted to, the Jones family could enter these capacities directly into the AHP software. But that would mean that, when considering Cargo Capacity, the most capacious vehicle would be over ten times as desirable as one with only 14 cubic feet. Given the car's use as a family vehicle, that doesn't seem quite right. So the family looks at the available capacities and determines that the smallest trunk is perfectly fine for their needs, that something about five times larger is slightly better, and that something about ten times larger is moderately so. These judgments correspond to values of 1, 2, and 3 on the AHP's Fundamental Scale.
Here are the judgments they would enter into the AHP software:
When the judgments shown above are entered, the AHP software returns the following local priorities for the six alternatives with respect to Cargo Capacity:
The local priorities show how much the cargo capacity of each model contributes to the subcriterion of Cargo Capacity. The global priorities show how much the cargo capacity of each model contributes to the overall goal of choosing the best car for the Jones family.
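The arithmetic behind local priorities like these can be illustrated with a small sketch. The following Python fragment is not the AHP software the family used; it is a minimal sketch of the widely used row geometric-mean approximation for deriving a priority vector from a reciprocal matrix of Fundamental Scale judgments, and the three-alternative judgment matrix in it is hypothetical, chosen only to mirror the 1–2–3 pattern described above.

# Minimal sketch: derive AHP priorities from a reciprocal judgment matrix
# using the row geometric-mean approximation. The matrix values are
# hypothetical, in the spirit of the cargo-capacity judgments above.

def ahp_priorities(matrix):
    n = len(matrix)
    geo_means = []
    for row in matrix:
        product = 1.0
        for value in row:
            product *= value
        geo_means.append(product ** (1.0 / n))
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Rows/columns: smallest trunk, ~5x larger, ~10x larger (Fundamental Scale 1-3).
judgments = [
    [1.0, 1 / 2, 1 / 3],
    [2.0, 1.0, 1 / 2],
    [3.0, 2.0, 1.0],
]

print(ahp_priorities(judgments))  # local priorities, summing to 1.0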
Making the decision
In the end, the AHP software arranges and totals the global priorities for each of the alternatives. Their grand total is 1.000, which is identical to the priority of the goal. Each alternative has a global priority corresponding to its "fit" to all the family's judgments about all those aspects of Cost, Safety, Style and Capacity. Here is a summary of the global priorities of the alternatives:
The Odyssey Minivan, with a global priority of 0.220, is the alternative that contributes the most to the goal of choosing the best car for the Jones family. The Accord Sedan is a close second, with a priority of 0.213. The other models have considerably less priority than those two. In descending order, they are CR-V SUV, Accord Hybrid, Element SUV, and Pilot SUV.
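The totals above can be reproduced with simple arithmetic: an alternative's global priority is the sum, over the covering criteria, of each criterion's weight multiplied by the alternative's local priority under that criterion. The Python sketch below only illustrates this synthesis step; the weights and local priorities in it are hypothetical placeholders, not the Jones family's actual figures.

# Minimal sketch of AHP synthesis: global priority = sum over criteria of
# (criterion weight x local priority). All numbers here are hypothetical.

criterion_weights = {"Cost": 0.50, "Safety": 0.30, "Style": 0.10, "Capacity": 0.10}

local_priorities = {
    "Odyssey": {"Cost": 0.20, "Safety": 0.25, "Style": 0.20, "Capacity": 0.30},
    "Accord":  {"Cost": 0.30, "Safety": 0.20, "Style": 0.25, "Capacity": 0.10},
    # ... the remaining alternatives would appear here as well
}

def global_priority(alternative):
    locals_for_alt = local_priorities[alternative]
    return sum(weight * locals_for_alt[criterion]
               for criterion, weight in criterion_weights.items())

for car in local_priorities:
    print(car, round(global_priority(car), 3))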
The Analytic Hierarchy Process has shown the Joneses that the Odyssey Minivan best satisfies all their criteria and judgments, followed closely by the Accord Sedan. The other alternatives fall significantly short of meeting their criteria. The family's next step is up to them. They might just go out and buy an Odyssey, or they might use the AHP or other means to refine their decision between the Odyssey and the Accord Sedan.
References
External links
R ahp package – The R open source ahp package provides an implementation of this example.
AHPy – AHPy provides a worked example of this problem in its README
AHP with Microsoft Excel – A worked example for choosing a smartphone using a Microsoft Excel workbook
Group decision-making
Phosphorus cycle
The phosphorus cycle is the biogeochemical cycle that involves the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. Unlike many other biogeochemical cycles, the atmosphere does not play a significant role in the movement of phosphorus, because phosphorus and phosphorus-based materials do not enter the gaseous phase readily; the main source of gaseous phosphorus, phosphine, is only produced in isolated and specific conditions. Therefore, the phosphorus cycle is primarily examined by studying the movement of orthophosphate (PO4)3-, the form of phosphorus most commonly seen in the environment, through terrestrial and aquatic ecosystems.
Living organisms require phosphorus, a vital component of DNA, RNA, ATP, etc., for their proper functioning. Phosphorus also enters in the composition of phospholipids present in cell membranes. Plants assimilate phosphorus as phosphate and incorporate it into organic compounds. In animals, inorganic phosphorus in the form of apatite is also a key component of bones, teeth (tooth enamel), etc. On the land, phosphorus gradually becomes less available to plants over thousands of years, since it is slowly lost in runoff. Low concentration of phosphorus in soils reduces plant growth and slows soil microbial growth, as shown in studies of soil microbial biomass. Soil microorganisms act as both sinks and sources of available phosphorus in the biogeochemical cycle. Short-term transformation of phosphorus is chemical, biological, or microbiological. In the long-term global cycle, however, the major transfer is driven by tectonic movement over geologic time and weathering of phosphate containing rock such as apatite. Furthermore, phosphorus tends to be a limiting nutrient in aquatic ecosystems. However, as phosphorus enters aquatic ecosystems, it has the possibility to lead to over-production in the form of eutrophication, which can happen in both freshwater and saltwater environments.
Human activities have caused major changes to the global phosphorus cycle primarily through the mining and subsequent transformation of phosphorus minerals for use in fertilizer and industrial products. Some phosphorus is also lost as effluent through the mining and industrial processes as well.
Phosphorus in the environment
Ecological function
Phosphorus is an essential nutrient for plants and animals. Phosphorus is a limiting nutrient for aquatic organisms. Phosphorus forms parts of important life-sustaining molecules that are very common in the biosphere. Phosphorus does enter the atmosphere in very small amounts when dust containing phosphorus is dissolved in rainwater and sea spray, but the element mainly remains on land and in rock and soil minerals. Phosphates which are found in fertilizers, sewage and detergents, can cause pollution in lakes and streams. Over-enrichment of phosphate in both fresh and inshore marine waters can lead to massive algae blooms. In fresh water, the death and decay of these blooms leads to eutrophication. An example of this is the Canadian Experimental Lakes Area.
Freshwater algal blooms are generally caused by excess phosphorus, while those that take place in saltwater tend to occur when excess nitrogen is added. However, it is possible for eutrophication to be due to a spike in phosphorus content in both freshwater and saltwater environments.
Phosphorus occurs most abundantly in nature as part of the orthophosphate ion (PO4)3−, consisting of a P atom and 4 oxygen atoms. On land most phosphorus is found in rocks and minerals. Phosphorus-rich deposits have generally formed in the ocean or from guano, and over time, geologic processes bring ocean sediments to land. Weathering of rocks and minerals releases phosphorus in a soluble form, where it is taken up by plants and transformed into organic compounds. The plants may then be consumed by herbivores and the phosphorus is either incorporated into their tissues or excreted. After death, the animal or plant decays, and phosphorus is returned to the soil where a large part of the phosphorus is transformed into insoluble compounds. Runoff may carry a small part of the phosphorus back to the ocean. Generally with time (thousands of years), soils become deficient in phosphorus, leading to ecosystem retrogression.
Major pools in aquatic systems
There are four major pools of phosphorus in freshwater ecosystems: dissolved inorganic phosphorus (DIP), dissolved organic phosphorus (DOP), particulate inorganic phosphorus (PIP) and particulate organic phosphorus (POP). Dissolved material is defined as substances that pass through a 0.45 μm filter. DIP consists mainly of orthophosphate (PO43-) and polyphosphate, while DOP consists of DNA and phosphoproteins. Particulate matter comprises the substances that are caught on a 0.45 μm filter and do not pass through. POP consists of both living and dead organisms, while PIP mainly consists of hydroxyapatite, Ca5(PO4)3OH. Inorganic phosphorus comes in the form of readily soluble orthophosphate. Particulate organic phosphorus occurs in suspension in living and dead protoplasm and is insoluble. Dissolved organic phosphorus is derived from the particulate organic phosphorus by excretion and decomposition and is soluble.
Biological function
The primary biological importance of phosphates is as a component of nucleotides, which serve as energy storage within cells (ATP) or, when linked together, form the nucleic acids DNA and RNA. The double helix of DNA is only possible because of the phosphate ester bridge that binds the helix. Besides making biomolecules, phosphorus is also found in bone and the enamel of mammalian teeth, whose strength is derived from calcium phosphate in the form of hydroxyapatite. It is also found in the exoskeleton of insects, and phospholipids (found in all biological membranes). It also functions as a buffering agent in maintaining acid-base homeostasis in the human body.
Phosphorus cycling
Phosphates move quickly through plants and animals; however, the processes that move them through the soil or ocean are very slow, making the phosphorus cycle overall one of the slowest biogeochemical cycles.
The global phosphorus cycle includes four major processes (a simplified numerical sketch of this chain follows the list):
(i) tectonic uplift and exposure of phosphorus-bearing rocks such as apatite to surface weathering;
(ii) physical erosion, and chemical and biological weathering of phosphorus-bearing rocks to provide dissolved and particulate phosphorus to soils, lakes and rivers;
(iii) riverine and subsurface transportation of phosphorus to various lakes and run-off to the ocean;
(iv) sedimentation of particulate phosphorus (e.g., phosphorus associated with organic matter and oxide/carbonate minerals) and eventually burial in marine sediments (this process can also occur in lakes and rivers).
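As a purely illustrative aid, this chain can be caricatured as a box model in which phosphorus moves between reservoirs at rates proportional to the reservoir contents. The Python sketch below is not a calibrated model of the real cycle: the reservoir sizes and rate constants are arbitrary, made-up values whose only purpose is to show how the four processes connect the reservoirs while conserving mass.

# Toy box model of the chain above (rock -> soil -> rivers/lakes -> ocean ->
# sediment -> rock via uplift). All reservoir sizes and rate constants are
# arbitrary illustrative values, not measured data.

reservoirs = {"rock": 1000.0, "soil": 100.0, "water": 10.0,
              "ocean": 500.0, "sediment": 2000.0}

transfers = [                      # (source, destination, fraction per step)
    ("rock", "soil", 0.0005),      # (i) uplift and weathering
    ("soil", "water", 0.01),       # (ii) erosion and weathering to lakes and rivers
    ("water", "ocean", 0.2),       # (iii) riverine transport and run-off
    ("ocean", "sediment", 0.001),  # (iv) sedimentation and burial
    ("sediment", "rock", 0.0001),  # long-term tectonic recycling
]

for step in range(1000):
    fluxes = {(src, dst): reservoirs[src] * k for src, dst, k in transfers}
    for (src, dst), flux in fluxes.items():
        reservoirs[src] -= flux
        reservoirs[dst] += flux

print({name: round(mass, 1) for name, mass in reservoirs.items()})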
In terrestrial systems, bioavailable P (‘reactive P’) mainly comes from weathering of phosphorus-containing rocks. The most abundant primary phosphorus-mineral in the crust is apatite, which can be dissolved by natural acids generated by soil microbes and fungi, or by other chemical weathering reactions and physical erosion. The dissolved phosphorus is bioavailable to terrestrial organisms and plants and is returned to the soil after their decay. Phosphorus retention by soil minerals (e.g., adsorption onto iron and aluminum oxyhydroxides in acidic soils and precipitation onto calcite in neutral-to-calcareous soils) is usually viewed as the most important process in controlling terrestrial P-bioavailability in the mineral soil. This process can lead to the low level of dissolved phosphorus concentrations in soil solution. Various physiological strategies are used by plants and microorganisms for obtaining phosphorus from this low level of phosphorus concentration.
Soil phosphorus is usually transported to rivers and lakes and can then either be buried in lake sediments or transported to the ocean via river runoff. Atmospheric phosphorus deposition is another important marine phosphorus source to the ocean. In surface seawater, dissolved inorganic phosphorus, mainly orthophosphate (PO43-), is assimilated by phytoplankton and transformed into organic phosphorus compounds. Phytoplankton cell lysis releases cellular dissolved inorganic and organic phosphorus to the surrounding environment. Some of the organic phosphorus compounds can be hydrolyzed by enzymes synthesized by bacteria and phytoplankton and subsequently assimilated. The vast majority of phosphorus is remineralized within the water column, and approximately 1% of associated phosphorus carried to the deep sea by the falling particles is removed from the ocean reservoir by burial in sediments. A series of diagenetic processes act to enrich sediment pore water phosphorus concentrations, resulting in an appreciable benthic return flux of phosphorus to overlying bottom waters. These processes include
(i) microbial respiration of organic matter in sediments,
(ii) microbial reduction and dissolution of iron and manganese (oxyhydr)oxides with subsequent release of associated phosphorus, which connects the phosphorus cycle to the iron cycle, and
(iii) abiotic reduction of iron (oxyhydr)oxides by hydrogen sulfide and liberation of iron-associated phosphorus.
Additionally,
(iv) phosphate associated with calcium carbonate and
(v) transformation of iron oxide-bound phosphorus to vivianite play critical roles in phosphorus burial in marine sediments.
These processes are similar to phosphorus cycling in lakes and rivers.
Although orthophosphate (PO43-), the dominant inorganic P species in nature, is in the +5 oxidation state (P5+), certain microorganisms can use phosphonate and phosphite (P3+ oxidation state) as a P source by oxidizing them to orthophosphate. Recently, rapid production and release of reduced phosphorus compounds have provided new clues about the role of reduced P as a missing link in oceanic phosphorus cycling.
Phosphatic minerals
The availability of phosphorus in an ecosystem is restricted by its rate of release during weathering. The release of phosphorus from apatite dissolution is a key control on ecosystem productivity. The primary mineral with significant phosphorus content, apatite [Ca5(PO4)3OH], undergoes carbonation.
Little of this released phosphorus is taken up by biota, as it mainly reacts with other soil minerals. This leads to phosphorus becoming unavailable to organisms in the later stage of weathering and soil development as it will precipitate into rocks. Available phosphorus is found in a biogeochemical cycle in the upper soil profile, while phosphorus found at lower depths is primarily involved in geochemical reactions with secondary minerals. Plant growth depends on the rapid root uptake of phosphorus released from dead organic matter in the biochemical cycle. Phosphorus is limited in supply for plant growth.
Low-molecular-weight (LMW) organic acids are found in soils. They originate from the activities of various microorganisms in soils or may be exuded from the roots of living plants. Several of those organic acids are capable of forming stable organo-metal complexes with various metal ions found in soil solutions. As a result, these processes may lead to the release of inorganic phosphorus associated with aluminum, iron, and calcium in soil minerals. The production and release of oxalic acid by mycorrhizal fungi explain their importance in maintaining and supplying phosphorus to plants.
The availability of organic phosphorus to support microbial, plant and animal growth depends on the rate of its degradation to generate free phosphate. There are various enzymes such as phosphatases, nucleases and phytase involved in the degradation. Some of the abiotic pathways studied in the environment are hydrolytic reactions and photolytic reactions. Enzymatic hydrolysis of organic phosphorus is an essential step in the biogeochemical phosphorus cycle, including the phosphorus nutrition of plants and microorganisms and the transfer of organic phosphorus from soil to bodies of water. Many organisms rely on soil-derived phosphorus for their phosphorus nutrition.
Eutrophication
Eutrophication occurs when waters are enriched by nutrients, leading to structural changes in the aquatic ecosystem such as algal blooms, deoxygenation, and reductions in fish species. It does occur naturally: as lakes age they become more productive due to increases in major limiting nutrients such as nitrogen and phosphorus. For example, phosphorus can enter lakes, where it will accumulate in the sediments and the biosphere. It can also be recycled from the sediments and the water system, allowing it to stay in the environment. Anthropogenic effects can also cause phosphorus to flow into aquatic ecosystems, as seen in drainage water and runoff from fertilized soils on agricultural land. Additionally, eroded soils, which can be caused by deforestation and urbanization, can lead to more phosphorus and nitrogen being added to these aquatic ecosystems. These all increase the amount of phosphorus that enters the cycle, which has led to excessive nutrient loading in freshwater systems, causing dramatic growth in algal populations. When these algae die, their putrefaction depletes the water of oxygen and can toxify the water. Both effects cause plant and animal death rates to increase as plants take in and animals drink the poisoned water.
Saltwater phosphorus eutrophication
Oceanic ecosystems gather phosphorus through many sources, but it is mainly derived from weathering of rocks containing phosphorus, which is then transported to the oceans in a dissolved form by river runoff. Due to a dramatic rise in mining for phosphorus, it is estimated that humans have increased the net storage of phosphorus in soil and ocean systems by 75%. This increase in phosphorus has led to more eutrophication in ocean waters, as phytoplankton blooms have caused a drastic shift toward anoxic conditions seen in both the Gulf of Mexico and the Baltic Sea. Some research suggests that when anoxic conditions arise from eutrophication due to excess phosphorus, this creates a positive feedback loop that releases more phosphorus from oceanic reserves, exacerbating the issue. This could possibly create a self-sustaining cycle of oceanic anoxia in which the constant recovery of phosphorus keeps stabilizing the eutrophic growth. Attempts to mitigate this problem using biological approaches are being investigated. One such approach involves using phosphorus-accumulating organisms such as Candidatus Accumulibacter phosphatis, which are capable of effectively storing phosphorus in the form of phosphate in marine ecosystems. Essentially, this would alter how the phosphorus cycle exists currently in marine ecosystems. Currently, there has been a major influx of phosphorus due to increased agricultural use and other industrial applications; thus these organisms could theoretically store phosphorus and hold on to it until it could be recycled in terrestrial ecosystems, which would have lost this excess phosphorus due to runoff.
Wetland
Wetlands are frequently applied to solve the issue of eutrophication. Nitrate is transformed in wetlands to free nitrogen and discharged to the air. Phosphorus is adsorbed by wetland soils and taken up by the plants. Therefore, wetlands could help to reduce the concentrations of nitrogen and phosphorus and so mitigate eutrophication. However, wetland soils can only hold a limited amount of phosphorus. To remove phosphorus continually, new soil must accumulate within the wetland from remnant plant stems, leaves, root debris, and the undecomposable parts of dead algae, bacteria, fungi, and invertebrates.
Human influences
Nutrients are important to the growth and survival of living organisms, and hence are essential for the development and maintenance of healthy ecosystems. Humans have greatly influenced the phosphorus cycle by mining phosphate rock. For millennia, phosphorus was primarily brought into the environment through the weathering of phosphate-containing rocks, which would replenish the phosphorus normally lost to the environment through processes such as runoff, albeit on a very slow and gradual time-scale. Since the 1840s, when the technology to mine and extract phosphorus became more prevalent, approximately 110 teragrams of phosphorus has been added to the environment. This trend appears likely to continue: from 1900 to 2022, the amount of phosphorus mined globally increased 72-fold, with an expected annual increase of 4%. Most of this mining is done in order to produce fertilizers which can be used on a global scale. However, at the rate humans are mining, the geological system cannot restore what is lost quickly enough. Thus, researchers are examining ways to better recycle phosphorus in the environment, with one promising application including the use of microorganisms. Regardless, humans have had a profound impact on the phosphorus cycle, with wide-reaching implications for food security, eutrophication, and the overall availability of the nutrient.
Other human processes can have detrimental effects on the phosphorus cycle, such as the repeated application of liquid hog manure in excess to crops. The application of biosolids may also increase available phosphorus in soil. In poorly drained soils or in areas where snowmelt can cause periodic waterlogging, reducing conditions can be attained in 7–10 days. This causes a sharp increase in phosphorus concentration in solution and phosphorus can be leached. In addition, reduction of the soil causes a shift in phosphorus from resilient to more labile forms. This could eventually increase the potential for phosphorus loss. This is of particular concern for the environmentally sound management of such areas, where disposal of agricultural wastes has already become a problem. It is suggested that the water regime of soils that are to be used for organic waste disposal be taken into account in the preparation of waste management regulations.
See also
Peak phosphorus
Planetary boundaries
Oceanic carbon cycle
References
External links
Biogeochemical cycle
Soil biology
Soil chemistry
Phosphorus
Homophily
Homophily is a concept in sociology describing the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been discovered in a vast array of network studies, which establish that similarity is associated with connection. The categories on which homophily occurs include age, gender, class, and organizational role.
The opposite of homophily is heterophily or intermingling. Individuals in homophilic relationships share common characteristics (beliefs, values, education, etc.) that make communication and relationship formation easier. Homophily between mated pairs in animals has been extensively studied in the field of evolutionary biology, where it is known as assortative mating. Homophily between mated pairs is common within natural animal mating populations.
Homophily has a variety of consequences for social and economic outcomes.
Types and dimensions
Baseline vs. inbreeding
To test the relevance of homophily, researchers have distinguished between two types (a brief computational sketch of the distinction follows this list):
Baseline homophily: simply the amount of homophily that would be expected by chance given an existing uneven distribution of people with varying characteristics; and
Inbreeding homophily: the amount of homophily over and above this expected value, typically due to personal preferences and choices.
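The distinction can be made concrete with a small computation. The Python sketch below is illustrative only: for a made-up network it compares the observed share of same-group ties with the share expected by chance from the group proportions (baseline homophily) and reports the excess (inbreeding homophily).

# Illustrative sketch of baseline vs. inbreeding homophily on a made-up network.
from collections import Counter

groups = {"a": "X", "b": "X", "c": "X", "d": "Y", "e": "Y", "f": "Y"}
ties = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("a", "d"), ("e", "f")]

# Baseline: share of same-group pairs among all possible pairs of people.
counts = Counter(groups.values())
n = len(groups)
possible_pairs = n * (n - 1) / 2
same_group_pairs = sum(c * (c - 1) / 2 for c in counts.values())
baseline = same_group_pairs / possible_pairs

# Observed: share of existing ties that connect members of the same group.
observed = sum(groups[u] == groups[v] for u, v in ties) / len(ties)
inbreeding = observed - baseline

print(f"observed {observed:.2f}, baseline {baseline:.2f}, inbreeding {inbreeding:+.2f}")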
Status vs. value
In their original formulation of homophily, Paul Lazarsfeld and Robert K. Merton (1954) distinguished between status homophily and value homophily; individuals with similar social status characteristics were more likely to associate with each other than by chance:
Status homophily: includes both society-ascribed characteristics (e.g. race, ethnicity, sex, and age) and acquired characteristics (e.g., religion, occupation, behavior patterns, and education).
Value homophily: involves association with others who have similar values, attitudes, and beliefs, regardless of differences in status characteristics.
Dimensions
Race and ethnicity
Social networks in the United States today are strongly divided by race and ethnicity, which account for a large proportion of inbreeding homophily (though classification by these criteria can be problematic in sociology due to fuzzy boundaries and different definitions of race).
Smaller groups have lower diversity simply due to the number of members. This tends to give racial and ethnic minority groups a higher baseline homophily. Race and ethnicity also correlates with educational attainment and occupation, which further increase baseline homophily.
Sex and gender
In terms of sex and gender, baseline homophily in networks is relatively low compared with race and ethnicity, because men and women frequently live together and form large populations that are normally roughly equal in size. It is also common to find higher levels of gender homophily among school students. Most sex homophily is a result of inbreeding homophily.
Age
Most age homophily is of the baseline type. An interesting pattern of inbreeding age homophily for groups of different ages was found by Marsden (1988). It indicated a strong relationship between a person's age and the social distance to other people with regard to confiding in someone. For example, the larger the age gap, the smaller the chance that the person would be confided in by younger people to "discuss important matters."
Religion
Homophily based on religion is due to both baseline and inbreeding homophily. Those who belong to the same religion are more likely to exhibit acts of service and aid to one another, such as loaning money, giving therapeutic counseling, and other forms of help during moments of emergency. Parents have been shown to have higher levels of religious homophily than nonparents, which supports the notion that religious institutions are sought out for the benefit of children.
Education, occupation and social class
Family of birth accounts for considerable baseline homophily with respect to education, occupation, and social class. In terms of education, there is a divide among those who have a college education and those who do not. Another major distinction can be seen between those with white collar occupations and blue collar occupations.
Interests
Homophily occurs within groups of people that have similar interests as well. People enjoy interacting more with individuals who share similarities with them, so they tend to actively seek out these connections. Additionally, as more users begin to rely on the Internet to find like-minded communities for themselves, many examples of niches within social media sites have begun appearing to account for this need. This response has led to the popularity of sites like Reddit in the 2010s, advertising itself as a "home to thousands of communities... and authentic human interaction."
Social media
As social networks are largely divided by race, social-networking websites like Facebook also foster homophilic atmospheres. When a Facebook user 'likes' or interacts with an article or post of a certain ideology, Facebook continues to show that user posts of that similar ideology (which Facebook believes they will be drawn to). In a research article, McPherson, Smith-Lovin, and Cook (2003) write that homogeneous personal networks result in limited "social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience." This homophily can foster divides and echo chambers on social networking sites, where people of similar ideologies only interact with each other.
Causes and effects
Causes
Geography: Baseline homophily often arises when the people who are located nearby also have similar characteristics. People are more likely to have contact with those who are geographically closer than with those who are distant. Technologies such as the telephone, e-mail, and social networks have reduced but not eliminated this effect.
Family ties: These ties decay slowly, but familial ties, specifically those of domestic partners, fulfill many requisites that generate homophily. Family relationships are generally close and involve frequent contact even though family members may be separated by great geographic distances. Ideas that may get lost in other relational contexts will often instead lead to actions in this setting.
Organizations: School, work, and volunteer activities provide the great majority of non-family ties. Many friendships, confiding relations, and social support ties are formed within voluntary groups. The social homogeneity of most organizations creates a strong baseline homophily in networks that are formed there.
Isomorphic sources: The connections between people who occupy equivalent roles will induce homophily in the system of network ties. This is common in three domains: workplace (e.g., all heads of HR departments will tend to associate with other HR heads), family (e.g., mothers tend to associate with other mothers), and informal networks.
Cognitive processes: People who have demographic similarity tend to own shared knowledge, and therefore they have a greater ease of communication and share cultural tastes, which can also generate homophily.
Effects
According to one study, perception of interpersonal similarity improves coordination and increases the expected payoff of interactions, above and beyond the effect of merely "liking others." Another study claims that homophily produces tolerance and cooperation in social spaces. However, homophilic patterns can also restrict access to information or inclusion for minorities.
Nowadays, the restrictive patterns of homophily can be widely seen within social media. This selectiveness within social media networks can be traced back to the origins of Facebook and the transition of users from MySpace to Facebook in the early 2000s. A 2011 study of this shift in a network's user base found that this perception of homophily impacted many individuals' preference for one site over another. Most users chose to be more active on the site their friends were on. However, along with the complexities of belongingness, people of similar ages, economic class, and prospective futures (higher education and/or career plans) shared similar reasons for favoring one social media platform. The different features of homophily affected their outlook on each respective site.
The effects of homophily on the diffusion of information and behaviors are also complex. Some studies have claimed that homophily facilitates access to information, the diffusion of innovations and behaviors, and the formation of social norms. Other studies, however, highlight mechanisms through which homophily can maintain disagreement, exacerbate polarization of opinions, lead to self-segregation between groups, and slow the formation of an overall consensus.
As online users have a degree of power to form and dictate the environment, the effects of homophily continue to persist. On Twitter, terms such as "stan Twitter", "Black Twitter", or "local Twitter" have also been created and popularized by users to separate themselves based on specific dimensions.
Homophily is a cause of homogamy—marriage between people with similar characteristics. Homophily is a fertility factor; an increased fertility is seen in people with a tendency to seek acquaintance among those with common characteristics. Governmental family policies have a decreased influence on fertility rates in such populations.
See also
Groupthink
Echo chamber (media)
References
Interpersonal relationships
Sociological terminology
Nomothetic
Nomothetic literally means "proposition of the law" (Greek derivation) and is used in philosophy, psychology, and law with differing meanings.
Etymology
In the general humanities usage, nomothetic may be used in the sense of "able to lay down the law", "having the capacity to posit lasting sense" (from nomothetēs νομοθέτης "lawgiver", from νόμος "law" and the Proto-Indo-European etymon nem-, meaning to "take, give, account, apportion"), e.g., 'the nomothetic capability of the early mythmakers' or 'the nomothetic skill of Adam, given the power to name things.'
In psychology
In psychology, nomothetic refers to research about general principles or generalizations across a population of individuals. For example, the Big Five model of personality and Piaget's developmental stages are nomothetic models of personality traits and cognitive development respectively. In contrast, idiographic refers to research about the unique and contingent aspects of individuals, as in psychological case studies.
In psychological testing, nomothetic measures are contrasted to ipsative or idiothetic measures, where nomothetic measures are measures that are observed on a relatively large sample and have a more general outlook.
In other fields
In sociology, nomothetic explanation presents a generalized understanding of a given case, and is contrasted with idiographic explanation, which presents a full description of a given case. Nomothetic approaches are most appropriate to the deductive approach to social research inasmuch as they include the more highly structured research methodologies which can be replicated and controlled, and which focus on generating quantitative data with a view to explaining causal relationships.
In anthropology, nomothetic refers to the use of generalization rather than specific properties in the context of a group as an entity.
In history, nomothetic refers to the philosophical shift in emphasis away from traditional presentation of historical text restricted to wars, laws, dates, and such, to a broader appreciation and deeper understanding.
See also
Nomothetic and idiographic
Nomological
References
Sociological terminology
Anthropogenic cloud
A homogenitus, anthropogenic or artificial cloud is a cloud induced by human activity. Although most clouds covering the sky have a purely natural origin, since the beginning of the Industrial Revolution the burning of fossil fuels and the water vapor and other gases emitted by nuclear, thermal and geothermal power plants have significantly altered local weather conditions. These new atmospheric conditions can thus enhance cloud formation.
Various methods have been proposed for creating and utilizing this weather phenomenon. Experiments have also been carried out for various studies. For example, Russian scientists have been studying artificial clouds for more than 50 years. But by far the greatest number of anthropogenic clouds are airplane contrails (condensation trails) and rocket trails.
Anthropogenesis
Three conditions are needed to form an anthropogenic cloud:
The air must be near saturation with respect to water vapor,
The air must be cooled to the dew point temperature with respect to water (or ice) so that part of the water vapor condenses (or sublimates),
The air must contain condensation nuclei, small solid particles on which condensation/sublimation starts.
The current use of fossil fuels enhances all three of these conditions. First, fossil fuel combustion generates water vapor. Additionally, this combustion also generates small solid particles that can act as condensation nuclei. Finally, all the combustion processes emit energy that enhances upward vertical movements.
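The second of the three conditions above can be made concrete with a standard approximation. The short Python sketch below estimates the dew point from temperature and relative humidity using the Magnus–Tetens formula; the coefficients are the commonly quoted ones and the example values are illustrative, so this is a rough teaching aid rather than the method used in operational cloud forecasting.

import math

def dew_point_celsius(temp_c, rel_humidity_pct, a=17.27, b=237.7):
    # Magnus-Tetens approximation of the dew point in degrees Celsius,
    # reasonable for ordinary tropospheric temperatures and RH of 1-100 %.
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Illustrative example: moist plume air at 20 deg C and 90 % relative humidity
# needs to cool only a couple of degrees before condensation can begin.
print(round(dew_point_celsius(20.0, 90.0), 1))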
Despite all the processes involving the combustion of fossil fuels, only some human activities, such as thermal power plants, commercial aircraft, or chemical industries, modify the atmospheric conditions enough to produce clouds that merit the qualifier homogenitus due to their anthropic origin.
Cloud classification
The International Cloud Atlas published by the World Meteorological Organization compiles the proposal made by Luke Howard at the beginning of the 19th century, and all the subsequent modifications. Each cloud has a name in Latin, and clouds are classified according to their genus, species, and variety:
There are 10 genera (plural of genus) (e.g. cumulus, stratus, etc...).
There is a number of species for these genera that describe the form, the dimensions, internal structure, and type of vertical movement (e.g. stratus nebulosus for stratus covering the whole sky). Species are mutually exclusive.
Species can further be divided into varieties that describe their transparence or their arrangement (e.g. stratus nebulosus opacus for thick stratus covering the whole sky).
Further terms can be added to describe the origin of the cloud. Homogenitus is a suffix that signifies that a cloud originates from human activity. For instance, Cumulus originated by human activity is called Cumulus homogenitus and abbreviated as CUh. If a homogenitus cloud of one genus changes to another genus type, it is termed a homomutatus cloud.
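The naming convention can be read as a simple composition rule: genus, then optional species and variety, then an origin suffix such as homogenitus. The Python sketch below hard-codes only the handful of abbreviations used in this article and is an informal illustration, not an official WMO coding scheme.

# Illustrative composition of cloud names and abbreviations as used in this
# article (e.g. Cumulus homogenitus -> CUh); not an official WMO scheme.

GENUS_ABBREV = {"Cumulus": "CU", "Cirrus": "Ci", "Cirrocumulus": "Cc",
                "Cirrostratus": "Cs", "Stratus": "St", "Stratocumulus": "Sc"}

def cloud_name(genus, species=None, variety=None, human_origin=False):
    parts = [genus]
    if species:
        parts.append(species)
    if variety:
        parts.append(variety)
    if human_origin:
        parts.append("homogenitus")
    abbrev = GENUS_ABBREV.get(genus, "?") + ("h" if human_origin else "")
    return " ".join(parts), abbrev

print(cloud_name("Cumulus", human_origin=True))      # ('Cumulus homogenitus', 'CUh')
print(cloud_name("Stratus", "nebulosus", "opacus"))  # ('Stratus nebulosus opacus', 'St')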
Generating process
The international cloud classification divides the different genera into three main groups of clouds according to their altitude:
High clouds
Middle clouds
Low clouds
Homogenitus clouds can be generated by different sources in the high and low levels.
High homogenitus
Despite the fact that the three genera of high clouds, Cirrus, Cirrocumulus and Cirrostratus, form at the top of the troposphere, far from the earth surface, they may have an anthropogenic origin. In this case, the process that causes their formation is almost always the same: commercial and military aircraft flight. Exhaust products from the combustion of the kerosene (or sometimes gasoline) expelled by engines provide water vapor to this region of the troposphere.
In addition, the strong contrast between the cold air of the high troposphere layers and the warm and moist air ejected by aircraft engines causes rapid deposition of water vapor, forming small ice crystals. This process is also enhanced by the presence of abundant nuclei of condensation produced as a result of combustion. These clouds are commonly known as condensation trails (contrails), and are initially linear cirrus clouds that could be called Cirrus homogenitus (Cih). The large temperature difference between the exhausted air and the ambient air generates small-scale convection processes, which favor the evolution of the condensation trails into Cirrocumulus homogenitus (Cch).
Depending on the atmospheric conditions at the upper part of the troposphere, where the plane is flying, these high clouds rapidly disappear or persist. When the air is dry and stable, the water rapidly evaporates inside the contrails, and they can only be observed up to several hundred meters from the plane. On the other hand, if humidity is high enough, the air is supersaturated with respect to ice, and the homogenitus clouds widen and can persist for hours. In the latter case, depending on the wind conditions, Cch may evolve into Cirrus homogenitus (Cih) or Cirrostratus homogenitus (Csh). The existence and persistence of these three types of high anthropogenic clouds can thus provide an indication of the moisture and stability conditions at flight level. In some cases, when there is a high density of air traffic, these high homogenitus clouds may inhibit the formation of natural high clouds, because the contrails capture most of the water vapor.
Low homogenitus
The lowest part of the atmosphere is the region most influenced by human activity, through the emission of water vapor, warm air, and condensation nuclei. When the atmosphere is stable, the additional contribution of warm and moist air from emissions enhances fog formation or produces layers of Stratus homogenitus (Sth). If the air is not stable, this warm and moist air emitted by human activities creates a convective movement that can reach the lifted condensation level, producing an anthropogenic cumulus cloud, or Cumulus homogenitus (Cuh). This type of cloud may also be observed over the polluted air covering some cities and industrial areas under high-pressure conditions.
Stratocumulus homogenitus (Sch) are anthropogenic clouds that may be formed by the evolution of Sth in a slightly unstable atmosphere or of Cuh in a stable atmosphere.
Finally, the large, towering Cumulonimbus (Cb) presents such great vertical development that only in some particular cases can it be created by anthropic causes. For instance, large fires may cause the formation of flammagenitus clouds, which can evolve into Cumulonimbus flammagenitus (CbFg, or CbFgh if anthropogenic); very large explosions, such as nuclear explosions, produce mushroom clouds, a distinctive subtype of cumulonimbus flammagenitus.
Experiments
Anthropogenic clouds can be generated in the laboratory or in situ to study their properties or to use them for other purposes. A cloud chamber is a sealed environment containing a supersaturated vapor of water or alcohol. When a charged particle (for example, an alpha or beta particle) interacts with the mixture, the fluid is ionized. The resulting ions act as condensation nuclei, around which a mist will form (because the mixture is on the point of condensation). Cloud seeding, a form of weather modification, is the attempt to change the amount or type of precipitation that falls from clouds by dispersing substances into the air that serve as cloud condensation or ice nuclei, which alter the microphysical processes within the cloud. The usual intent is to increase precipitation (rain or snow), but hail and fog suppression are also widely practiced at airports.
Numerous experiments have been done with those two methods in the troposphere. At higher altitudes, NASA studied inducing noctilucent clouds in 1960 and 2009. In 1984 satellites from three nations took part in an artificial cloud experiment as part of a study of solar winds and comets. In 1969, a European satellite released and ignited barium and copper oxide at an altitude of 43,000 miles in space to create a 2,000 mile mauve and green plume visible for 22 minutes. It was part of a study of magnetic and electric fields.
Plans to create artificial clouds over soccer tournaments in the Middle East were suggested in 2011 as a way to help shade and cool down Qatar's 2022 FIFA World Cup.
Influence on climate
There are many studies dealing with the importance and effects of high anthropic clouds (Penner, 1999; Minnis et al., 1999, 2003–2004; Marquart et al., 2002–2003; Stuber and Forster, 2006, 2007), but not about anthropic clouds in general. For the particular case of Cih due to contrails, the IPCC estimates a positive radiative forcing of around 0.01 Wm−2.
When annotating weather data, using the suffix that indicates the cloud's origin allows these clouds to be differentiated from those with a natural origin. Once this notation is established, after several years of observations the influence of homogenitus clouds on Earth's climate can be analyzed more clearly.
See also
Contrail
Chemtrail conspiracy theory
Environmental impact of aviation
Global dimming
References
Bibliography
Howard, L. 1804: On the modification of clouds and the principles of their production, suspension and destruction: being the substance of an essay read before the Askesian Society in session 1802–03. J. Taylor. London.
IPCC 2007 AR4 WGI WGIII.
Marquart, S, and B. Mayer, 2002: Towards a reliable GCM estimation on contrail radiative forcing. Geophys. Res. Lett., 29, 1179, doi:10.1029/2001GL014075.
Marquart S., Ponater M., Mager F., and Sausen R., 2003: Future Development of contrail Cover, Optical Depth, and Radiative Forcing: Impacts of Increasing Air Traffic and Climate Change. Journal of climatology, 16, 2890–2904
Mazon J, Costa M, Pino D, Lorente J, 2012: Clouds caused by human activities. Weather, 67, 11, 302–306.
Meteorological glossary of American meteorological Society: http://glossary.ametsoc.org/?p=1&query=pyrocumulus&submit=Search
Minnis P., Kirk J. and Nordeen L., Weaver S., 2003. Contrail Frequency over the United States from Surface Observations. American Meteorology Society, 16, 3447–3462
Minnis, P., J. Ayers, R. Palikonda, and D. Phan, 2004: Contrails, cirrus trends, and climate. J. Climate, 14, 555–561.
Norris, J. R., 1999: On trends and possible artifacts in global ocean cloud cover between 1952 and 1995. J. Climate, 12, 1864–1870.
Penner, J., D. Lister, D. Griggs, D. Dokken, and M. McFarland, 1999: Special Report on Aviation and the Global Atmosphere. Cambridge University Press, 373 pp.
Stuber, N., and P. Forster, 2007: The impact of diurnal variations of air traffic on contrail radiative forcing. Atmos. Chem. Phys., 7, 3153–3162.
Stuber, N., and P. Forster, G. Rädel, and K. Shine, 2006: The importance of the diurnal and annual cycle of air traffic for contrail radiative forcing. Nature, 441, 864–867.
World Meteorological Organization (1975). International Cloud Atlas: Manual on the observation of clouds and other meteors. WMO-No. 407. I (text). Geneva: World Meteorological Organization.
World Meteorological Organization (1987). International Cloud Atlas: Manual on the observation of clouds and other meteors. WMO-No. 407. II (plates). Geneva: World Meteorological Organization. pp. 196.
Cloud types
Weather modification
Bioinspiration
Bioinspiration refers to the human development of novel materials, devices, structures, and behaviors inspired by solutions found in biological organisms, where they have evolved and been refined over millions of years. The goal is to improve modeling and simulation of the biological system to attain a better understanding of nature's critical structural features, such as a wing, for use in future bioinspired designs. Bioinspiration differs from biomimicry in that the latter aims to precisely replicate the designs of biological materials. Bioinspired research is a return to the classical origins of science: it is a field based on observing the remarkable functions that characterize living organisms and trying to abstract and imitate those functions.
History
Ideas in science and technology often arise from studying nature. In the 16th and 17th centuries, G. Galilei, J. Kepler and I. Newton studied the motion of the sun and the planets and developed the first empirical equation to describe gravity. A few years later, M. Faraday and J. C. Maxwell derived the fundamentals of electromagnetism by examining interactions between electrical currents and magnets. The study of heat transfer and mechanical work led to the understanding of thermodynamics, while quantum mechanics originated from the spectroscopic study of light. Current objects of attention have originated in chemistry, but the most abundant of them are found in biology, e.g. the study of genetics, the characteristics of cells and the development of higher animals and disease.
The current field of research
Bioinspiration is a solidly established strategy in the field of chemistry, but it is not a mainstream approach. In particular, this research is still developing its scientific and technological foundations at both academic and industrial levels. In recent years, it has also been considered for developing composites for aerospace and military applications.
This field dates back to the 1980s, but as of the 2010s many natural phenomena had still not been studied.
Typical characteristics of Bioinspiration
Function
Bio-inspired research is a form of study that takes inspiration from the natural world. Unlike traditional chemistry research, it does not delve into the microscopic details of molecules. Instead, it focuses on understanding the functions and behaviors of living organisms. By observing nature's solutions, researchers can find innovative ideas for technology and problem-solving.
A limitless source of ideas
There are various kinds of organisms and many different strategies that have proved successful in biology at solving some functional problem. Some kinds of high-level bio functions may seem simple, but they are supported by many layers of underlying structures, processes, molecules and their elaborate interaction. There is no chance to run out of phenomena for bio-inspired research.
Simplicity
Often, bio-inspired research about something can be much easier than precisely replicating the source of inspiration. For example, researchers do not have to know how a bird flies to make an airplane.
Transcultural field
Bioinspiration returns to the observation of nature as a source of inspiration for problem-solving, making it part of a grand tradition. The simplicity of many solutions that emerge from a bio-inspired strategy, combined with the fact that different geographical and cultural regions have different types of contact with animals, fish, plants, birds and even microorganisms, means that different regions will have intrinsic advantages in areas in which their natural landscape is rich. So bio-inspired research is a trans-cultural field.
Technical applications
There are many technical applications available nowadays that are bioinspired. However, this term should not be confused with biomimicry. For example, an airplane in general is inspired by birds. The wing tips of an airplane are biomimetic because their original function of minimizing turbulence and therefore needing less energy to fly, are not changed or improved compared to nature's original. Nano 3D printing methods are also one of the novel methods for bioinspiration. Plants and animals have particular properties which are often related to their composition of nano - and micro- surface structures. For example, research has been conducted to mimic the superhydrophobicity of Salvinia molesta leaves, the adhesiveness of gecko's toes on slippery surfaces, and moth antennas which inspire new approaches to detect chemical leaks, drugs and explosives.
References
<https://www.researchgate.net/publication/330246880_Biomimicry_Exploring_Research_Challenges_Gaps_and_Tools_Proceedings_of_ICoRD_2019_Volume_1/>
See also
Bio-inspired computing
Bio-inspired engineering
Bio-inspired photonics
Bio-inspired robotics
Paleo-inspiration
Agricultural science
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences that are used in the practice and understanding of agriculture. Professionals of agricultural science are called agricultural scientists or agriculturists.
History
In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer.
In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018.
In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures.
Prominent agricultural scientists
Wilbur Olin Atwater
Robert Bakewell
Norman Borlaug
Luther Burbank
George Washington Carver
Carl Henry Clerk
George C. Clerk
René Dumont
Sir Albert Howard
Kailas Nath Kaul
Thomas Lecky
Justus von Liebig
Jay Laurence Lush
Gregor Mendel
Louis Pasteur
M. S. Swaminathan
Jethro Tull
Artturi Ilmari Virtanen
Sewall Wright
Fields or related disciplines
Scope
Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts:
Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research.
Agronomy is research and development related to studying and improving plant-based crops.
Soil forming factors and soil degradation
Agricultural sciences include research and development on:
Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques)
Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems.
Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products)
Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation)
Theoretical production ecology, relating to crop production modeling
Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, an integration that may make them more sustainable than some modern agricultural systems.
Food production and demand on a global basis, with special attention paid to the major producers, such as China, India, Brazil, the US and the EU.
Various sciences relating to agricultural resources and the environment (e.g. soil science, agroclimatology); biology of agricultural crops and animals (e.g. crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering.
See also
Agricultural Research Council
Agricultural sciences basic topics
Agriculture ministry
Agroecology
American Society of Agronomy
Genomics of domestication
History of agricultural science
Institute of Food and Agricultural Sciences
International Assessment of Agricultural Science and Technology for Development
International Food Policy Research Institute, IFPRI
List of agriculture topics
National FFA Organization
Research Institute of Crop Production (RICP) (in the Czech Republic)
University of Agricultural Sciences
References
Further reading
Agricultural Research, Livelihoods, and Poverty: Studies of Economic and Social Impacts in Six Countries Edited by Michelle Adato and Ruth Meinzen-Dick (2007), Johns Hopkins University Press Food Policy Report
Claude Bourguignon, Regenerating the Soil: From Agronomy to Agrology, Other India Press, 2005
Pimentel David, Pimentel Marcia, Computer les kilocalories, Cérès, n. 59, sept-oct. 1977
Russell E. Walter, Soil conditions and plant growth, Longman group, London, New York 1973
Saltini Antonio, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, , , ,
Vavilov Nicolai I. (Starr Chester K. editor), The Origin, Variation, Immunity and Breeding of Cultivated Plants. Selected Writings, in Chronica botanica, 13: 1–6, Waltham, Mass., 1949–50
Vavilov Nicolai I., World Resources of Cereals, Leguminous Seed Crops and Flax, Academy of Sciences of Urss, National Science Foundation, Washington, Israel Program for Scientific Translations, Jerusalem 1960
Winogradsky Serge, Microbiologie du sol. Problèmes et methodes. Cinquante ans de recherches, Masson & c.ie, Paris 1949
External links
Consultative Group on International Agricultural Research (CGIAR)
Agricultural Research Service
Indian Council of Agricultural Research
International Institute of Tropical Agriculture
International Livestock Research Institute
The National Agricultural Library (NAL) – the most comprehensive agricultural library in the world
Crop Science Society of America
American Society of Agronomy
Soil Science Society of America
Agricultural Science Researchers, Jobs and Discussions
Information System for Agriculture and Food Research
NMSU Department of Entomology Plant Pathology and Weed Science
Rural development
Rural development is the process of improving the quality of life and economic well-being of people living in rural areas, often relatively isolated and sparsely populated areas. Rural regions have often experienced poverty greater than that of urban or suburban economic regions, due to lack of access to economic activities and lack of investment in key infrastructure such as education.
Rural development has traditionally centered on the exploitation of land-intensive natural resources such as agriculture and forestry. However, changes in global production networks and increased urbanization have changed the character of rural areas. Increasingly rural tourism, niche manufacturers, and recreation have replaced resource extraction and agriculture as dominant economic drivers. The need for rural communities to approach development from a wider perspective has created more focus on a broad range of development goals rather than merely creating incentive for agricultural or resource-based businesses.
Education, entrepreneurship, physical infrastructure, and social infrastructure all play an important role in developing rural regions. Rural development is also characterized by its emphasis on locally produced economic development strategies. In contrast to urban regions, which have many similarities, rural areas are highly distinctive from one another. For this reason there are a large variety of rural development approaches used globally.
Rural poverty
Approaches to development
Rural development actions are intended to further the social and economic development of rural communities.
Rural development programs were historically top-down approaches from local or regional authorities, regional development agencies, NGOs, national governments or international development organizations. However, a critical 'organization gap' identified during the late 1960s, reflecting the disjunction between national organizations and rural communities, led to a greater focus on community participation in rural development agendas. Oftentimes this was achieved through political decentralization policies in developing countries, particularly popular among African countries, or policies that shift the power of socio-politico-economic decision-making and the election of representatives and leadership from centralized governments to local governments. As a result, local populations can also bring about endogenous initiatives for development. The term rural development is not limited to issues of developing countries. In fact many developed countries have very active rural development programs.
Rural development aims to find ways to improve rural lives with the participation of rural people themselves, so as to meet the needs of rural communities. Outsiders may not understand the setting, culture, language and other conditions prevalent in a local area, so rural people themselves have to participate in their own sustainable rural development. In developing countries such as Nepal, Pakistan, India and Bangladesh, integrated development approaches are being followed. In this context, many approaches and ideas have been developed and implemented, for instance bottom-up approaches, Participatory Rural Appraisal (PRA), Rapid Rural Appraisal (RRA), Working With People (WWP), etc. The New Rural Reconstruction Movement in China has been actively promoting rural development through its ecological farming projects.
The role of NGOs/non-profits in developing countries
Because decentralization policies made development problems the responsibility of local governments, they also opened the door for non-governmental organizations (NGOs), nonprofits, and other foreign actors to become more involved in addressing these issues. For example, the elimination of statist approaches to development caused an exponential increase in the number of NGOs active in Africa and led them to take on increasingly important roles. Consequently, nonprofits and NGOs are greatly involved in the provisioning of needs in developing countries and play an increasingly large role in supporting rural development.
These organizations are often criticized for taking over responsibilities that are traditionally carried out by the state, causing governments to become ineffective in handling these responsibilities over time. Within Africa, NGOs carry out the majority of sustainable building and construction through donor-funded, low-income housing projects. Furthermore, they are often faulted for being easily controlled by donor money and oriented to serve the needs of local elites above the rest of the population. As a result of this critique, many NGOs have started to include strategies in their projects that promote community participation.
Many scholars argue that NGOs are an insufficient solution to a lack of development leadership as a result of decentralization policies. Human rights expert Susan Dicklitch points to the historical context of colonialism, organization-specific limitations, and regime restraints as hindrances to the promises of NGOs. She notes that “NGOs are increasingly relegated to service provision and gap-filling activities as by the retreating state, but those supportive functions are not matched with increased political efficacy”.
International examples
Rural development in Uganda
In Uganda specifically, several mid-century centralized administrations, particularly the regimes of Idi Amin (1971–1979) and Milton Obote (1981–1986), described as brutal and ineffective, led to a sharp drop in responsiveness to citizens' needs between 1966 and 1986. Under these administrations, several constraints were placed on local governments that prevented effective development initiatives: every employee in local government had to be appointed by the president, all local budgets and bylaws had to be approved by the Minister of Local Government, and this Minister could dissolve any local government council.
Because of the shortcomings of dictatorial government in promoting citizen participation in local development efforts, a decentralization campaign was officially launched in Uganda in 1992, culminating legislatively in the passing of the Local Governments Act in 1997. This act transferred power to local governments in an attempt to encourage citizen participation and further rural development. Under the decentralization structure, local governments receive the majority of their funds as block grants from the national government, mostly as conditional grants but with some unconditional and equalization grants administered as well. Local governments were also given the power to collect taxes from their constituents; however, this usually accounts for less than 10 percent of a local government's budget.
Debates in decentralization efforts in Uganda
Some scholars express concern that decentralization efforts in Uganda may not actually be leading to an increase in participation and development. For example, despite increases over the years in local councils and civil society organizations (CSOs) in rural Uganda, efforts are consistently undermined by a weak socio-economic structure, reflected in high rates of illiteracy, poor agricultural techniques, and limited market access and transportation systems. These shortcomings are often a result of taxes and payments imposed by local authorities and administration agents that inhibit farmers' access to larger markets. Furthermore, the overall financial strength of local governments is considerably weaker than that of the national government, which adversely affects their responsiveness to the needs of their citizens and their success in increasing participation in community development initiatives. Finally, civil society organizations are often ineffective in practice at mobilizing for the community's interests. Dr. Umar Kakumba, a scholar at Makerere University in Uganda, notes of CSOs:

The CSOs’ inability to effectively mobilize for and represent the local community’s interests is linked to the disabling regulatory environment with cumbersome and elaborate procedures for registration and restrictions on what constitutes allowable advocacy activities; their desire to complement the work of government rather than questioning it; the difficulties in raising adequate resources from their membership; the inability to exercise internal democracy and accountability; the urban/elite orientation of most NGOs; and the donor funding that encourages a number of CSOs to emerge in order to clinch a share of the donor monies.
Nigeria
Rural development agencies
In many countries, the national and subnational government delegates rural development to agencies and support centers.
List of agencies
International Institute of Rural Reconstruction
Technical Centre for Agricultural and Rural Cooperation ACP-EU (CTA), an agricultural and rural information provider
USDA Rural Development, an agency of the United States Department of Agriculture
European Agricultural Fund for Rural Development, a part of the Common Agricultural Policy by the European Commission's Directorate-General for Agriculture and Rural Development
England Rural Development Programme by DEFRA
Agricultural Development & Training Society, India
Ministry of Villages, Development of Disadvantaged Regions, and Transmigration, Indonesia
Tipperary Institute, Ireland
Azerbaijan Rural Investment Project in Azerbaijan
Nimbkar Agricultural Research Institute, India
Philippine Rural Reconstruction Movement, Philippines
See also
Comilla Project, the first comprehensive rural development project in developing countries
Development studies
District Rural Development Agencies (India)
International Fund for Agricultural Development (IFAD)
Regional development
RIGA Project
Rural flight
Rural sociology
Rural management
Urban development
References
External links
Transforming the Rural Nonfarm Economy: Opportunities and Threats in the Developing World Edited by Steven Haggblade, Peter B. R. Hazell, and Thomas Reardon (2007), Johns Hopkins University Press
CNN - For Rural Women, Land Means Hope, The George Foundation
Research on Agriculture and Rural Development from the Overseas Development Institute
European Network for Rural Development
Biotic

Biotic describes the living or once-living components of a community, for example organisms such as animals and plants.
Biotic may refer to:
Life, the condition of living organisms
Biology, the study of life
Biotic material, which is derived from living organisms
Biotic components in ecology
Biotic potential, an organism's reproductive capacity
Biotic community, all the interacting organisms living together in a specific habitat
Biotic energy, a vital force theorized by biochemist Benjamin Moore
Biotic Baking Brigade, an unofficial group of pie-throwing activists
See also
Abiotic
Antibiotics are agents that either kill bacteria or inhibit their growth
Prebiotics are non-digestible food ingredients that stimulate the growth or activity of bacteria in the digestive system
Probiotics consist of a live culture of bacteria that inhibit or interfere with colonization by microbial pathogens
Synbiotics refer to nutritional supplements combining probiotics and prebiotics
Variable renewable energy

Variable renewable energy (VRE) or intermittent renewable energy sources (IRES) are renewable energy sources that are not dispatchable due to their fluctuating nature, such as wind power and solar power, as opposed to controllable renewable energy sources, such as dammed hydroelectricity or bioenergy, or relatively constant sources, such as geothermal power.
The use of small amounts of intermittent power has little effect on grid operations. Using larger amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure.
Options to absorb large shares of variable energy into the grid include using storage, improved interconnection between different variable sources to smooth out supply, using dispatchable energy sources such as hydroelectricity and having overcapacity, so that sufficient energy is produced even when weather is less favourable. More connections between the energy sector and the building, transport and industrial sectors may also help.
Background and terminology
The penetration of intermittent renewables in most power grids is low: global electricity generation in 2021 was 7% wind and 4% solar. However, in 2021 Denmark, Luxembourg and Uruguay generated over 40% of their electricity from wind and solar. Characteristics of variable renewables include their unpredictability, variability, and low operating costs. These, along with renewables typically being asynchronous generators, provide a challenge to grid operators, who must make sure supply and demand are matched. Solutions include energy storage, demand response, availability of overcapacity and sector coupling. Smaller isolated grids may be less tolerant to high levels of penetration.
Matching power demand to supply is not a problem specific to intermittent power sources. Existing power grids already contain elements of uncertainty including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power.
Several key terms are useful for understanding the issue of intermittent power sources. These terms are not standardized, and variations may be used. Most of these terms also apply to traditional power plants.
Intermittency or variability is the extent to which a power source fluctuates. This has two aspects: a predictable variability, such as the day-night cycle, and an unpredictable part (imperfect local weather forecasting). The term intermittent can be used to refer to the unpredictable part, with variable then referring to the predictable part.
Dispatchability is the ability of a given power source to increase and decrease output quickly on demand. The concept is distinct from intermittency; dispatchability is one of several ways system operators match supply (generator's output) to system demand (technical loads).
Penetration is the amount of electricity generated from a particular source as a percentage of annual consumption.
Nominal power or nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source's output under ideal conditions, such as maximum usable wind or high sun on a clear summer day.
Capacity factor, average capacity factor, or load factor is the ratio of actual electrical generation over a given period of time, usually a year, to the maximum possible generation in that period. In other words, it is the ratio between how much electricity a plant actually produced and how much it would have produced had it run at its nameplate capacity for the entire period (a worked sketch follows this list).
Firm capacity or firm power is "guaranteed by the supplier to be available at all times during a period covered by a commitment".
Capacity credit: the amount of conventional (dispatchable) generation power that can be potentially removed from the system while keeping the reliability, usually expressed as a percentage of the nominal power.
Foreseeability or predictability is how accurately the operator can anticipate the generation: for example tidal power varies with the tides but is completely foreseeable because the orbit of the moon can be predicted exactly, and improved weather forecasts can make wind power more predictable.
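As a rough illustration of the capacity factor definition above, the following minimal Python sketch divides metered annual output by the output the plant would have produced at nameplate capacity. The 100 MW plant size and 263,000 MWh annual generation are invented for the example, not data for any real installation.

```python
# Minimal sketch: capacity factor as actual generation divided by the generation
# the plant would have produced running at nameplate capacity for the whole period.
# All numbers are illustrative assumptions, not data for a real plant.

HOURS_PER_YEAR = 8760

def capacity_factor(actual_generation_mwh: float, nameplate_mw: float,
                    hours: float = HOURS_PER_YEAR) -> float:
    """Ratio of actual generation to the maximum possible generation over the period."""
    max_possible_mwh = nameplate_mw * hours
    return actual_generation_mwh / max_possible_mwh

# Hypothetical 100 MW onshore wind farm producing 263,000 MWh in a year
print(f"Capacity factor: {capacity_factor(263_000, 100):.0%}")  # ~30%, within the 25-50% range cited for wind
```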
Sources
Dammed hydroelectricity, biomass and geothermal are dispatchable as each has a store of potential energy; wind and solar without storage can be decreased (curtailed) but are not dispatchable.
Wind power
Grid operators use day ahead forecasting to determine which of the available power sources to use the next day, and weather forecasting is used to predict the likely wind power and solar power output available. Although wind power forecasts have been used operationally for decades, the IEA is organizing international collaboration to further improve their accuracy.
Wind-generated power is a variable resource, and the amount of electricity produced at any given point in time by a given plant will depend on wind speeds, air density, and turbine characteristics, among other factors. If wind speed is too low then the wind turbines will not be able to make electricity, and if it is too high the turbines will have to be shut down to avoid damage. While the output from a single turbine can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over larger and larger areas the average power output becomes less variable.
Intermittence: Regions smaller than the synoptic scale (less than about 1,000 km across, roughly the size of an average country) mostly share the same weather and thus produce around the same wind power at the same time, unless local conditions favor special winds. Some studies show that wind farms spread over a geographically diverse area will as a whole rarely stop producing power altogether. This does not hold for smaller areas with uniform geography such as Ireland, Scotland and Denmark, which have several days per year with little wind power.
Capacity factor: Wind power typically has an annual capacity factor of 25–50%, with offshore wind outperforming onshore wind.
Dispatchability: Because wind power is not by itself dispatchable wind farms are sometimes built with storage.
Capacity credit: At low levels of penetration, the capacity credit of wind is about the same as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit percentage drops.
Variability: Site dependent. Sea breezes are much more constant than land breezes. Seasonal variability may reduce output by 50%.
Reliability: A wind farm has high technical reliability when the wind blows. That is, the output at any given time will only vary gradually due to falling wind speeds or storms, the latter necessitating shut downs. A typical wind farm is unlikely to have to shut down in less than half an hour at the extreme, whereas an equivalent-sized power station can fail totally instantaneously and without warning. The total shutdown of wind turbines is predictable via weather forecasting. The average availability of a wind turbine is 98%, and when a turbine fails or is shut down for maintenance it only affects a small percentage of the output of a large wind farm.
Predictability: Although wind is variable, it is also predictable in the short term. There is an 80% chance that wind output will change less than 10% in an hour and a 40% chance that it will change 10% or more in 5 hours.
Because wind power is generated by large numbers of small generators, individual failures do not have large impacts on power grids. This feature of wind has been referred to as resiliency.
Solar power
Intermittency inherently affects solar energy, as the production of renewable electricity from solar sources depends on the amount of sunlight at a given place and time. Solar output varies throughout the day and through the seasons, and is affected by dust, fog, cloud cover, frost or snow. Many of the seasonal factors are fairly predictable, and some solar thermal systems make use of heat storage to produce grid power for a full day.
Variability: In the absence of an energy storage system, solar does not produce power at night, little in bad weather and varies between seasons. In many countries, solar produces most energy in seasons with low wind availability and vice versa.
Capacity factor: Standard photovoltaic solar has an annual average capacity factor of 10–20%, while panels that move to track the sun reach capacity factors of up to 30%. Thermal solar parabolic trough plants with storage reach about 56%, and thermal solar power towers with storage about 73%.
The impact of intermittency of solar-generated electricity will depend on the correlation of generation with demand. For example, solar thermal power plants such as Nevada Solar One are somewhat matched to summer peak loads in areas with significant cooling demands, such as the south-western United States. Thermal energy storage systems like the small Spanish Gemasolar Thermosolar Plant can improve the match between solar supply and local consumption. The improved capacity factor using thermal storage represents a decrease in maximum capacity, and extends the total time the system generates power.
Run-of-the-river hydroelectricity
In many countries new large dams are no longer being built because of the environmental impact of reservoirs. Run-of-the-river projects have continued to be built. The absence of a reservoir results in both seasonal and annual variations in the electricity generated.
Tidal power
Tidal power is the most predictable of all the variable renewable energy sources. The tides reverse twice a day, but they are never intermittent; on the contrary, they are completely reliable.
Wave power
Waves are primarily created by wind, so the power available from waves tends to follow that available from wind but, due to the mass of the water, is less variable than wind power. Wind power is proportional to the cube of the wind speed, while wave power is proportional to the square of the wave height.
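Stated as proportionalities (a standard way of writing these relationships, added here for clarity rather than quoted from this article's sources), with $P$ the power, $\rho$ the air density, $A$ the rotor swept area, $v$ the wind speed and $H$ the wave height:

\[
P_{\text{wind}} \propto \rho A v^{3}, \qquad P_{\text{wave}} \propto H^{2}
\]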
Solutions for their integration
The displaced dispatchable generation could be coal, natural gas, biomass, nuclear, geothermal or storage hydro. Rather than starting and stopping nuclear or geothermal plants, it is cheaper to use them as constant base load power. Any power generated in excess of demand can displace heating fuels, be converted to storage or sold to another grid. Biofuels and conventional hydro can be saved for later, when intermittent sources are not generating power. Some forecast that "near-firm" renewables (solar and/or wind paired with batteries) will be cheaper than existing nuclear power by the late 2020s, and therefore argue that base load power will not be needed.
Alternatives to burning coal and natural gas which produce fewer greenhouse gases may eventually make fossil fuels a stranded asset that is left in the ground. Highly integrated grids favor flexibility and performance over cost, resulting in more plants that operate for fewer hours and lower capacity factors.
All sources of electrical power have some degree of variability, as do demand patterns, which routinely drive large swings in the amount of electricity that suppliers feed into the grid. Wherever possible, grid operation procedures are designed to match supply with demand at high levels of reliability, and the tools to influence supply and demand are well developed. The introduction of large amounts of highly variable power generation may require changes to existing procedures and additional investments.
A reliable renewable power supply can be achieved with backup or extra infrastructure and technology, using mixed renewables to produce electricity above the intermittent average, which may then be used to meet regular and unanticipated demand. Additionally, energy storage to cover intermittency shortfalls or emergencies can be part of a reliable power supply.
In practice, as the power output from wind varies, partially loaded conventional plants, which are already present to provide response and reserve, adjust their output to compensate. While low penetrations of intermittent power may use existing levels of response and spinning reserve, the larger overall variations at higher penetration levels will require additional reserves or other means of compensation.
Operational reserve
All managed grids already have existing operational and "spinning" reserve to compensate for existing uncertainties in the power grid. The addition of intermittent resources such as wind does not require 100% "back-up" because operating reserves and balancing requirements are calculated on a system-wide basis, and not dedicated to a specific generating plant.
Some gas or hydro power plants are partially loaded and then controlled to change output as demand changes, or to replace rapidly lost generation. The ability to change as demand changes is termed "response". The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve".
Generally thermal plants running as peaking plants will be less efficient than if they were running as base load. Hydroelectric facilities with storage capacity, such as the traditional dam configuration, may be operated as base load or peaking plants.
Grids can contract for grid battery plants, which provide immediately available power for an hour or so, which gives time for other generators to be started up in the event of a failure, and greatly reduces the amount of spinning reserve required.
Demand response
Demand response is a change in energy consumption to better align with supply. It can take the form of switching off loads or absorbing additional energy to correct supply/demand imbalances. Incentives have been widely created in the American, British and French systems for the use of these measures, such as favorable rates or capital cost assistance, encouraging consumers with large loads to take them offline whenever there is a shortage of capacity, or conversely to increase load when there is a surplus.
Certain types of load control allow the power company to turn loads off remotely if insufficient power is available. In France, large users such as CERN cut power usage as required by the system operator, EDF, encouraged by the EJP tariff.
Energy demand management refers to incentives to adjust use of electricity, such as higher rates during peak hours. Real-time variable electricity pricing can encourage users to adjust usage to take advantage of periods when power is cheaply available and avoid periods when it is more scarce and expensive. Some loads such as desalination plants, electric boilers and industrial refrigeration units, are able to store their output (water and heat). Several papers also concluded that Bitcoin mining loads would reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy power stations and therefore accelerate transition to sustainable energy. But others argue that Bitcoin mining can never be sustainable.
Instantaneous demand reduction. Most large systems also have a category of loads which instantly disconnect when there is a generation shortage, under some mutually beneficial contract. This can give instant load reductions or increases.
Storage
At times of low load where non-dispatchable output from wind and solar may be high, grid stability requires lowering the output of various dispatchable generating sources or even increasing controllable loads, possibly by using energy storage to time-shift output to times of higher demand. Such mechanisms can include:
Pumped storage hydropower is the most prevalent existing technology used, and can substantially improve the economics of wind power. The availability of hydropower sites suitable for storage will vary from grid to grid. Typical round trip efficiency is 80%.
Traditional lithium-ion is the most common type used for grid-scale battery storage. Rechargeable flow batteries can serve as a large-capacity, rapid-response storage medium. Hydrogen can be created through electrolysis and stored for later use.
Flywheel energy storage systems have some advantages over chemical batteries. Along with substantial durability which allows them to be cycled frequently without noticeable life reduction, they also have very fast response and ramp rates. They can go from full discharge to full charge within a few seconds. They can be manufactured using non-toxic and environmentally friendly materials, easily recyclable once the service life is over.
Thermal energy storage stores heat. Stored heat can be used directly for heating needs or converted into electricity. In the context of a CHP plant, a heat store can serve as functional electricity storage at comparably low cost. Ice storage air conditioning: ice can be stored interseasonally and used as a source of air conditioning during periods of high demand. Present systems only need to store ice for a few hours but are well developed.
Storage of electrical energy results in some lost energy because storage and retrieval are not perfectly efficient. Storage also requires capital investment and space for storage facilities.
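As a minimal sketch of those losses, assuming the roughly 80% round-trip efficiency cited above for pumped storage (the surplus figure is invented for illustration):

```python
# Illustrative only: energy recovered from storage after one charge/discharge cycle.
def recovered_energy(stored_mwh: float, round_trip_efficiency: float = 0.80) -> float:
    """Energy delivered back to the grid for a given round-trip efficiency."""
    return stored_mwh * round_trip_efficiency

surplus_mwh = 10.0  # assumed surplus wind/solar energy sent to storage
delivered_mwh = recovered_energy(surplus_mwh)
print(f"{surplus_mwh} MWh stored -> {delivered_mwh} MWh delivered, "
      f"{surplus_mwh - delivered_mwh:.1f} MWh lost in the round trip")
```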
Geographic diversity and complementing technologies
The variability of production from a single wind turbine can be high. Combining any additional number of turbines, for example, in a wind farm, results in lower statistical variation, as long as the correlation between the output of each turbine is imperfect, and the correlations are always imperfect due to the distance between each turbine. Similarly, geographically distant wind turbines or wind farms have lower correlations, reducing overall variability. Since wind power is dependent on weather systems, there is a limit to the benefit of this geographic diversity for any power system.
Multiple wind farms spread over a wide geographic area and gridded together produce power more constantly and with less variability than smaller installations. Wind output can be predicted with some degree of confidence using weather forecasts, especially from large numbers of turbines/farms. The ability to predict wind output is expected to increase over time as data is collected, especially from newer facilities.
Electricity produced from solar energy tends to counterbalance the fluctuating supplies generated from wind. Normally it is windiest at night and during cloudy or stormy weather, and there is more sunshine on clear days with less wind. Moreover, wind energy often peaks in the winter season, whereas solar energy peaks in the summer season; the combination of wind and solar reduces the need for dispatchable backup power.
In some locations, electricity demand may have a high correlation with wind output, particularly in locations where cold temperatures drive electric consumption, as cold air is denser and carries more energy.
The allowable penetration may be increased with further investment in standby generation. For instance, some days could produce 80% of power from intermittent wind, while on the many windless days 80% could instead come from dispatchable sources such as natural gas, biomass and hydro.
Areas with existing high levels of hydroelectric generation may ramp up or down to incorporate substantial amounts of wind. Norway, Brazil, and Manitoba all have high levels of hydroelectric generation, Quebec produces over 90% of its electricity from hydropower, and Hydro-Québec is the largest hydropower producer in the world. The U.S. Pacific Northwest has been identified as another region where wind energy is complemented well by existing hydropower. Storage capacity in hydropower facilities will be limited by size of reservoir, and environmental and other considerations.
Connecting grid internationally
It is often feasible to export energy to neighboring grids at times of surplus, and import energy when needed. This practice is common in Europe and between the US and Canada. Integration with other grids can lower the effective concentration of variable power: for instance, Denmark's high penetration of VRE, in the context of the German/Dutch/Scandinavian grids with which it has interconnections, is considerably lower as a proportion of the total system. Hydroelectricity that compensates for variability can be used across countries.
The capacity of power transmission infrastructure may have to be substantially upgraded to support export/import plans. Some energy is lost in transmission. The economic value of exporting variable power depends in part on the ability of the exporting grid to provide the importing grid with useful power at useful times for an attractive price.
Sector coupling
Demand and generation can be better matched when sectors such as mobility, heat and gas are coupled with the power system. The electric vehicle market is for instance expected to become the largest source of storage capacity. This may be a more expensive option appropriate for high penetration of variable renewables, compared to other sources of flexibility. The International Energy Agency says that sector coupling is needed to compensate for the mismatch between seasonal demand and supply.
Electric vehicles can be charged during periods of low demand and high production, and in some places send power back from the vehicle-to-grid.
Penetration
Penetration refers to the proportion of a primary energy (PE) source in an electric power system, expressed as a percentage. There are several methods of calculation, yielding different penetration figures (compared in the sketch after this list). The penetration can be calculated either as:
the nominal capacity (installed power) of a PE source divided by the peak load within an electric power system; or
the nominal capacity (installed power) of a PE source divided by the total capacity of the electric power system; or
the electrical energy generated by a PE source in a given period, divided by the demand of the electric power system in this period.
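The three measures above can give quite different figures for the same system. The following minimal Python sketch compares them; all numbers describe a hypothetical system and are assumptions made only to illustrate the difference between the capacity-based and energy-based definitions.

```python
# Illustrative comparison of the three penetration measures described above.
# All figures are assumptions for a hypothetical power system, not real data.

wind_nameplate_mw   = 4_000    # installed wind capacity
system_peak_load_mw = 10_000   # annual peak demand
system_capacity_mw  = 14_000   # total installed generating capacity
wind_energy_gwh     = 10_500   # wind generation over the year
system_demand_gwh   = 60_000   # electricity demand over the year

penetration_vs_peak     = wind_nameplate_mw / system_peak_load_mw
penetration_vs_capacity = wind_nameplate_mw / system_capacity_mw
penetration_vs_energy   = wind_energy_gwh / system_demand_gwh

print(f"Capacity vs peak load: {penetration_vs_peak:.0%}")            # 40%
print(f"Capacity vs total capacity: {penetration_vs_capacity:.0%}")   # ~29%
print(f"Energy share of demand: {penetration_vs_energy:.1%}")         # 17.5%
```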
The level of penetration of intermittent variable sources is significant for the following reasons:
Power grids with significant amounts of dispatchable pumped storage, hydropower with reservoir or pondage or other peaking power plants such as natural gas-fired power plants are capable of accommodating fluctuations from intermittent power more easily.
Relatively small electric power systems without strong interconnection (such as remote islands) may retain some existing diesel generators for flexibility, while consuming less fuel, until cleaner energy sources or storage such as pumped hydro or batteries become cost-effective.
In the early 2020s wind and solar produce 10% of the world's electricity, but supply in the 40-55% penetration range has already been implemented in several systems, with over 65% planned for the UK by 2030.
There is no generally accepted maximum level of penetration, as each system's capacity to compensate for intermittency differs, and the systems themselves will change over time. Discussion of acceptable or unacceptable penetration figures should be treated and used with caution, as the relevance or significance will be highly dependent on local factors, grid structure and management, and existing generation capacity.
For most systems worldwide, existing penetration levels are significantly lower than practical or theoretical maximums.
Maximum penetration limits
Maximum penetration of combined wind and solar is estimated at around 70% to 90% without regional aggregation, demand management or storage; and up to 94% with 12 hours of storage. Economic efficiency and cost considerations are more likely to dominate as critical factors; technical solutions may allow higher penetration levels to be considered in future, particularly if cost considerations are secondary.
Economic impacts of variability
Estimates of the cost of wind and solar energy may include estimates of the "external" costs of wind and solar variability, or be limited to the cost of production. All electrical plant has costs that are separate from the cost of production, including, for example, the cost of any necessary transmission capacity or reserve capacity in case of loss of generating capacity. Many types of generation, particularly fossil fuel derived, will have cost externalities such as pollution, greenhouse gas emission, and habitat destruction, which are generally not directly accounted for.
The magnitude of the economic impacts is debated and will vary by location, but is expected to rise with higher penetration levels. At low penetration levels, costs such as operating reserve and balancing costs are believed to be insignificant.
Intermittency may introduce additional costs that are distinct from or of a different magnitude than for traditional generation types. These may include:
Transmission capacity: transmission capacity may be more expensive than for nuclear and coal generating capacity due to lower load factors. Transmission capacity will generally be sized to projected peak output, but average output for wind will be significantly lower, raising the cost per unit of energy actually transmitted (illustrated in the sketch after this list). However, transmission costs are a low fraction of total energy costs.
Additional operating reserve: if additional wind and solar does not correspond to demand patterns, additional operating reserve may be required compared to other generating types, however this does not result in higher capital costs for additional plants since this is merely existing plants running at low output - spinning reserve. Contrary to statements that all wind must be backed by an equal amount of "back-up capacity", intermittent generators contribute to base capacity "as long as there is some probability of output during peak periods". Back-up capacity is not attributed to individual generators, as back-up or operating reserve "only have meaning at the system level".
Balancing costs: to maintain grid stability, some additional costs may be incurred for balancing of load with demand. Although improvements to grid balancing can be costly, they can lead to long term savings.
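The transmission item above refers to a sketch; the minimal Python example below divides an assumed annualized cost of a line sized to peak output by the energy actually transmitted, showing how a lower load factor raises the per-MWh cost. All cost and capacity figures are illustrative assumptions, not sourced estimates.

```python
# Illustrative only: per-MWh transmission cost rises as the line's load factor falls.
def cost_per_mwh(annual_line_cost: float, line_capacity_mw: float,
                 load_factor: float, hours: float = 8760) -> float:
    """Annualized transmission cost divided by energy actually transmitted."""
    energy_mwh = line_capacity_mw * load_factor * hours
    return annual_line_cost / energy_mwh

annual_cost = 20_000_000  # assumed annualized cost of a line sized to peak output
capacity_mw = 1_000       # assumed line capacity

for lf in (0.90, 0.35):   # e.g. a baseload plant vs. a wind farm
    print(f"load factor {lf:.0%}: ${cost_per_mwh(annual_cost, capacity_mw, lf):.2f}/MWh")
```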
In many countries for many types of variable renewable energy, from time to time the government invites companies to tender sealed bids to construct a certain capacity of solar power to connect to certain electricity substations. By accepting the lowest bid the government commits to buy at that price per kWh for a fixed number of years, or up to a certain total amount of power. This provides certainty for investors against highly volatile wholesale electricity prices. However they may still risk exchange rate volatility if they borrowed in foreign currency.
Examples by country
Great Britain
The operator of the British electricity system has said that it will be capable of operating zero-carbon by 2025, whenever there is enough renewable generation, and may be carbon negative by 2033. The company, National Grid Electricity System Operator, states that new products and services will help reduce the overall cost of operating the system.
Germany
In countries with a considerable amount of renewable energy, solar energy causes price drops around noon every day, as PV production coincides with the higher demand during these hours. Data for two weeks in 2022 in Germany, where renewable energy has a share of over 40%, illustrate this pattern. Prices also drop every night and at weekends due to low demand. In hours without PV and wind power, electricity prices rise. This can lead to demand-side adjustments. While industry is exposed to hourly prices, most private households still pay a fixed tariff. With smart meters, private consumers can also be motivated, for example, to charge an electric car when enough renewable energy is available and prices are cheap.
Steerable flexibility in electricity production is essential to back up variable energy sources. The German example shows that pumped hydro storage, gas plants and hard coal can ramp up quickly, while lignite varies on a daily basis. Nuclear power and biomass can in theory adjust to a certain extent; in practice, however, the incentives do not yet appear high enough.
See also
Combined cycle hydrogen power plant
Cost of electricity by source
Energy security and renewable technology
Ground source heat pump
List of energy storage power plants
Spark spread: calculating the cost of back up
References
Further reading
External links
Grid Integration of Wind Energy
Electric power distribution
Energy storage
Renewable energy
Familialism

Familialism or familism is a philosophy that puts priority on the family. The term familialism has been used specifically to advocate a welfare system in which families, rather than the government, are presumed to take responsibility for the care of their members. The term familism relates more to family values, which can manifest as prioritizing the needs of the family above those of individuals. Nevertheless, the two terms are often used interchangeably.
In the Western world, familialism views the nuclear family of one father, one mother, and their child or children as the central and primary social unit of human ordering and the principal unit of a functioning society and civilization. In Asia, aged parents living with the family is often viewed as traditional. It has been suggested that Asian familialism became more fixed after encounters with Europeans following the Age of Discovery. In Japan, drafts of a civil code based on French law were rejected after critics argued that "civil law will destroy filial piety".
Regarding familism as a fertility factor, there is limited support among Hispanics for an association between increased familism, in the sense of prioritizing the needs of the family above those of individuals, and an increased number of children. On the other hand, the fertility impact is unknown for systems in which the majority of economic and caring responsibilities rest on the family (as in Southern Europe), as opposed to defamilialized systems in which welfare and caring responsibilities are largely supported by the state (as in the Nordic countries).
Western familism
As noted above, Western familialism treats the nuclear family as the central social unit; accordingly, this unit is also the basis of a multi-generational extended family, which is embedded in socially as well as genetically inter-related communities and nations, and ultimately in the whole human family past, present and future. As such, Western familialism usually opposes other social forms and models chosen as alternatives (e.g. single-parent or LGBT parenting).
Historical and philosophical background of Western familism
Ancient political familialism
"Family as a model for the state" as an idea in political philosophy originated in the Socratic-Platonic principle of macrocosm/microcosm, which identifies recurrent patterns at larger and smaller scales of the cosmos, including the social world. In particular, monarchists have argued that the state mirrors the patriarchal family, with the subjects obeying the king as children obey their father, which in turn helps to justify monarchical or aristocratic rule.
Plutarch (46–120 CE) records a laconic saying of the Dorians attributed to Lycurgus (8th century BCE). Asked why he did not establish a democracy in Lacedaemon (Sparta), Lycurgus responded, "Begin, friend, and set it up in your family". Plutarch claims that Spartan government resembled the family in its form.
Aristotle (384–322 BCE) argued that the schema of authority and subordination exists in the whole of nature. He gave examples such as man and domestic animal, man and wife, and slaves and children. Further, he claimed that it is found in any animal, such as the relationship he believed to exist between soul and body, of "which the former is by nature the ruling and the latter the subject factor". Aristotle further asserted that "the government of a household is a monarchy since every house is governed by a single ruler". Later, he said that husbands exercise a republican government over their wives and monarchical government over their children, and that they exhibit political office over slaves and royal office over the family in general.
Arius Didymus (1st century CE), cited centuries later by Stobaeus, wrote that "A primary kind of association (politeia) is the legal union of a man and woman for begetting children and for sharing life". From the collection of households a village is formed and from villages a city, "So just as the household yields for the city the seeds of its formation, thus it yields the constitution (politeia)". Further, Didymus claims that "Connected with the house is a pattern of monarchy, of aristocracy and of democracy. The relationship of parents to children is monarchic, of husbands to wives aristocratic, of children to one another democratic".
Modern political familialism
The family is in the center of the social philosophy of the early Chicago School of Economics. It is a recurring point of reference in the economic and social theories of its founder Frank Knight. Knight positions his notion of the family in contrast to the dominant notion of individualism:
"Our 'individualism' is really 'familism'. ... The family is still the unit in production and consumption."
Some modern thinkers, such as Louis de Bonald, have written as if the family were a miniature state. In his analysis of the family relationships of father, mother and child, Bonald related these to the functions of a state: the father is the power, the mother the minister, and the child the subject. As the father is "active and strong" and the child is "passive or weak", the mother is the "median term between the two extremes of this continuous proportion". Like many apologists for political familialism, De Bonald justified his analysis on biblical authority:
"(It) calls man the reason, the head, the power of woman: Vir caput est mulieris (the man is head of the woman) says St. Paul. It calls woman the helper or minister of man: "Let us make man," says Genesis, "a helper similar to him." It calls the child a subject, since it tells it, in a thousand places, to obey its parents".
Bonald also saw divorce as the first stage of disorder in the state, insisting that the deconstitution of the family brings about the deconstitution of the state, with the kyklos not far behind.
Erik von Kuehnelt-Leddihn also connects family and monarchy:
"Due to its inherent patriarchalism, monarchy fits organically into the ecclesiastic and familistic pattern of a Christian society. (Compare the teaching of Pope Leo XIII: 'Likewise the powers of fathers of families preserves expressly a certain image and form of the authority which is in God, from which all paternity in heaven and earth receives its name—Eph 3.15') The relationship between the King as 'father of the fatherland' and the people is one of mutual love".
George Lakoff has more recently claimed that the left-right distinction in politics reflects different ideals of the family: for the right wing, the ideal is a patriarchal family based upon absolutist morality; for the left wing, it is an unconditionally loving family. As a result, Lakoff argues, both sides find each other's views not only immoral but incomprehensible, since they appear to violate each side's deeply held beliefs about personal morality in the sphere of the family.
Criticism of Western familism
Criticism in practice
Familialism has been challenged as historically and sociologically inadequate to describe the complexity of actual family relations. In modern American society in which the male head of the household can no longer be guaranteed a wage suitable to support a family, 1950s-style familialism has been criticized as counterproductive to family formation and fertility.
Imposition of Western-style familialism on other cultures has been disruptive to traditional non-nuclear family forms such as matrilineality.
The rhetoric of "family values" has been used to demonize single mothers and LGBT couples, who allegedly lack them. This has a disproportionate impact on the African-American community, as African-American women are more likely to be single mothers.
Criticism from the LGBT community
LGBT communities tend to accept and support the diversity of intimate human associations, partially as a result of their historically ostracized status from nuclear family structures. From its inception in the late 1960s, the gay rights movement has asserted every individual's right to create and define their own relationships and family in the way most conducive to the safety, happiness, and self-actualization of each individual.
For example, the glossary of LGBT terms of Family Pride Canada, a Canadian organization advocating for family equality for LGBT parents, defines familialism as:
Criticism in psychology
Normalization of the nuclear family as the only healthy environment for children has been criticized by psychologists.
In a peer-reviewed study from 2007, adoptees have been shown to display self-esteem comparable with non-adoptees.
In a meta-study from 2012, "quality of parenting and parent–child relationships" is described as the most important factor in child development. The study also found that "Dimensions of family structure including such factors as divorce, single parenthood, and the parents' sexual orientation and biological relatedness between parents and children are of little or no predictive importance".
Criticism in psychoanalysis
Gilles Deleuze and Félix Guattari, in their now-classic 1972 book Anti-Oedipus, argued that psychiatry and psychoanalysis, since their inception, have been affected by an incurable familialism, which is their ordinary bed and board. Psychoanalysis has never escaped from this, having remained captive to an unrepentant familialism.
Michel Foucault wrote that through familialism psychoanalysis completed and perfected what the psychiatry of 19th century insane asylums had set out to do and that it enforced the power structures of bourgeois society and its values: Family-Children (paternal authority), Fault-Punishment (immediate justice), Madness-Disorder (social and moral order). Deleuze and Guattari added that "the familialism inherent in psychoanalysis doesn't so much destroy classical psychiatry as shine forth as the latter's crowning achievement", and that since the 19th century, the study of mental illnesses and madness has remained the prisoner of the familial postulate and its correlates.
Through familialism, and the psychoanalysis based on it, guilt is inscribed upon the family's smallest member, the child, and parental authority is absolved.
According to Deleuze and Guattari, among the psychiatrists only Karl Jaspers and Ronald Laing, have escaped familialism. This was not the case of the culturalist psychoanalysts, which, despite their conflict with orthodox psychoanalysts, had a "stubborn maintenance of a familialist perspective", still speaking "the same language of a familialized social realm".
Criticism in Marxism
In The Communist Manifesto of 1848, Karl Marx describes how the bourgeois or monogamous two-parent family has as its foundation capital and private gain. Marx also pointed out that this family existed only in its full form among the bourgeoisie or upper classes, and was nearly absent among the exploited proletariat or working class. He felt that the disappearance of capital would also result in the disappearance of monogamous marriage and of the exploitation of the working class. He explains how family ties among the proletarians are divided by the capitalist system, and how their children are used simply as instruments of labour. This is partly due to child labour laws being less strict at the time in Western society. In Marx's view, the bourgeois husband sees his wife as an instrument of labour, and therefore to be exploited, as instruments of production (or labour) exist under capitalism for this purpose.
In The Origin of the Family, Private Property, and the State, published in 1884, Frederick Engels was also extremely critical of the monogamous two parent family and viewed it as one of many institutions for the division of labour in capitalist society. In his chapter "The Monogamous Family", Engels traces monogamous marriage back to the Greeks, who viewed the practice's sole aim as making "the man supreme in the family, and to propagate, as the future heirs to his wealth, children indisputably his own". He felt that the monogamous marriage made explicit the subjugation of one sex by the other throughout history, and that the first division of labour "is that between man and woman for the propagation of children". Engels views the monogamous two-parent family as a microcosm of society, stating "It is the cellular form of civilized society, in which the nature of the oppositions and contradictions fully active in that society can be already studied".
Engels pointed out disparities between the legal recognition of a marriage, and the reality of it. A legal marriage is entered into freely by both partners, and the law states both partners must have common ground in rights and duties. There are other factors that the bureaucratic legal system cannot take into account however, since it is "not the law's business". These may include differences in the class position of both parties and pressure on them from outside to bear children.
For Engels, the obligation of the husband in the traditional two-parent familial structure is to earn a living and support his family. This gives him a position of supremacy. This role is given without a particular need for special legal titles or privileges. Within the family, he represents the bourgeois, and the wife represents the proletariat. Engels, on the other hand, equates the position of the wife in marriage with one of exploitation and prostitution, as she sells her body "once and for all into slavery".
More recent criticism from a Marxist perspective comes from Lisa Healy in her 2009 essay "Capitalism and the Transforming Family Unit: A Marxist Analysis". Her essay examines the single-parent family, defining it as one parent, often a woman, living with one or more usually unmarried children. The stigmatization of lone parents is tied to their low rate of participation in the workforce, and a pattern of dependency on welfare. This results in less significant contributions to the capitalist system on their part. This stigmatization is reinforced by the state, such as through insufficient welfare payments. This exposes capitalist interests that are inherent to their society and which favour two-parent families.
In politics
Australia
The Family First Party originally contested the 2002 South Australian state election, where former Assemblies of God pastor Andrew Evans won one of the eleven seats in the 22-seat South Australian Legislative Council on 4 percent of the statewide vote. The party made their federal debut at the 2004 general election, electing Steve Fielding on 2 percent of the Victorian vote in the Australian Senate, out of six Victorian senate seats up for election. Both MPs were able to be elected with Australia's Single Transferable Vote and Group voting ticket system in the upper house. The party opposes abortion, euthanasia, harm reduction, gay adoptions, in-vitro fertilisation (IVF) for gay couples and gay civil unions. It supports drug prevention, zero tolerance for law breaking, rehabilitation, and avoidance of all sexual behaviors it considers deviant.
In the 2007 Australian election, Family First came under fire for giving preferences in some areas to the Liberty and Democracy Party, a libertarian party that supports legalization of incest, gay marriage, and drug use.
United Kingdom
Family values was a recurrent theme in the Conservative government of John Major. His Back to Basics initiative became the subject of ridicule after the party was affected by a series of sleaze scandals. John Major himself, the architect of the policy, was subsequently found to have had an affair with Edwina Currie. Family values were revived under David Cameron, being a recurring theme in his speeches on social responsibility and related policies, demonstrated by his Marriage Tax allowance policy which would provide tax breaks for married couples.
New Zealand
Family values politics reached their apex under the social conservative administration of the Third National Government (1975–84), widely criticised for its populist and social conservative views about abortion and homosexuality. Under the Fourth Labour Government (1984–90), homosexuality was decriminalised and abortion access became easier to obtain.
In the early 1990s, New Zealand reformed its electoral system, replacing the first-past-the-post electoral system with the Mixed Member Proportional system. This provided a particular impetus to the formation of separatist conservative Christian political parties, disgruntled at the Fourth National Government (1990–99), which seemed to embrace bipartisan social liberalism to offset Labour's earlier appeal to social liberal voters. Such parties tried to recruit conservative Christian voters to blunt social liberal legislative reforms, but had meagre success in doing so. During the tenure of Fifth Labour Government (1999–2008), prostitution law reform (2003), same-sex civil unions (2005) and the repeal of laws that permitted parental corporal punishment of children (2007) became law.
At present, Family First New Zealand, a 'non-partisan' social conservative lobby group, operates to try to forestall further legislative reforms such as same-sex marriage and same-sex adoption. In 2005, conservative Christians tried to pre-emptively ban same-sex marriage in New Zealand through alterations to the New Zealand Bill of Rights Act 1990, but the bill failed 47 votes to 73 at its first reading. At most, the only durable success such organisations can claim in New Zealand is the continuing criminality of cannabis possession and use under New Zealand's Misuse of Drugs Act 1975.
Russia
Federal law of Russian Federation no. 436-FZ of 2010-12-23 "On Protecting Children from Information Harmful to Their Health and Development" lists information "negating family values and forming disrespect to parents and/or other family members" as information not suitable for children ("18+" rating). It does not contain any separate definition of family values.
Singapore
Singapore's main political party, the People's Action Party, promotes family values intensively. Former Prime Minister Lee Hsien Loong said that "The family is the basic building block of our society. [...] And by "family" in Singapore, we mean one man, one woman, marrying, having children and bringing up children within that framework of a stable family unit."
One MP has described the nature of family values in the city-state as "almost Victorian in nature". The government is opposed to same-sex adoption. The Singaporean justice system uses corporal punishment.
United States
The use of family values as a political term dates back to 1976, when it appeared in the Republican Party platform. The phrase became more widespread after Vice President Dan Quayle used it in a speech at the 1992 Republican National Convention. Quayle had also launched a national controversy when he criticized the television program Murphy Brown for a story line that depicted the title character becoming a single mother by choice, citing it as an example of how popular culture contributes to a "poverty of values", and saying: "[i]t doesn't help matters when primetime TV has Murphy Brown—a character who supposedly epitomizes today's intelligent, highly paid, professional woman—mocking the importance of fathers, by bearing a child alone, and calling it just another 'lifestyle choice'". Quayle's remarks initiated widespread controversy, and have had a continuing effect on U.S. politics. Stephanie Coontz, a professor of family history and the author of several books and essays about the history of marriage, says that this brief remark by Quayle about Murphy Brown "kicked off more than a decade of outcries against the 'collapse of the family'".
In 1998, a Harris survey found that:
52% of women and 42% of men thought family values means "loving, taking care of, and supporting each other"
38% of women and 35% of men thought family values means "knowing right from wrong and having good values"
2% of women and 1% of men thought of family values in terms of the "traditional family"
The survey noted that 93% of all women thought that society should value all types of families (Harris did not publish the responses for men).
Republican Party
Since 1980, the Republican Party has used the issue of family values to attract socially conservative voters. While "family values" remains an amorphous concept, social conservatives usually understand the term to include some combination of the following principles (also referenced in the 2004 Republican Party platform):
opposition to sex outside of marriage
support for a traditional role for women in "the family"
opposition to same-sex marriage, homosexuality and gender transition
support for complementarianism
opposition to legalized induced abortion
support for abstinence-only sex education
support for policies said to protect children from obscenity and exploitation
Social and religious conservatives often use the term "family values" to promote conservative ideology that supports traditional morality or Christian values. Social conservatism in the United States is centered on the preservation of what adherents often call 'traditional' or 'family values'. Some American conservative Christians see their religion as the source of morality and consider the nuclear family an essential element in society. For example, "The American Family Association exists to motivate and equip citizens to change the culture to reflect Biblical truth and traditional family values." Such groups variously oppose abortion, pornography, masturbation, pre-marital sex, polygamy, homosexuality, certain aspects of feminism, cohabitation, separation of church and state, legalization of recreational drugs, and depictions of sexuality in the media.
Democratic Party
Although the term "family values" remains a core issue for the Republican Party, the Democratic Party has also used the term, though differing in its definition. In his acceptance speech at the 2004 Democratic National Convention, John Kerry said "it is time for those who talk about family values to start valuing families".
Other liberals have used the phrase to support such values as family planning, affordable child-care, and maternity leave. For example, groups such as People For the American Way, Planned Parenthood, and Parents and Friends of Lesbians and Gays have attempted to define the concept in a way that promotes the acceptance of single-parent families, same-sex monogamous relationships and marriage. This understanding of family values does not promote conservative morality, instead focusing on encouraging and supporting alternative family structures, access to contraception and abortion, increasing the minimum wage, sex education, childcare, and parent-friendly employment laws, which provide for maternity leave and leave for medical emergencies involving children.
While conservative sexual ethics focus on preventing premarital or non-procreative sex, liberal sexual ethics are typically directed rather towards consent, regardless of whether or not the partners are married.
Demographics
Population studies found that in 2004 and 2008, liberal-voting ("blue") states had lower rates of divorce and teenage pregnancy than conservative-voting ("red") states. June Carbone, author of Red Families vs. Blue Families, opines that the driving factor is that people in liberal states tend to wait longer before getting married.
A 2002 government survey found that 95% of adult Americans had premarital sex. This number had risen slightly from the 1950s, when it was nearly 90%. The median age of first premarital sex has dropped in that time from 20.4 to 17.6.
Christian right
The Christian right often promotes the term family values to refer to their version of familialism.
Focus on the Family is an American Christian conservative organization whose family values include adoption by married, opposite-sex parents and traditional gender roles. It opposes abortion, divorce, LGBT rights (particularly LGBT adoption and same-sex marriage), pornography, masturbation, and pre-marital sex. The Family Research Council is an example of a right-wing organization claiming to uphold traditional family values. Due to its use of virulent anti-gay rhetoric and opposition to civil rights for LGBT people, it has been classified as a hate group by the Southern Poverty Law Center.
See also
Nepotism, favoritism granted to relatives and friends without regard to merit
Nuclear family, a family group consisting of a pair of adults and their children
Natalism, a belief that promotes human reproduction
Extended family
Single parent
Family Coalition Party of British Columbia
Family Party of Germany
League of Polish Families
Nepal Pariwar Dal
New Reform Party of Ontario, founded as Family Coalition Party of Ontario
Party for Japanese Kokoro
The People of Family
We Are Family (Slovakia)
World Congress of Families
References
Further reading
Anne Revillard (2007). Stating Family Values and Women's Rights: Familialism and Feminism Within the French Republic. French Politics 5, 210–228.
Alberto Alesina; Paola Giuliano (2010). The Power of the Family. Journal of Economic Growth, vol. 15(2), 93–125.
Frederick Engels (1884). The Monogamous Family. The Origin of the Family, Private Property and the State, Chapter 2, Part 4. Retrieved 24 October 2013.
Carle C. Zimmerman (1947). Family and Civilization. The close and causal connections between the rise and fall of different types of families and the rise and fall of civilizations. Zimmerman traces the evolution of family structure from tribes and clans to extended and large nuclear families to the small nuclear families and broken families of today.
Family
Ideologies
Social ideologies
Political ideologies
Conservatism
Social conservatism
Censorship of LGBTQ issues
Biogeography
Biogeography is the study of the distribution of species and ecosystems in geographic space and through geological time. Organisms and biological communities often vary in a regular fashion along geographic gradients of latitude, elevation, isolation and habitat area. Phytogeography is the branch of biogeography that studies the distribution of plants. Zoogeography is the branch that studies the distribution of animals. Mycogeography is the branch that studies the distribution of fungi, such as mushrooms.
Knowledge of spatial variation in the numbers and types of organisms is as vital to us today as it was to our early human ancestors, as we adapt to heterogeneous but geographically predictable environments. Biogeography is an integrative field of inquiry that unites concepts and information from ecology, evolutionary biology, taxonomy, geology, physical geography, palaeontology, and climatology.
Modern biogeographic research combines information and ideas from many fields, from the physiological and ecological constraints on organismal dispersal to geological and climatological phenomena operating at global spatial scales and evolutionary time frames.
Ecological biogeography examines the short-term interactions between organisms and their habitats, while historical biogeography describes the distribution of broader groups of organisms over long, evolutionary timescales. Early scientists, beginning with Carl Linnaeus, contributed to the development of biogeography as a science.
The scientific theory of biogeography grows out of the work of Alexander von Humboldt (1769–1859), Francisco Jose de Caldas (1768–1816), Hewett Cottrell Watson (1804–1881), Alphonse de Candolle (1806–1893), Alfred Russel Wallace (1823–1913), Philip Lutley Sclater (1829–1913) and other biologists and explorers.
Introduction
The patterns of species distribution across geographical areas can usually be explained through a combination of historical factors such as: speciation, extinction, continental drift, and glaciation. Through observing the geographic distribution of species, we can see associated variations in sea level, river routes, habitat, and river capture. Additionally, this science considers the geographic constraints of landmass areas and isolation, as well as the available ecosystem energy supplies.
Over periods of ecological changes, biogeography includes the study of plant and animal species in: their past and/or present living refugium habitat; their interim living sites; and/or their survival locales. As writer David Quammen put it, "...biogeography does more than ask Which species? and Where. It also asks Why? and, what is sometimes more crucial, Why not?."
Modern biogeography often employs Geographic Information Systems (GIS) and mathematical models to understand the factors affecting organism distribution, to solve ecological problems with a spatial component, and to predict future trends in organism distribution.
Biogeography is most keenly observed on the world's islands. These habitats are often much more manageable areas of study because they are more condensed than larger ecosystems on the mainland. Islands are also ideal locations because they allow scientists to look at habitats that new invasive species have only recently colonized, and to observe how those species disperse throughout the island and change it. Scientists can then apply their understanding to similar but more complex mainland habitats. Islands are very diverse in their biomes, ranging from tropical to arctic climates. This diversity of habitat allows a wide range of species to be studied in different parts of the world.
One scientist who recognized the importance of these geographic locations was Charles Darwin, who remarked in his journal "The Zoology of Archipelagoes will be well worth examination". Two chapters in On the Origin of Species were devoted to geographical distribution.
History
18th century
The first discoveries that contributed to the development of biogeography as a science began in the mid-18th century, as Europeans explored the world and described the biodiversity of life. During the 18th century most views on the world were shaped around religion and, for many natural theologians, the Bible. Carl Linnaeus, in the mid-18th century, improved the classification of organisms through the exploration of undiscovered territories by his students and disciples. When he noticed that species were not as perpetual as he had believed, he developed the Mountain Explanation to account for the distribution of biodiversity: when Noah's ark landed on Mount Ararat and the waters receded, the animals dispersed throughout different elevations on the mountain. This showed different species in different climates, demonstrating that species were not constant. Linnaeus' findings set a basis for ecological biogeography. Through his strong beliefs in Christianity, he was inspired to classify the living world, which then gave way to additional accounts of secular views on geographical distribution. He argued that the structure of an animal was very closely related to its physical surroundings. This was important to Georges-Louis Buffon's rival theory of distribution.
Closely after Linnaeus, Georges-Louis Leclerc, Comte de Buffon observed shifts in climate and how species spread across the globe as a result. He was the first to see different groups of organisms in different regions of the world. Buffon saw similarities between some regions, which led him to believe that at one point continents were connected and then water separated them, causing differences in species. His hypotheses were described in his work, the 36-volume Histoire Naturelle, générale et particulière, in which he argued that varying geographical regions would have different forms of life. This was inspired by his observations comparing the Old and New World, as he determined distinct variations of species from the two regions. Buffon believed there was a single species creation event, and that different regions of the world were homes for varying species, which is an alternative view to that of Linnaeus. Buffon's law eventually became a principle of biogeography by explaining how similar environments were habitats for comparable types of organisms. Buffon also studied fossils, which led him to believe that the Earth was tens of thousands of years old, and that humans had not lived there long in comparison to the age of the Earth.
19th century
Following the period of exploration came the Age of Enlightenment in Europe, which attempted to explain the patterns of biodiversity observed by Buffon and Linnaeus. At the birth of the 19th century, Alexander von Humboldt, known as the "founder of plant geography", developed the concept of physique generale to demonstrate the unity of science and how species fit together. As one of the first to contribute empirical data to the science of biogeography through his travel as an explorer, he observed differences in climate and vegetation. The Earth was divided into regions which he defined as tropical, temperate, and arctic and within these regions there were similar forms of vegetation. This ultimately enabled him to create the isotherm, which allowed scientists to see patterns of life within different climates. He contributed his observations to findings of botanical geography by previous scientists, and sketched this description of both the biotic and abiotic features of the Earth in his book, Cosmos.
Augustin de Candolle contributed to the field of biogeography as he observed species competition and the several differences that influenced the discovery of the diversity of life. He was a Swiss botanist and created the first Laws of Botanical Nomenclature in his work, Prodromus. He discussed plant distribution and his theories eventually had a great impact on Charles Darwin, who was inspired to consider species adaptations and evolution after learning about botanical geography. De Candolle was the first to describe the differences between the small-scale and large-scale distribution patterns of organisms around the globe.
Several additional scientists contributed new theories to further develop the concept of biogeography. Charles Lyell developed the Theory of Uniformitarianism after studying fossils. This theory explained that the world was not shaped by a single catastrophic event but by continuous, gradual processes operating over long periods. Uniformitarianism also introduced the idea that the Earth was actually significantly older than was previously accepted. Using this knowledge, Lyell concluded that it was possible for species to go extinct. Since he noted that Earth's climate changes, he realized that species distribution must also change accordingly. Lyell argued that climate changes complemented vegetation changes, thus connecting the environmental surroundings to varying species. This largely influenced Charles Darwin in his development of the theory of evolution.
Charles Darwin was a natural theologist who studied around the world, and most importantly in the Galapagos Islands. Darwin introduced the idea of natural selection, as he theorized against previously accepted ideas that species were static or unchanging. His contributions to biogeography and the theory of evolution were different from those of other explorers of his time, because he developed a mechanism to describe the ways that species changed. His influential ideas include the development of theories regarding the struggle for existence and natural selection. Darwin's theories started a biological segment to biogeography and empirical studies, which enabled future scientists to develop ideas about the geographical distribution of organisms around the globe.
Alfred Russel Wallace studied the distribution of flora and fauna in the Amazon Basin and the Malay Archipelago in the mid-19th century. His research was essential to the further development of biogeography, and he was later nicknamed the "father of Biogeography". Wallace conducted fieldwork researching the habits, breeding and migration tendencies, and feeding behavior of thousands of species. He studied butterfly and bird distributions in comparison to the presence or absence of geographical barriers. His observations led him to conclude that the number of organisms present in a community was dependent on the amount of food resources in the particular habitat. Wallace believed species were dynamic by responding to biotic and abiotic factors. He and Philip Sclater saw biogeography as a source of support for the theory of evolution as they used Darwin's conclusion to explain how biogeography was similar to a record of species inheritance. Key findings, such as the sharp difference in fauna either side of the Wallace Line, and the sharp difference that existed between North and South America prior to their relatively recent faunal interchange, can only be understood in this light. Otherwise, the field of biogeography would be seen as a purely descriptive one.
20th and 21st century
In the 20th century, Alfred Wegener introduced the Theory of Continental Drift in 1912, though it was not widely accepted until the 1960s. This theory was revolutionary because it changed the way scientists thought about species and their distribution around the globe. The theory explained how continents were formerly joined in one large landmass, Pangea, and slowly drifted apart due to the movement of the plates below Earth's surface. The evidence for this theory lies in the geological similarities between varying locations around the globe, the geographic distribution of some fossils (including the mesosaurs) on various continents, and the jigsaw-puzzle shape of the landmasses on Earth. Though Wegener did not know the mechanism of continental drift, this contribution to the study of biogeography was significant in the way that it shed light on the importance of environmental and geographic similarities or differences as a result of climate and other pressures on the planet. Importantly, late in his career Wegener recognised that testing his theory required measurement of continental movement rather than inference from fossil species distributions.
In 1958 paleontologist Paul S. Martin published A Biogeography of Reptiles and Amphibians in the Gómez Farias Region, Tamaulipas, Mexico, which has been described as "ground-breaking" and "a classic treatise in historical biogeography". Martin applied several disciplines, including ecology, botany, climatology, geology, and Pleistocene dispersal routes, to examine the herpetofauna of a relatively small, largely undisturbed, but ecologically complex area situated on the threshold of the temperate–tropical (Nearctic and Neotropical) regions, including semiarid lowlands at 70 meters elevation and the northernmost cloud forest in the western hemisphere at over 2200 meters.
The publication of The Theory of Island Biogeography by Robert MacArthur and E.O. Wilson in 1967 showed that the species richness of an area could be predicted in terms of such factors as habitat area, immigration rate and extinction rate. This added to the long-standing interest in island biogeography. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
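The MacArthur–Wilson model treats island species richness as an equilibrium between an immigration rate that falls, and an extinction rate that rises, as more species accumulate. The short Python sketch below illustrates that balance; the linear rate functions and all parameter values (source-pool size, base immigration and extinction rates) are illustrative assumptions, not figures taken from the theory's authors.

```python
# Toy illustration of the MacArthur-Wilson equilibrium model of island biogeography.
# Linear rate functions and made-up parameters, for illustration only.

def immigration_rate(s, pool_size, i0):
    """New-species arrival rate, declining as the island fills up."""
    return i0 * (1.0 - s / pool_size)

def extinction_rate(s, e0):
    """Local extinction rate, rising with the number of resident species."""
    return e0 * s

def equilibrium_richness(pool_size, i0, e0):
    """Richness S* where immigration equals extinction:
    i0 * (1 - S/P) = e0 * S  =>  S* = i0 * P / (i0 + e0 * P)."""
    return i0 * pool_size / (i0 + e0 * pool_size)

if __name__ == "__main__":
    pool_size = 1000          # species in the mainland source pool (assumed)
    i0 = 5.0                  # immigration rate onto an empty island (assumed)
    for e0 in (0.05, 0.01):   # smaller islands ~ higher per-species extinction
        s_star = equilibrium_richness(pool_size, i0, e0)
        print(f"extinction coefficient {e0}: equilibrium richness ~ {s_star:.0f} species, "
              f"immigration {immigration_rate(s_star, pool_size, i0):.2f} "
              f"= extinction {extinction_rate(s_star, e0):.2f}")
```

Lower per-species extinction (a proxy for larger habitat area) and higher immigration (a proxy for lower isolation) both raise the predicted equilibrium richness, which is the sense in which the theory predicts richness from habitat area, immigration rate and extinction rate.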
Classic biogeography has been expanded by the development of molecular systematics, creating a new discipline known as phylogeography. This development allowed scientists to test theories about the origin and dispersal of populations, such as island endemics. For example, while classic biogeographers were able to speculate about the origins of species in the Hawaiian Islands, phylogeography allows them to test theories of relatedness between these populations and putative source populations on various continents, notably in Asia and North America.
Biogeography continues to be a point of study for many life sciences and geography students worldwide; however, it may be taught under broader titles within institutions, such as ecology or evolutionary biology.
In recent years, one of the most important and consequential developments in biogeography has been to show how multiple organisms, including mammals like monkeys and reptiles like squamates, overcame barriers such as large oceans that many biogeographers formerly believed were impossible to cross. See also Oceanic dispersal.
Modern applications
Biogeography now incorporates many different fields, including but not limited to physical geography, geology, botany and plant biology, zoology, general biology, and modelling. A biogeographer's main focus is on how the environment and humans affect the distribution of species, as well as other manifestations of life such as species or genetic diversity. Biogeography is being applied to biodiversity conservation and planning, projecting global environmental changes on species and biomes, projecting the spread of infectious diseases and invasive species, and supporting planning for the establishment of crops. Technological advances have allowed for the generation of a whole suite of predictor variables for biogeographic analysis, including satellite imaging and processing of the Earth. Two main satellite-based tools that are important within modern biogeography are the Global Production Efficiency Model (GLO-PEM) and Geographic Information Systems (GIS). GLO-PEM uses satellite imaging to give "repetitive, spatially contiguous, and time specific observations of vegetation". These observations are on a global scale. GIS can show certain processes on the Earth's surface, such as whale locations, sea surface temperatures, and bathymetry. Current scientists also use fossilized coral reefs to delve into the history of biogeography.
Two global information systems are either dedicated to, or have strong focus on, biogeography (in the form of the spatial location of observations of organisms), namely the Global Biodiversity Information Facility (GBIF: 2.57 billion species occurrence records reported as at August 2023) and, for marine species only, the Ocean Biodiversity Information System (OBIS, originally the Ocean Biogeographic Information System: 116 million species occurrence records reported as at August 2023), while at a national scale, similar compilations of species occurrence records also exist such as the U.K. National Biodiversity Network, the Atlas of Living Australia, and many others. In the case of the oceans, in 2017 Costello et al. analyzed the distribution of 65,000 species of marine animals and plants as then documented in OBIS, and used the results to distinguish 30 distinct marine realms, split between continental-shelf and offshore deep-sea areas.
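As a concrete illustration of how such occurrence records are retrieved in practice, the sketch below queries GBIF's public occurrence search web service for a single species and prints the coordinates of a few georeferenced records. It is a minimal example rather than an official client: the species name, record limit, and the subset of JSON fields read here are assumptions, and a real analysis would page through results and filter for data quality.

```python
# Minimal sketch: fetch a few occurrence records for one species from the
# public GBIF occurrence search API (https://api.gbif.org/v1/occurrence/search).
# Species name, limit, and the fields inspected are illustrative choices.
import requests

def fetch_occurrences(scientific_name, limit=20):
    resp = requests.get(
        "https://api.gbif.org/v1/occurrence/search",
        params={"scientificName": scientific_name, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for rec in fetch_occurrences("Puma concolor"):
        lat = rec.get("decimalLatitude")
        lon = rec.get("decimalLongitude")
        if lat is not None and lon is not None:  # keep only georeferenced records
            print(rec.get("species"), rec.get("country"), lat, lon)
```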
Since compilations of species occurrence records self-evidently cannot cover, with any completeness, areas that have received limited or no sampling, a number of methods have been developed to produce arguably more complete "predictive" or "modelled" distributions for species based on their associated environmental or other preferences (such as availability of food or other habitat requirements); this approach is known as either environmental niche modelling (ENM) or species distribution modelling (SDM). Depending on the reliability of the source data and the nature of the models employed (including the scales for which data are available), maps generated from such models may provide better representations of the "real" biogeographic distributions of individual species, groups of species, or biodiversity as a whole. It should also be borne in mind, however, that historic or recent human activities (such as hunting of great whales, or other human-induced exterminations) may have altered present-day species distributions from their potential "full" ecological footprint. Examples of predictive maps produced by niche modelling methods based on either GBIF (terrestrial) or OBIS (marine, plus some freshwater) data are the former Lifemapper project at the University of Kansas (now continued as a part of BiotaPhy) and AquaMaps, which as at 2023 contain modelled distributions for around 200,000 terrestrial species and 33,000 species of teleosts, marine mammals and invertebrates, respectively. One advantage of ENM/SDM is that, in addition to showing current (or even past) modelled distributions, insertion of changed parameters such as the anticipated effects of climate change can also be used to show potential changes in species distributions that may occur in the future under such scenarios.
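The core of most SDM workflows is a statistical model relating known occurrence points to environmental covariates, which is then projected across a grid to estimate habitat suitability. The sketch below fits a logistic regression to synthetic presence and background points described by two invented covariates (mean temperature and annual precipitation); it stands in for the GLMs, MaxEnt or machine-learning models used in practice, and none of the numbers correspond to a real species or climate layer.

```python
# Toy species distribution model: logistic regression on two synthetic
# environmental covariates. All data are simulated; a real SDM would use
# occurrence records (e.g. from GBIF/OBIS) and gridded climate layers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated presences: a species assumed to prefer warm, wet conditions.
presence = np.column_stack([
    rng.normal(24, 2, 200),      # mean temperature (deg C)
    rng.normal(1800, 300, 200),  # annual precipitation (mm)
])
# Background points drawn from a wider range of conditions.
background = np.column_stack([
    rng.uniform(0, 35, 1000),
    rng.uniform(100, 3000, 1000),
])

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Project "habitat suitability" onto a coarse temperature x precipitation grid.
temps = np.linspace(0, 35, 8)
precips = np.linspace(100, 3000, 6)
grid = np.array([[t, p] for t in temps for p in precips])
suitability = model.predict_proba(grid)[:, 1]
for (t, p), s in zip(grid, suitability):
    print(f"T={t:5.1f}C  P={p:6.0f}mm  suitability={s:.2f}")
```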
Paleobiogeography
Paleobiogeography goes one step further to include paleogeographic data and considerations of plate tectonics. Using molecular analyses and corroborated by fossils, it has been possible to demonstrate that perching birds evolved first in the region of Australia or the adjacent Antarctic (which at that time lay somewhat further north and had a temperate climate). From there, they spread to the other Gondwanan continents and Southeast Asia – the part of Laurasia then closest to their origin of dispersal – in the late Paleogene, before achieving a global distribution in the early Neogene. Not knowing that at the time of dispersal, the Indian Ocean was much narrower than it is today, and that South America was closer to the Antarctic, one would be hard pressed to explain the presence of many "ancient" lineages of perching birds in Africa, as well as the mainly South American distribution of the suboscines.
Paleobiogeography also helps constrain hypotheses on the timing of biogeographic events such as vicariance and geodispersal, and provides unique information on the formation of regional biotas. For example, data from species-level phylogenetic and biogeographic studies tell us that the Amazonian teleost fauna accumulated in increments over a period of tens of millions of years, principally by means of allopatric speciation, and in an arena extending over most of the area of tropical South America (Albert & Reis 2011). In other words, unlike some of the well-known insular faunas (Galapagos finches, Hawaiian drosophilid flies, African rift lake cichlids), the species-rich Amazonian ichthyofauna is not the result of recent adaptive radiations.
For freshwater organisms, landscapes are divided naturally into discrete drainage basins by watersheds, episodically isolated and reunited by erosional processes. In regions like the Amazon Basin (or more generally Greater Amazonia, the Amazon basin, Orinoco basin, and Guianas) with an exceptionally low (flat) topographic relief, the many waterways have had a highly reticulated history over geological time. In such a context, stream capture is an important factor affecting the evolution and distribution of freshwater organisms. Stream capture occurs when an upstream portion of one river drainage is diverted to the downstream portion of an adjacent basin. This can happen as a result of tectonic uplift (or subsidence), natural damming created by a landslide, or headward or lateral erosion of the watershed between adjacent basins.
Concepts and fields
Biogeography is a synthetic science, related to geography, biology, soil science, geology, climatology, ecology and evolution.
Some fundamental concepts in biogeography include:
allopatric speciation – the splitting of a species by evolution of geographically isolated populations
evolution – change in genetic composition of a population
extinction – disappearance of a species
dispersal – movement of populations away from their point of origin, related to migration
endemic areas
geodispersal – the erosion of barriers to biotic dispersal and gene flow, that permit range expansion and the merging of previously isolated biotas
range and distribution
vicariance – the formation of barriers to biotic dispersal and gene flow, that tend to subdivide species and biotas, leading to speciation and extinction; vicariance biogeography is the field that studies these patterns
Comparative biogeography
The study of comparative biogeography can follow two main lines of investigation:
Systematic biogeography, the study of biotic area relationships, their distribution, and hierarchical classification
Evolutionary biogeography, the proposal of evolutionary mechanisms responsible for organismal distributions. Possible mechanisms include widespread taxa disrupted by continental break-up or individual episodes of long-distance movement.
Biogeographic regionalisations
There are many types of biogeographic units used in biogeographic regionalisation schemes, as there are many criteria (species composition, physiognomy, ecological aspects) and hierarchization schemes: biogeographic realms (ecozones), bioregions (sensu stricto), ecoregions, zoogeographical regions, floristic regions, vegetation types, biomes, etc.
The terms biogeographic unit and biogeographic area can be used for these categories, regardless of rank.
In 2008, an International Code of Area Nomenclature was proposed for biogeography. It achieved limited success; some studies commented favorably on it, but others were much more critical, and it "has not yet gained a significant following". Similarly, a set of rules for paleobiogeography has achieved limited success. In 2000, Westermann suggested that the difficulties in getting formal nomenclatural rules established in this field might be related to "the curious fact that neither paleo- nor neobiogeographers are organized in any formal groupings or societies, nationally (so far as I know) or internationally — an exception among active disciplines."
See also
Allen's rule
Bergmann's rule
Biogeographic realm
Bibliography of biology
Biogeography-based optimization
Center of origin
Concepts and Techniques in Modern Geography
Distance decay
Ecological land classification
Geobiology
Macroecology
Marine ecoregions
Max Carl Wilhelm Weber
Miklos Udvardy
Phytochorion – Plant region
Sky island
Systematic and evolutionary biogeography association
Notes and references
Further reading
Albert, J. S., & R. E. Reis (2011). Historical Biogeography of Neotropical Freshwater Fishes. University of California Press, Berkeley. 424 pp.
Cox, C. B. (2001). The biogeographic regions reconsidered. Journal of Biogeography, 28: 511–523.
Ebach, M.C. (2015). Origins of biogeography. The role of biological classification in early plant and animal geography. Dordrecht: Springer, xiv + 173 pp.
Lieberman, B. S. (2001). "Paleobiogeography: using fossils to study global change, plate tectonics, and evolution". Kluwer Academic, Plenum Publishing.
Lomolino, M. V., & Brown, J. H. (2004). Foundations of biogeography: classic papers with commentaries. University of Chicago Press.
Millington, A., Blumler, M., & Schickhoff, U. (Eds.). (2011). The SAGE handbook of biogeography. Sage, London.
Nelson, G.J. (1978). From Candolle to Croizat: Comments on the history of biogeography. Journal of the History of Biology, 11: 269–305.
Udvardy, M. D. F. (1975). A classification of the biogeographical provinces of the world. IUCN Occasional Paper no. 18. Morges, Switzerland: IUCN.
External links
The International Biogeography Society
Systematic & Evolutionary Biogeographical Society (archived 5 December 2008)
Early Classics in Biogeography, Distribution, and Diversity Studies: To 1950
Early Classics in Biogeography, Distribution, and Diversity Studies: 1951–1975
Some Biogeographers, Evolutionists and Ecologists: Chrono-Biographical Sketches
Major journals
Journal of Biogeography homepage (archived 15 December 2004)
Global Ecology and Biogeography homepage.
Ecography homepage.
Landscape ecology
Physical oceanography
Physical geography
Environmental terminology
Habitat
Earth sciences
Collapsology
The term collapsology is a neologism used to designate the transdisciplinary study of the risks of collapse of industrial civilization. It is concerned with the general collapse of societies induced by climate change, as well as "scarcity of resources, vast extinctions, and natural disasters." Although the concept of civilizational or societal collapse had already existed for many years, collapsology focuses its attention on contemporary, industrial, and globalized societies.
Background
The word collapsology was coined and publicized by Pablo Servigne and Raphaël Stevens in their essay Comment tout peut s'effondrer (How everything can collapse: A manual for our times), published in 2015 in France. It also developed into a movement when Jared Diamond's text Collapse was published. Use of the term has spread, especially by journalists reporting on the deep adaptation writings by Jem Bendell.
Collapsology is based on the idea that humans impact their environment in a sustained and negative way, and promotes the concept of an environmental emergency, linked in particular to global warming and biodiversity loss. Collapsologists believe, however, that the collapse of industrial civilization could be the result of a combination of different crises: environmental, but also economic, geopolitical, democratic, and others.
Collapsology is a transdisciplinary exercise involving ecology, economics, anthropology, sociology, psychology, biophysics, biogeography, agriculture, demography, politics, geopolitics, bioarchaeology, history, futurology, health, law and art.
Etymology
The word collapsology is a neologism invented "with a certain self-mockery" by Pablo Servigne, an agricultural engineer, and Raphaël Stevens, an expert in the resilience of socio-ecological systems. It appears in their book published in 2015.
It is a portmanteau derived from the Latin collapsus, 'to fall, to collapse', and from the suffix -logy (Greek logos, 'study'), which is intended to name an approach of a scientific nature.
Since 2015 and the publication of How everything can collapse in French, several words have been proposed to describe the various approaches dealing with the issue of collapse: collapso-sophy to designate the philosophical approach, collapso-praxis to designate the ideology inspired by this study, and collapsonauts to designate people living with this idea in mind.
Religious foundations
Unlike traditional eschatological thinking, collapsology is based on data and concepts from contemporary scientific research, primarily human understanding of climate change as caused by human economic and geopolitical systems. It is not in line with the idea of a cosmic, apocalyptic "end of the world", but hypothesizes the end of the current human world, the "thermo-industrial civilization".
This distinction is further stressed by historian Eric H. Cline by pointing out that while the whole world has obviously not ended, civilizations have collapsed over the course of history which makes the statement that "prophets have always predicted doom and been wrong" inapplicable to societal collapse.
Scientific basis
As early as 1972, The Limits to Growth, a report produced by MIT researchers, warned of the risks of exponential demographic and economic growth on a planet with limited resources.
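The class of models behind The Limits to Growth couples growing consumption to finite resource stocks. The toy simulation below is only a hand-rolled illustration of that overshoot-and-decline dynamic, not the World3 model; every equation and parameter (growth rate, per-capita consumption, regeneration rate) is an assumption chosen for readability.

```python
# Toy overshoot model: a population grows while a finite, slowly regenerating
# resource lasts, then declines once per-capita supply falls short.
# Purely illustrative parameters; this is not the World3 model.

def simulate(years=300, population=1.0, resource=1000.0,
             growth=0.03, per_capita_need=1.0, regen=2.0, decline=0.05):
    history = []
    for year in range(years):
        demand = population * per_capita_need
        harvest = min(demand, resource)           # cannot consume more than remains
        resource = resource - harvest + regen     # slow regeneration
        if harvest >= demand:
            population *= (1 + growth)            # needs met: growth continues
        else:
            shortfall = 1 - harvest / demand
            population *= (1 - decline * shortfall)  # needs unmet: decline
        history.append((year, population, resource))
    return history

if __name__ == "__main__":
    for year, pop, res in simulate()[::25]:
        print(f"year {year:3d}  population {pop:8.1f}  resource {res:8.1f}")
```

The actual report, of course, models far richer feedbacks, covering population, industrial output, food, pollution and non-renewable resources; the sketch only shows why unlimited exponential growth against a finite stock eventually reverses.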
As a systemic approach, collapsology is based on prospective studies such as The Limits to Growth, but also on the state of global and regional trends in the environmental, social and economic fields (such as the IPCC, IPBES or Global Environment Outlook (GEO) reports periodically published by the Early Warning and Assessment Division of the UNEP, etc.) and numerous scientific works as well as various studies, such as "A safe operating space for humanity" and "Approaching a state shift in Earth's biosphere", published in Nature in 2009 and 2012 respectively, "The trajectory of the Anthropocene: The Great Acceleration", published in 2015 in The Anthropocene Review, and "Trajectories of the Earth System in the Anthropocene", published in 2018 in the Proceedings of the National Academy of Sciences of the United States of America.
There is evidence to support the importance of collective processing of the emotional aspects of contemplating societal collapse, and the inherent adaptiveness of these emotional experiences.
History
Precursors
Even if this neologism only appeared in 2015 and concerns the study of the collapse of industrial civilization, the study of the collapse of societies is older and is probably a concern of every civilization. Among the works on this theme (in a broad sense) one can mention those of Berossus (278 B.C.), Pliny the Younger (79 AD), Ibn Khaldun (1375), Montesquieu (1734), Thomas Robert Malthus (1766–1834), Edward Gibbon (1776), Georges Cuvier, (1821), Élisée Reclus (1905), Oswald Spengler (1918), Arnold Toynbee (1939), Günther Anders (1956), Samuel Noah Kramer (1956), Leopold Kohr (1957), Rachel Carson (1962), Paul Ehrlich (1969), Nicholas Georgescu-Roegen (1971), Donella Meadows, Dennis Meadows & Jørgen Randers (1972), René Dumont (1973), Hans Jonas (1979), Joseph Tainter (1988), Al Gore (1992), Hubert Reeves (2003), Richard Posner (2004), Jared Diamond (2005), Niall Ferguson (2013).
Arnold J. Toynbee
In his monumental (initially published in twelve volumes) and highly controversial work of contemporary historiography entitled A Study of History (1972), Arnold J. Toynbee (1889–1975) deals with the genesis of civilizations (chapter 2), their growth (chapter 3), their decline (chapter 4), and their disintegration (chapter 5). According to him, the mortality of civilizations is self-evident for the historian, as is the fact that they follow one another over a long period of time.
Joseph Tainter
In his book The Collapse of Complex Societies, the anthropologist and historian Joseph Tainter (born 1949) studies the collapse of various civilizations, including that of the Roman Empire, in terms of network theory, energy economics and complexity theory. For Tainter, an increasingly complex society eventually collapses because of the ever-increasing difficulty in solving its problems.
Jared Diamond
The American geographer, evolutionary biologist and physiologist Jared Diamond (born 1937) had already addressed the theme of civilizational collapse in his book Collapse: How Societies Choose to Fail or Succeed, published in 2005. Drawing on historical cases, notably the Rapa Nui civilization, the Vikings and the Maya civilization, Diamond argues that humanity collectively faces, on a much larger scale, many of the same issues as these civilizations did, with possibly catastrophic near-future consequences for many of the world's populations. The book has had a resonance beyond the United States, despite some criticism. Proponents of catastrophism who identify themselves as "enlightened catastrophists" draw on Diamond's work, helping to expand the relational ecology network, whose members believe that humanity is heading toward disaster. Diamond's Collapse approached civilizational collapse from archaeological, ecological, and biogeographical perspectives on ancient civilizations.
Modern collapsologists
Since the invention of the term collapsology, many French personalities gravitate in or around the collapsologists' sphere. Not all have the same vision of civilizational collapse, some even reject the term "collapsologist", but all agree that contemporary industrial civilization, and the biosphere as a whole, are on the verge of a global crisis of unprecedented proportions. According to them, the process is already under way, and it is now only possible to try to reduce its devastating effects in the near future. The leaders of the movement are Yves Cochet and Agnès Sinaï of the Momentum Institute (a think tank exploring the causes of environmental and societal risks of collapse of the thermo-industrial civilization and possible actions to adapt to it), and Pablo Servigne and Raphaël Stevens who wrote the essay How everything can collapse: A manual for our times.
Beyond the French collapsologists mentioned above, one can mention: Aurélien Barrau (astrophysicist), Philippe Bihouix (engineer, low-tech developer), Dominique Bourg (philosopher), Valérie Cabanes (lawyer, seeking recognition of the crime of ecocide by the international criminal court), Jean-Marc Jancovici (energy-climate specialist), and Paul Jorion (anthropologist, sociologist).
In 2020 the French humanities and social science website Cairn.info published a dossier on collapsology titled The Age of Catastrophe, with contributions from historian François Hartog, economist Emmanuel Hache, philosopher Pierre Charbonnier, art historian Romain Noël, geoscientist Gabriele Salerno, and American philosopher Eugene Thacker.
Even if the term remains rather unknown in the English-speaking world, many publications deal with the same topic (for example David Wallace-Wells's 2017 article "The Uninhabitable Earth" and his 2019 bestselling book of the same name, arguably a mass-market work of collapsology that does not use the term). The term is now gradually spreading on general and scientific English-language social networks. In his book Anti-Tech Revolution: Why and How, Ted Kaczynski also warned of the threat of catastrophic societal collapse.
See also
Climate change and civilizational collapse
Counter-Enlightenment
Degeneration theory
Dysgenics
Historic recurrence
Social cycle theory
References
Civilizations
Societal collapse
Secondary sector of the economy
Sociological terminology
Environmental governance
Environmental governance (EG) consists of a system of laws, norms, rules, policies and practices that dictate how the board members of an environment-related regulatory body should manage and oversee its affairs, with responsibility for ensuring sustainability (sustainable development) and managing all human activities (political, social and economic). Environmental governance includes government, business and civil society, and emphasizes whole-system management. To capture this diverse range of elements, environmental governance often employs alternative systems of governance, for example watershed-based management.
In some cases, it views natural resources and the environment as global public goods, belonging to the category of goods that are not diminished when they are shared. This means that everyone benefits from, for example, a breathable atmosphere, stable climate and stable biodiversity.
Governance in an environmental context may refer to:
a concept in political ecology which promotes environmental policy that advocates for sustainable human activity (i.e. that governance should be based upon environmental principles).
the processes of decision-making involved in the control and management of the environment and natural resources.
Definitions
Environmental governance refers to the processes of decision-making involved in the control and management of the environment and natural resources. The International Union for Conservation of Nature (IUCN) defines environmental governance as the "multi-level interactions (i.e., local, national, international/global) among, but not limited to, three main actors, i.e., state, market, and civil society, which interact with one another, whether in formal and informal ways; in formulating and implementing policies in response to environment-related demands and inputs from the society; bound by rules, procedures, processes, and widely accepted behavior; possessing characteristics of “good governance”; for the purpose of attaining environmentally-sustainable development" (IUCN 2014).
Key principles of environmental governance include:
Embedding the environment in all levels of decision-making and action
Conceptualizing cities and communities, economic and political life as a subset of the environment
Emphasizing the connection of people to the ecosystems in which they live
Promoting the transition from open-loop/cradle-to-grave systems (like garbage disposal with no recycling) to closed-loop/cradle-to-cradle systems (like permaculture and zero waste strategies).
Neoliberal environmental governance is an approach to the theory of environmental governance framed by a perspective on neoliberalism as an ideology, policy and practice in relation to the biophysical world. There are many definitions and applications of neoliberalism, e.g. in economics, international relations, etc. However, the traditional understanding of neoliberalism is often simplified to the notion of the primacy of market-led economics through the rolling back of the state, deregulation and privatisation. Neoliberalism has evolved particularly over the last 40 years, with many scholars leaving their ideological footprint on the neoliberal map. Hayek and Friedman believed in the superiority of the free market over state intervention: as long as the market was allowed to act freely, the supply/demand law would ensure the ‘optimal’ price and reward. In Karl Polanyi's opposing view, this would also create a state of tension in which self-regulating free markets disrupt and alter social interactions and “displace other valued means of living and working”. However, in contrast to the notion of an unregulated market economy, there has also been a “paradoxical increase in [state] intervention” in the choice of economic, legislative and social policy reforms, which are pursued by the state to preserve the neoliberal order. This contradictory process is described by Peck and Tickell as roll back/roll out neoliberalism, in which on one hand the state willingly gives up control over resources and responsibility for social provision, while on the other it engages in “purposeful construction and consolidation of neoliberalised state forms, modes of governance, and regulatory relations”. There has been a growing interest in the effects of neoliberalism on the politics of the non-human world of environmental governance. Neoliberalism is seen to be more than a homogenous and monolithic ‘thing’ with a clear end point: it is a series of path-dependent, spatially and temporally “connected neoliberalisation” processes which affect and are affected by nature and environment, and which “cover a remarkable array of places, regions and countries”. The co-opting of neoliberal ideas of the importance of private property and the protection of individual (investor) rights into environmental governance can be seen in the example of recent multilateral trade agreements (see in particular the North American Free Trade Agreement). Such neoliberal structures further reinforce a process of nature enclosure and primitive accumulation, or “accumulation by dispossession”, that serves to privatise increasing areas of nature. The transfer of ownership of resources traditionally not privately owned to free market mechanisms is believed to deliver greater efficiency and optimal return on investment. Other similar examples of neoliberal-inspired projects include the enclosure of minerals, the fisheries quota system in the North Pacific and the privatisation of water supply and sewage treatment in England and Wales. All three examples share the neoliberal characteristic of seeking to “deploy markets as the solution to environmental problems”, in which scarce natural resources are commercialized and turned into commodities. The approach of framing the ecosystem as a price-able commodity is also present in the work of neoliberal geographers who subject nature to price and supply/demand mechanisms, where the Earth is considered a quantifiable resource (Costanza, for example, estimates the value of the Earth's ecosystem services to be between 16 and 54 trillion dollars per year).
There has been a growing interest in the effects of neoliberalism on the politics of the non-human world of environmental governance. Neoliberalism is seen to be more than a homogenous and monolithic ‘thing’ with a clear end point. It is a series of path-dependent, spatially and temporally “connected neoliberalisation” processes which affect and are affected by nature and environment that “cover a remarkable array of places, regions and countries”. Co-opting neoliberal ideas of the importance of private property and the protection of individual (investor) rights, into environmental governance can be seen in the example of recent multilateral trade agreements (see in particular the North American Free Trade Agreement). Such neoliberal structures further reinforce a process of nature enclosure and primitive accumulation or “accumulation by dispossession” that serves to privatise increasing areas of nature. The ownership-transfer of resources traditionally not privately owned to free market mechanisms is believed to deliver greater efficiency and optimal return on investment. Other similar examples of neo-liberal inspired projects include the enclosure of minerals, the fisheries quota system in the North Pacific and the privatisation of water supply and sewage treatment in England and Wales. All three examples share neoliberal characteristics to “deploy markets as the solution to environmental problems” in which scarce natural resources are commercialized and turned into commodities. The approach to frame the ecosystem in the context of a price-able commodity is also present in the work of neoliberal geographers who subject nature to price and supply/demand mechanisms where the earth is considered to be a quantifiable resource (Costanza, for example, estimates the earth ecosystem's service value to be between 16 and 54 trillion dollars per year).
Environmental issues
Challenges
Challenges facing environmental governance include:
Inadequate continental and global agreements
Unresolved tensions between maximum development, sustainable development and maximum protection, limiting funding, damaging links with the economy and limiting application of Multilateral Environment Agreements (MEAs).
Environmental funding is not self-sustaining, diverting resources from problem-solving into funding battles.
Lack of integration of sector policies
Inadequate institutional capacities
Ill-defined priorities
Unclear objectives
Lack of coordination within the UN, governments, the private sector and civil society
Lack of shared vision
Interdependencies among development/sustainable economic growth, trade, agriculture, health, peace and security.
International imbalance between environmental governance and trade and finance programs, e.g., World Trade Organization (WTO).
Limited credit for organizations running projects within the Global Environment Facility (GEF)
Linking UNEP, United Nations Development Programme (UNDP) and the World Bank with MEAs
Lack of government capacity to satisfy MEA obligations
Absence of the gender perspective and equity in environmental governance
Inability to influence public opinion
Time lag between human action and environmental effect, sometimes as long as a generation
Environmental problems being embedded in very complex systems, of which our understanding is still quite weak
All of these challenges have implications for governance; however, international environmental governance is necessary. The IDDRI claims that rejection of multilateralism in the name of efficiency and protection of national interests conflicts with the promotion of international law and the concept of global public goods. Others cite the complex nature of environmental problems.
On the other hand, the Agenda 21 program has been implemented in over 7,000 communities. Environmental problems, including global-scale problems, may not always require global solutions. For example, marine pollution can be tackled regionally, and ecosystem deterioration can be addressed locally. Other global problems, such as climate change, benefit from local and regional action.
Bäckstrand and Saward wrote, “sustainability and environmental protection is an arena in which innovative experiments with new hybrid, plurilateral forms of governance, along with the incorporation of a transnational civil society spanning the public-private divide, are taking place.”
Local governance
A 1997 report observed a global consensus that sustainable development implementation should be based on local-level solutions and initiatives designed with and by the local communities. Community participation and partnership, along with the decentralisation of government power to local communities, are important aspects of environmental governance at the local level. Initiatives such as these mark an important divergence from earlier environmental governance approaches, which were “driven by state agendas and resource control” and followed a top-down or trickle-down approach rather than the bottom-up approach that local-level governance encompasses. The adoption of practices or interventions at a local scale can, in part, be explained by diffusion of innovation theory. In Tanzania and in the Pacific, researchers have illustrated that aspects of the intervention, of the adopter, and of the social-ecological context all shape why community-centered conservation interventions spread through space and time. Local-level governance shifts decision-making power away from the state and/or governments to the grassroots, and it matters even on a global scale: because environmental governance at the global level is defined as international, it has tended to marginalise local voices, and local-level governance helps return power to local communities in the global fight against environmental degradation. Pulgar Vidal observed a “new institutional framework, [wherein] decision-making regarding access to and use of natural resources has become increasingly decentralized.” He noted four techniques that can be used to develop these processes:
formal and informal regulations, procedures and processes, such as consultations and participative democracy;
social interaction that can arise from participation in development programs or from the reaction to perceived injustice;
regulating social behaviours to reclassify an individual question as a public matter;
within-group participation in decision-making and relations with external actors.
He found that the key conditions for developing decentralized environmental governance are:
access to social capital, including local knowledge, leaders and local shared vision;
democratic access to information and decision-making;
local government activity in environmental governance: as facilitator of access to natural resources, or as policy maker;
an institutional framework that favours decentralized environmental governance and creates forums for social interaction and making widely accepted agreements acceptable.
The legitimacy of decisions depends on the local population's participation rate and on how well participants represent that population.
With regard to public authorities, questions linked to biodiversity can be addressed by adopting appropriate policies and strategies, through exchange of knowledge and experience, the forming of partnerships, correct management of land use, monitoring of biodiversity and optimal use of resources, or reducing consumption, and by promoting environmental certifications, such as EMAS and/or ISO 14001. Local authorities undoubtedly have a central role to play in the protection of biodiversity, and this strategy is successful above all when the authorities show strength by involving stakeholders in a credible environmental improvement project and activating a transparent and effective communication policy (Ioppolo et al., 2013).
State governance
States play a crucial role in environmental governance, because "however far and fast international economic integration proceeds, political authority remains vested in national governments". It is for this reason that governments should respect and support the commitment to implementation of international agreements.
At the state level, environmental management has been found to be conducive to the creation of roundtables and committees. In France, the Grenelle de l’environnement process:
included a variety of actors (e.g. the state, political leaders, unions, businesses, not-for-profit organizations and environmental protection foundations);
allowed stakeholders to interact with the legislative and executive powers in office as indispensable advisors;
worked to integrate other institutions, particularly the Economic and Social Council, to form a pressure group that participated in the process for creating an environmental governance model;
attempted to link with environmental management at regional and local levels.
If environmental issues are excluded from, for example, the economic agenda, this may delegitimize those institutions.
“In southern countries, the main obstacle to the integration of intermediate levels in the process of territorial environmental governance development is often the dominance of developmentalist inertia in states’ political mindset. The question of the environment has not been effectively integrated in national development planning and programs. Instead, the most common idea is that environmental protection curbs economic and social development, an idea encouraged by the frenzy for exporting raw materials extracted using destructive methods that consume resources and fail to generate any added value.” Of course, these states are justified in this thinking, as their main concerns are social issues such as poverty alleviation. Citizens in some of these states have responded by developing empowerment strategies to ease poverty through sustainable development. In addition, policymakers must be more aware of these concerns of the Global South, and must make sure to integrate a strong focus on social justice in their policies.
Global governance
According to the International Institute for Sustainable Development, global environmental governance is "the sum of organizations, policy instruments, financing mechanisms, rules, procedures and norms that regulate the processes of global environmental protection." At the global level there are numerous important actors, and "a range of institutions contribute to and help define the practice of global environmental governance." The idea of global environmental governance is to govern the environment at a global level through a range of state and non-state actors, such as national governments, NGOs and international organisations such as the United Nations Environment Programme (UNEP). The global environmental movement can be traced back to the 19th century; academics acknowledge the role of the United Nations in providing a platform for international conversations regarding the environment. Supporters of global environmental governance emphasize the importance of international cooperation on environmental issues such as climate change. Some opponents argue that more aggressive regional environmental governance has a stronger impact than global environmental governance.
Global environmental governance is the answer to calls for new forms of governance because of the increasing complexity of the international agenda. It is perceived to be an effective form of multilateral management and essential to the international community in meeting goals of mitigation and the possible reversal of the impacts on the global environment. However, a precise definition of global environmental governance remains elusive, and many issues surround global governance.
Elliot argues that “the congested institutional terrain still provides more of an appearance than a reality of comprehensive global governance” and that it is “a political practice which simultaneously reflects, constitutes and masks global relations of power and powerlessness.” States exploit global environmental governance to advance their own agendas, even to the detriment of the very thing it is meant to protect: the environment. Elliot states that global environmental governance “is neither normatively neutral nor materially benign.”
As explored by Newell, the Global Environment Outlook report noted that the systems of global environmental governance are becoming increasingly irrelevant or impotent due to patterns of globalisation such as imbalances in productivity and in the distribution of goods and services, the unsustainable progression of extremes of wealth and poverty, and population and economic growth overtaking environmental gains. Newell states that, despite such acknowledgements, the “managing of global environmental change within International Relations continues to look to international regimes for the answers.”
Environmental governance in the Global North and Global South
Relations between the Global North and Global South have been impacted by a history of colonialism, during which Northern colonial powers contributed to environmental degradation of natural resources in the South. This dynamic continues to influence international relations and is the basis for what some historians recognize as the "North-South divide." Scholars argue that this divide has created hurdles in the international lawmaking process regarding the environment. Scholars have noted that unindustrialized countries in the Global South sometimes are disconnected from environmentalism and perceive environmental governance to be a "luxury" priority for the Global North. In recent years, sustainable development has made its way to the forefront of international discourse and urges the North and South to cooperate. Academics recognized that environmental governance priorities in the Global North have been at odds with the desire to focus on economic development in the Global South.
Some analysts propose a shift towards "non-state" actors for the development of environmental governance. Environmental politics researcher Karin Bäckstrand claims this will increase transparency, accountability, and legitimacy. In some cases, scholars have noted that environmental governance in the Global North has had adverse consequences on the environment in the Global South. Environmental and economic priorities in the Global North do not always align with those in the Global South. Producers in the Global North developed voluntary sustainability standards (VSS) to address environmental concerns in the North, but these standards also end up impacting economic activity in the Global South. Jeffrey J. Minneti from the William & Mary Law School has argued that the Global South needs to "manage its own ecological footprint" by creating VSS independent from the Global North. Tension between countries in the Global North and Global South has caused some academics to criticize global environmental governance for being too slow of a process to enact policy change.
Issues of scale
Multi-tier governance
The literature on governance scale shows how changes in the understanding of environmental issues have led to the movement from a local view to recognising their larger and more complicated scale. This move brought an increase in the diversity, specificity and complexity of initiatives. Meadowcroft pointed out innovations that were layered on top of existing structures and processes, instead of replacing them.
Lafferty and Meadowcroft give three examples of multi-tiered governance: internationalisation, increasingly comprehensive approaches, and involvement of multiple governmental entities. Lafferty and Meadowcroft described the resulting multi-tiered system as addressing issues on both smaller and wider scales.
Institutional fit
Hans Bruyninckx claimed that a mismatch between the scale of the environmental problem and the level of the policy intervention was problematic. Young claimed that such mismatches reduced the effectiveness of interventions. Most of the literature addresses the level of governance rather than ecological scale.
Elinor Ostrom, amongst others, claimed that the mismatch is often the cause of unsustainable management practices and that simple solutions to the mismatch have not been identified.
Considerable debate has addressed the question of which level(s) should take responsibility for fresh water management. Development workers tend to address the problem at the local level. National governments focus on policy issues. This can create conflicts among states because rivers cross borders, leading to efforts to evolve governance of river basins.
Environmental governance issues
Climate change
The scientific consensus on climate change is expressed in the reports of the Intergovernmental Panel on Climate Change (IPCC) and in statements by all major scientific bodies in the United States, such as the National Academy of Sciences.
There has been increasing action to mitigate climate change and reduce its impact at national, regional and international levels. The Paris Agreement, the Kyoto Protocol and the United Nations Framework Convention on Climate Change (UNFCCC) play the most important roles in addressing climate change at an international level.
Biodiversity
Environmental governance for the protection of biodiversity has to act at many levels. Biodiversity is fragile because it is threatened by almost all human actions. To promote the conservation of biodiversity, agreements and laws have to be created to regulate agricultural activities, urban growth, the industrialization of countries, the use of natural resources, the control of invasive species, the correct use of water and the protection of air quality.
To promote environmental governance for biodiversity protection, there has to be a clear articulation of values and interests when negotiating environmental management plans.
Ozone layer
On 16 September 1987, the Montreal Protocol was signed to address the depletion of the ozone layer. Since that time, the use of chlorofluorocarbons (industrial refrigerants and aerosols) and farming fungicides such as methyl bromide has mostly been eliminated, although other damaging gases are still in use.
Nuclear risk
The Nuclear non-proliferation treaty is the primary multilateral agreement governing nuclear activity.
Transgenic organisms
Genetically modified organisms are not the subject of any major multilateral agreements. They are the subject of various restrictions at other levels of governance. GMOs are in widespread use in the US, but are heavily restricted in many other jurisdictions.
Controversies have ensued over golden rice, genetically modified salmon, genetically modified seeds, disclosure and other topics.
Water security
Socio-environmental conflicts
Environmental issues such as natural resource management and climate change have security and social considerations. Drinking water scarcity and climate change can cause mass migrations of climate refugees, for example.
Social network analysis has been applied to understand how different actors cooperate and conflict in environmental governance. Existing relationships can influence how stakeholders collaborate during times of conflict: a study of transportation planning and land use in California found that stakeholders choose their collaborative partners by avoiding those with the most dissimilar beliefs, rather than by selecting for those with shared views. The result is known as homophily—actors with similar views are more likely to end up collaborating than those with opposing views.
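As a rough, purely hypothetical illustration of how such a network analysis can be set up, the sketch below builds a small invented stakeholder collaboration network and computes an attribute assortativity coefficient, a standard proxy for homophily; the actor names, ties and "belief" labels are made up for the example and are not taken from the California study.

```python
# Illustrative sketch (hypothetical data): measuring belief homophily in a
# stakeholder collaboration network with networkx.
import networkx as nx

# Hypothetical actors tagged with a coarse "belief" attribute.
actors = {
    "water_agency": "pro-regulation",
    "city_planner": "pro-regulation",
    "ngo": "pro-regulation",
    "farm_lobby": "pro-development",
    "developer": "pro-development",
}

# Hypothetical collaboration ties observed between actors.
ties = [
    ("water_agency", "city_planner"),
    ("water_agency", "ngo"),
    ("city_planner", "ngo"),
    ("farm_lobby", "developer"),
    ("ngo", "developer"),
]

G = nx.Graph()
for name, belief in actors.items():
    G.add_node(name, belief=belief)
G.add_edges_from(ties)

# Assortativity by the "belief" attribute: values near +1 indicate strong
# homophily (like collaborates with like); values near -1 indicate the opposite.
score = nx.attribute_assortativity_coefficient(G, "belief")
print(f"belief assortativity (homophily proxy): {score:.2f}")
```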
Agreements
Conventions
The main multilateral conventions, also known as Rio Conventions, are as follows:
Convention on Biological Diversity (CBD) (1992–1993): aims to conserve biodiversity. Related agreements include the Cartagena Protocol on biosafety.
United Nations Framework Convention on Climate Change (UNFCCC) (1992–1994): aims to stabilize greenhouse gas concentrations in the atmosphere at a level that prevents dangerous interference with the climate system, without threatening food production and while enabling sustainable economic development; it incorporates the Kyoto Protocol.
United Nations Convention to Combat Desertification (UNCCD) (1994–1996): aims to combat desertification and mitigate the effects of drought in developing countries, particularly in Africa (the convention's initial focus).
Further conventions:
Ramsar Convention on Wetlands of International Importance (1971–1975)
UNESCO World Heritage Convention (1972–1975)
Convention on International Trade in Endangered Species of Wild Flora and Fauna (CITES) (1973–1975)
Bonn Convention on the Conservation of Migratory Species (1979–1983)
Convention on the Protection and Use of Transboundary Watercourses and International Lakes (Water Convention) (1992–1996)
Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal (1989–1992)
Rotterdam Convention on the Prior Informed Consent Procedures for Certain Hazardous Chemicals and Pesticides in International Trade
Stockholm Convention on Persistent Organic Pollutants (POPs) (2001–2004)
The Rio Conventions are characterized by:
obligatory execution by signatory states;
involvement in a sector of global environmental governance;
a focus on fighting poverty and the development of sustainable living conditions;
funding from the Global Environment Facility (GEF) for countries with few financial resources;
inclusion of a mechanism for assessing ecosystem status.
Environmental conventions are regularly criticized for their:
rigidity and verticality: they are too descriptive, homogenous and top down, not reflecting the diversity and complexity of environmental issues. Signatory countries struggle to translate objectives into concrete form and incorporate them consistently;
duplicate structures and aid: the sector-specific format of the conventions produced duplicate structures and procedures, with inadequate cooperation between government ministries;
contradictions and incompatibility: e.g., “if reforestation projects to reduce CO2 give preference to monocultures of exotic species, this can have a negative impact on biodiversity (whereas natural regeneration can strengthen both biodiversity and the conditions needed for life).”
Until now, the formulation of environmental policies at the international level has been divided by theme, sector or territory, resulting in treaties that overlap or clash. International attempts to coordinate environmental institutions include the Inter-Agency Coordination Committee and the Commission on Sustainable Development, but these institutions are not powerful enough to effectively incorporate the three aspects of sustainable development.
Multilateral Environmental Agreements (MEAs)
MEAs are agreements between several countries that apply internationally or regionally and concern a variety of environmental questions. As of 2013, over 500 Multilateral Environmental Agreements (MEAs) were in force, including 45 of global scope that each involve at least 72 signatory countries. Further agreements cover regional environmental problems, such as deforestation in Borneo or pollution in the Mediterranean. Each agreement has a specific mission and objectives ratified by multiple states.
Many Multilateral Environmental Agreements have been negotiated with the support of the United Nations Environment Programme and work towards the achievement of the United Nations Millennium Development Goals as a means to instil sustainable practices for the environment and its people. Multilateral Environmental Agreements are considered to present enormous opportunities for greener societies and economies, which can deliver numerous benefits in addressing food, energy and water security and in achieving sustainable development. These agreements can be implemented on a global or regional scale. For example, the issues surrounding the disposal of hazardous waste can be addressed at the regional level through the Bamako Convention on the Ban of the Import into Africa and the Control of Transboundary Movement and Management of Hazardous Waste within Africa, which applies specifically to Africa, or at the global level through the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal, which is monitored throughout the world.
“The environmental governance structure defined by the Rio and Johannesburg Summits is sustained by UNEP, MEAs and developmental organizations and consists of assessment and policy development, as well as project implementation at the country level.
"The governance structure consists of a chain of phases:
a) assessment of environment status;
b) international policy development;
c) formulation of MEAs;
d) policy implementation;
e) policy assessment;
f) enforcement;
g) sustainable development.
"Traditionally, UNEP has focused on the normative role of engagement in the first three
phases. Phases (d) to (f) are covered by MEAs and the sustainable development phase involves developmental organizations such as UNDP and the World Bank.”
Lack of coordination affects the development of coherent governance. The report shows that donor states support development organizations, according to their individual interests. They do not follow a joint plan, resulting in overlaps and duplication. MEAs tend not to become a joint frame of reference and therefore receive little financial support. States and organizations emphasize existing regulations rather than improving and adapting them.
Background
In the 20th century, the risks associated with nuclear fission raised global awareness of environmental threats. The 1963 Partial Nuclear Test Ban Treaty, prohibiting atmospheric nuclear testing, marked the beginning of the globalization of environmental issues. Environmental law began to be modernized and coordinated with the Stockholm Conference (1972), backed up in 1980 by the Vienna Convention on the Law of Treaties. The Vienna Convention for the Protection of the Ozone Layer was signed in 1985. In 1987, 24 countries signed the Montreal Protocol, which imposed the gradual phase-out of CFCs.
The Brundtland Report, published in 1987 by the UN Commission on Environment and Development, defined sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs."
Rio Conference (1992) and reactions
The United Nations Conference on Environment and Development (UNCED), better known as the 1992 Earth Summit, was the first major international meeting since the end of the Cold War in 1991 and was attended by delegations from 175 countries. Since then the biggest international conferences that take place every 10 years have guided the global governance process with a series of MEAs. Environmental treaties are applied with the help of secretariats.
Governments created international treaties in the 1990s to check global threats to the environment. These treaties are far more restrictive than global protocols and set out to change non-sustainable production and consumption models.
Agenda 21
Agenda 21 is a detailed plan of actions to be implemented at the global, national and local levels by UN organizations, member states and key individual groups in all regions. Agenda 21 advocates making sustainable development a legal principle. At the local level, Local Agenda 21 advocates an inclusive, territory-based strategic plan, incorporating sustainable environmental and social policies.
The Agenda has been accused of using neoliberal principles, including free trade, to achieve environmental goals. For example, chapter two, entitled "International Cooperation to Accelerate Sustainable Development in Developing Countries and Related Domestic Policies", states: "The international economy should provide a supportive international climate for achieving environment and development goals by: promoting sustainable development through trade liberalization."
Actors
International institutions
United Nations Environment Program
The UNEP has had its biggest impact as a monitoring and advisory body, and in developing environmental agreements. It has also contributed to strengthening the institutional capacity of environment ministries.
In 2002 UNEP held a conference to focus on product lifecycle impacts, emphasizing the fashion, advertising, financial and retail industries, seen as key agents in promoting sustainable consumption.
According to Ivanova, UNEP adds value in environmental monitoring, scientific assessment and information sharing, but cannot lead all environmental management processes. She proposed the following tasks for UNEP:
initiate a strategic independent overhaul of its mission;
consolidate the financial information and transparency process;
restructure its organizational governance by creating an operative executive council that balances the omnipresence of the overly imposing and fairly ineffectual Governing Council/Global Ministerial Environment Forum (GMEF).
Other proposals offer a new mandate to “produce greater unity amongst social and environmental agencies, so that the concept of ‘environment for development’ becomes a reality. It needs to act as a platform for establishing standards and for other types of interaction with national and international organizations and the United Nations. The principles of cooperation and common but differentiated responsibilities should be reflected in the application of this revised mandate.”
Sherman proposed principles to strengthen UNEP:
obtain a social consensus on a long-term vision;
analyze the current situation and future scenarios;
produce a comprehensive plan covering all aspects of sustainable development;
build on existing strategies and processes;
multiply links between national and local strategies;
include all these points in the financial and budget plan;
adopt fast controls to improve process piloting and identification of progress made;
implement effective participation mechanisms.
Another group stated, “Consider the specific needs of developing countries and respect of the fundamental principle of 'common but differentiated responsibilities'. Developed countries should promote technology transfer, new and additional financial resources, and capacity building for meaningful participation of developing countries in international environmental governance. Strengthening of international environmental governance should occur in the context of sustainable development and should involve civil society as an important stakeholder and agent of transformation.”
Global Environment Facility (GEF)
Created in 1991, the Global Environment Facility is an independent financial organization initiated by donor governments including Germany and France. It was the first financial organization dedicated to the environment at the global level. As of 2013 it had 179 members. Donations are used for projects covering biodiversity, climate change, international waters, destruction of the ozone layer, soil degradation and persistent organic pollutants.
GEF's institutional structure includes UNEP, UNDP and the World Bank. It is the funding mechanism for the four environmental conventions: climate change, biodiversity, persistent organic pollutants and desertification. GEF transfers resources from developed countries to developing countries to fund UNDP, UNEP and World Bank projects. The World Bank manages the GEF's annual budget of US$561.10 million.
The GEF has been criticized for its historic links with the World Bank, at least during its first phase during the 1990s, and for having favoured certain regions to the detriment of others. Another view sees it as contributing to the emergence of a global "green market". It represents “an adaptation (of the World Bank) to this emerging world order, as a response to the emergence of environmental movements that are becoming a geopolitical force.” Developing countries demanded financial transfers to help them protect their environment.
GEF is subject to economic profitability criteria, as is the case for all the conventions. It received more funds in its first three years than UNEP had received since its creation in 1972. GEF funding represented less than 1% of development aid between 1992 and 2002.
United Nations Commission on Sustainable Development (CSD)
This intergovernmental institution meets twice a year to assess follow-up on Rio Summit goals. The CSD is made up of 53 member states, elected every three years, and was reformed in 2004 to help improve implementation of Agenda 21. It focuses on a specific theme during each two-year period: 2004–2005 was dedicated to water and 2006–2007 to climate change. The CSD has been criticized for its low impact, its general lack of presence and the absence of Agenda 21 at the state level specifically, according to a report by the World Resources Institute. Its mission, which focuses on sequencing actions and establishing agreements, puts it in conflict with institutions such as UNEP and the OECD.
World Environment Organization (WEO)
A proposed World Environment Organization, analogous to the World Health Organization, could be capable of adapting treaties and enforcing international standards.
The European Union, particularly France and Germany, and a number of NGOs favour creating a WEO. The United Kingdom, the US and most developing countries prefer to focus on voluntary initiatives. WEO partisans maintain that it could offer better political leadership, improved legitimacy and more efficient coordination. Its detractors argue that existing institutions and missions already provide appropriate environmental governance; however, the lack of coherence and coordination between them and the absence of a clear division of responsibilities prevent them from being more effective.
World Bank
The World Bank influences environmental governance through other actors, particularly the GEF. The World Bank's mandate is not sufficiently defined in terms of environmental governance, despite the fact that the environment is included in its mission. However, it allocates 5 to 10% of its annual funds to environmental projects. The institution's capitalist vocation means that its investment is concentrated solely in areas which are profitable in terms of cost benefits, such as climate change action and ozone layer protection, while neglecting others, such as adaptation to climate change and combating desertification. Its financial autonomy means that it can make its influence felt indirectly on the creation of standards, and on international and regional negotiations.
Following intense criticism in the 1980s for its support for destructive projects which, amongst other consequences, caused deforestation of tropical forests, the World Bank drew up its own environment-related standards in the 1990s so that it could correct its actions. These standards differ from UNEP's standards, which are meant to be the benchmark, thus discrediting the latter institution and sowing disorder and conflict in the world of environmental governance. Other financial institutions, regional development banks and the private sector also drew up their own standards. Criticism is not directed at the World Bank's standards in themselves, which Najam considered “robust”, but at their legitimacy and efficacy.
GEF
As of 2012, the GEF described itself as "the largest public funder of projects to improve the global environment", which "provides grants for projects related to biodiversity, climate change, international waters, land degradation, the ozone layer, and persistent organic pollutants." It claims to have provided "$10.5 billion in grants and leveraging $51 billion in co-financing for over 2,700 projects in over 165 countries [and] made more than 14,000 small grants directly to civil society and community-based organizations, totaling $634 million." It serves as a financial mechanism for the:
Convention on Biological Diversity (CBD)
United Nations Framework Convention on Climate Change (UNFCCC)
Stockholm Convention on Persistent Organic Pollutants (POPs)
Convention to Combat Desertification (UNCCD)
implementation of Montreal Protocol on Substances That Deplete the Ozone Layer in some countries with "economies in transition"
This mandate reflects the restructured GEF as of October 2011.
World Trade Organization (WTO)
The WTO's mandate does not include a specific principle on the environment. All problems linked to the environment are treated in such a way as to give priority to trade requirements and the principles of the WTO's own trade system. This produces conflictual situations. Even though the WTO recognizes the existence of MEAs, it denounces the fact that around 20 MEAs are in conflict with the WTO's trade regulations. Furthermore, certain MEAs can allow a country to ban or limit trade in certain products if they do not satisfy established environmental protection requirements. In these circumstances, if one country's ban affecting another country concerns two signatories of the same MEA, the principles of the treaty can be used to resolve the disagreement. If, however, the country affected by the trade ban has not signed the agreement, the WTO demands that the dispute be resolved using the WTO's trade principles, in other words without taking the environmental consequences into account.
Some criticisms of the WTO's mechanisms may be too broad. In a recent dispute between the US and Mexico over dolphin-safe labelling of tuna, the ruling was relatively narrow and did not reach as far as some critics had claimed.
International Monetary Fund (IMF)
The IMF's mission is "to ensure the stability of the international monetary system".
The IMF Green Fund proposal of Dominique Strauss-Kahn, intended specifically to address "climate-related shocks in Africa", was rejected despite receiving serious attention. Strauss-Kahn's proposal, backed by France and Britain, was that "developed countries would make an initial capital injection into the fund using some of the $176 billion worth of SDR allocations from last year in exchange for a stake in the green fund." However, "most of the 24 directors ... told Strauss-Kahn that climate was not part of the IMF's mandate and that SDR allocations are a reserve asset never intended for development issues."
UN ICLEI
The UN's main body for coordinating municipal and urban decision-making is the International Council for Local Environmental Initiatives (ICLEI). Its slogan is "Local Governments for Sustainability".
This body sponsored the concept of full cost accounting that makes environmental governance the foundation of other governance.
ICLEI's projects and achievements include:
Convincing thousands of municipal leaders to sign the World Mayors and Municipal Leaders Declaration on Climate Change (2005) which notably requests of other levels of government that:
Global trade regimes, credits and banking reserve rules be reformed to advance debt relief and incentives to implement policies and practices that reduce and mitigate climate change.
Starting national councils to implement this and other key agreements, e.g., ICLEI Local Governments for Sustainability USA
Spreading ecoBudget (2008) and Triple Bottom Line (2007) "tools for embedding sustainability into council operations", e.g. Guntur's Municipal Corporation, one of the first four to implement the entire framework.
Sustainability Planning Toolkit (launched 2009) integrating these and other tools
Cities Climate Registry (launched 2010) - part of UNEP Campaign on Cities and Climate Change
ICLEI promotes best-practice exchange among municipal governments globally, especially in green infrastructure and sustainable procurement.
Other secretariats
Other international institutions incorporate environmental governance in their action plans, including:
United Nations Development Programme (UNDP), promoting development;
World Meteorological Organization (WMO) which works on the climate and atmosphere;
Food and Agriculture Organization (FAO) working on the protection of agriculture, forests and fishing;
International Atomic Energy Agency (IAEA) which focuses on nuclear security.
Over 30 UN agencies and programmes support environmental management, according to Najam. This produces a lack of coordination, insufficient exchange of information and dispersion of responsibilities. It also results in proliferation of initiatives and rivalry between them.
Criticism
According to Bauer, Busch and Siebenhüner, the different conventions and multilateral agreements of global environmental regulation are increasing their secretariats' influence. That influence varies according to bureaucratic and leadership efficiency and whether a secretariat's orientation is technical or client-centered.
The United Nations is often the target of criticism, including from within, over the multiplication of secretariats and the chaos it produces. Using a separate secretariat for each MEA creates enormous overhead, given the 45 agreements of global scope and the more than 500 others.
States
Environmental governance at the state level
Environmental protection has created opportunities for mutual and collective monitoring among neighbouring states. The European Union provides an example of the institutionalization of joint regional and state environmental governance. Key areas include information, led by the European Environment Agency (EEA), and the production and monitoring of norms by states or local institutions.
See also the Environmental policy of the European Union.
State participation in global environmental governance
US refusal to ratify major environment agreements produced tensions with ratifiers in Europe and Japan.
The World Bank, IMF and other institutions are dominated by the developed countries and do not always properly consider the requirements of developing countries.
Business
Environmental governance applies to business as well as government. Considerations are typical of those in other domains:
values (vision, mission, principles);
policy (strategy, objectives, targets);
oversight (responsibility, direction, training, communication);
process (management systems, initiatives, internal control, monitoring and review, stakeholder dialogue, transparency, environmental accounting, reporting and verification);
performance (performance indicators, benchmarking, eco-efficiency, reputation, compliance, liabilities, business development).
White and Klernan among others discuss the correlation between environmental governance and financial performance. This correlation is higher in sectors where environmental impacts are greater.
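As a purely illustrative sketch of what such a correlation analysis involves, the snippet below computes a Pearson correlation between invented environmental-governance scores and invented financial returns for a handful of hypothetical companies; the figures have no empirical basis and only show the mechanics behind claims of this kind.

```python
# Illustrative only: Pearson correlation between hypothetical environmental
# governance scores and hypothetical annual returns. All numbers are invented.
from math import sqrt

# (governance score 0-100, annual return in %) for made-up companies
data = [(82, 9.1), (45, 4.2), (67, 7.5), (30, 3.8), (74, 8.0), (55, 5.9)]
scores = [g for g, _ in data]
returns = [r for _, r in data]

def pearson(x, y):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"correlation between governance score and return: {pearson(scores, returns):.2f}")
```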
Business environmental issues include emissions, biodiversity, historical liabilities, product and material waste/recycling, energy use/supply and many others.
Environmental governance has become linked to traditional corporate governance as an increasing number of shareholders are concerned about corporate environmental impacts. Corporate governance is the set of processes, customs, policies, laws, and institutions affecting the way a corporation (or company) is managed. Corporate governance is affected by the relationships among stakeholders. These stakeholders research and quantify performance in order to compare and contrast the environmental performance of thousands of companies.
Large corporations with global supply chains evaluate the environmental performance of business partners and suppliers for marketing and ethical reasons. Some consumers seek environmentally friendly and sustainable products and companies.
Non-governmental organizations
According to Bäckstrand and Saward, “broader participation by non-state actors in multilateral environmental decisions (in varied roles such as agenda setting, campaigning, lobbying, consultation, monitoring, and implementation) enhances the democratic legitimacy of environmental governance.”
Local activism is capable of gaining the support of the people and authorities to combat environmental degradation. In Cotacachi, Ecuador, a social movement used a combination of education, direct action, the influence of local public authorities, denunciation of the mining company's plans in its home country, Canada, and the support of international environmental groups to influence mining activity.
Fisher cites cases in which multiple strategies were used to effect change. She describes civil society groups that pressure international institutions and also organize local events. Local groups can take responsibility for environmental governance in place of governments.
According to Bengoa, “social movements have contributed decisively to the creation of an institutional platform wherein the fight against poverty and exclusion has become an inescapable benchmark.” But despite successes in this area, “these institutional changes have not produced the processes for transformation that could have made substantial changes to the opportunities available to rural inhabitants, particularly the poorest and those excluded from society.” He cites several reasons:
conflict between in-group cohesion and openness to outside influence;
limited trust between individuals;
contradiction between social participation and innovation;
criticisms without credible alternatives to environmentally damaging activities
A successful initiative in Ecuador involved the establishment of stakeholder federations and management committees (NGOs, communities, municipalities and the ministry) for the management of a protected forest.
Proposals
The International Institute for Sustainable Development proposed an agenda for global governance, with the following objectives:
expert leadership;
positioning science as the authoritative basis of sound environmental policy;
coherence and reasonable coordination;
well-managed institutions;
incorporate environmental concerns and actions within other areas of international policy and action
Coherence and coordination
Despite the increase in efforts, actors, agreements and treaties, the global environment continues to degrade at a rapid rate. From the hole in the Earth's ozone layer to over-fishing to the uncertainties of climate change, the world is confronted by several intrinsically global challenges. However, as the environmental agenda becomes more complicated and extensive, the current system has proven ineffective in addressing and tackling problems related to trans-boundary externalities, and the environment continues to degrade at unprecedented levels.
Inforesources identifies four major obstacles to global environmental governance, and describes measures in response. The four obstacles are:
parallel structures and competition, without a coherent strategy
contradictions and incompatibilities, without appropriate compromise
competition between multiple agreements with incompatible objectives, regulations and processes
the difficulty of integrating policy from the macro to the micro scale.
Recommended measures:
the MDGs (Millennium Development Goals) and conventions, combining sustainability with poverty reduction and equity;
a country-level approach linking global and local scales;
coordination and division of tasks in a multilateral approach that supports developing countries and improves coordination between donor countries and institutions;
use of Poverty Reduction Strategy Papers (PRSPs) in development planning;
transforming conflicts into tradeoffs, synergies and win-win options.
Contemporary debates surrounding global environmental governance have converged on the idea of developing a stronger and more effective institutional framework. The views on how to achieve this, however, are still hotly debated. Currently, rather than being concentrated in the United Nations Environment Programme (UNEP), international environmental responsibilities are spread across many different agencies, including: a) specialised agencies within the UN system such as the World Meteorological Organisation, the International Maritime Organisation and others; b) programmes in the UN system such as the UN Development Programme; c) the UN regional economic and social commissions; d) the Bretton Woods institutions; e) the World Trade Organisation; and f) environmentally focused mechanisms such as the Global Environment Facility and close to 500 international environmental agreements.
Some analysts also argue that multiple institutions and some degree of overlap and duplication in policies are necessary to ensure maximum output from the system. Others, however, claim that institutions have become too dispersed and lack coordination, which can damage their effectiveness in global environmental governance. Whilst there are various arguments for and against a WEO, the key challenge remains the same: how to develop a rational and effective framework that will protect the global environment efficiently.
Institutional reform
Actors inside and outside the United Nations are discussing possibilities for global environmental governance that provides a solution to current problems of fragility, coordination and coherence. Deliberation is focusing on the goal of making UNEP more efficient. A 2005 resolution recognizes “the need for more efficient environmental activities in the United Nations system, with enhanced coordination, improved policy advice and guidance, strengthened scientific knowledge, assessment and cooperation, better treaty compliance, while respecting the legal autonomy of the treaties, and better integration of environmental activities in the broader sustainable development framework.”
Proposals include:
greater and better coordination between agencies;
strengthen and acknowledge UNEP's scientific role;
identify MEA areas to strengthen coordination, cooperation and teamwork between different agreements;
increase regional presence;
implement the Bali Strategic Plan on improving technology training and support for the application of environmental measures in poor countries;
demand that UNEP and MEAs participate formally in all relevant WTO committees as observers.
strengthen UNEP's financial situation;
improve secretariats' efficiency and effectiveness.
One of the main studies addressing this issue proposes:
clearly divide tasks between development organizations, UNEP and the MEAs
adopt a political direction for environmental protection and sustainable development
authorize the UNEP Governing Council/Global Ministerial Environment Forum to adopt the UNEP medium-term strategy
allow Member States to formulate and administer MEAs, with an independent secretariat for each convention
support UNEP in periodically assessing MEAs and ensure coordination and coherence
establish directives for setting up national/regional platforms capable of incorporating MEAs in the Common Country Assessment (CCA) process and United Nations Development Assistance Framework (UNDAF)
establish a global joint planning framework
study the aptitude and efficiency of environmental activities' funding, focusing on differential costs
examine and redefine the concept of funding differential costs as applicable to existing financial mechanisms
reconsider remits, division of tasks and responsibilities between entities that provide services to the multipartite conferences. Clearly define the services that UN offices provide to MEA secretariats
propose measures aiming to improve personnel provision and geographic distribution for MEA secretariats
improve transparency in the use of resources for supporting programmes and in providing services to MEAs. Draw up a joint budget for services supplied to MEAs.
See also
References
Sources
Forum for a New World Governance
Lundqvist, Lennart J. (2004). Sweden and Environmental Governance: Straddling the Fence. Manchester University Press.
Ostrom, Elinor (1990). Governing the Commons. Cambridge: Cambridge University Press.
Srivastwa, Amit (2017). "Environmental governance in the 21st century: a case study of China's environmental governance" (PDF). researchgate.net.
Environmentalism
Environmental policy
Environmental social science concepts
Sustainable development
Transboundary environmental issues
Energy policy
Energy policies are the government's strategies and decisions regarding the production, distribution, and consumption of energy within a specific jurisdiction. Energy is essential for the functioning of modern economies because many sectors, such as industry, transport, agriculture and housing, require it. The main components of energy policy include legislation, international treaties, energy subsidies and other public policy techniques.
The energy sector emits more greenhouse gas worldwide than any other sector. Energy policies are therefore closely related to climate change mitigation policies, and a country's energy-policy decisions largely determine how high its greenhouse gas emissions are.
Purposes
Access to energy is critical for basic social needs, such as lighting, heating, cooking, and healthcare. Given the importance of energy, the price of energy has a direct effect on jobs, economic productivity, business competitiveness, and the cost of goods and services.
Frequently the dominant issue of energy policy is the risk of supply-demand mismatch (see: energy crisis). Current energy policies also address environmental issues (see: climate change), which are particularly challenging because of the need to reconcile global objectives and international rules with domestic needs and laws.
The "human dimensions" of energy use are of increasing interest to business, utilities, and policymakers. Using the social sciences to gain insights into energy consumer behavior can help policymakers to make better decisions about broad-based climate and energy options. This could facilitate more efficient energy use, renewable-energy commercialization, and carbon-emission reductions.
Approaches
The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques. Economic and energy modelling can be used by governmental or inter-governmental bodies as an advisory and analysis tool.
Energy planning is more detailed than energy policy.
National energy policy
Some governments state an explicit energy policy; others do not, but every government practices some type of energy policy. A national energy policy comprises a set of measures involving that country's laws, treaties and agency directives.
A national energy policy contains a number of elements. Important questions intrinsic to an energy policy include:
What is the extent of energy self-sufficiency for the nation?
Where will future energy sources come from?
How will future energy be consumed (e.g. among sectors)?
What are the goals for future energy intensity, the ratio of energy consumed to GDP (a simple calculation is sketched after this list)?
How can the national policy drive provincial, state and municipal functions?
What specific mechanisms (e.g. taxes, incentives, manufacturing standards) are in place to implement the overall policy?
Should the nation develop and promote a plan for reaching net zero emissions?
What fiscal policies related to energy products and services should be used (taxes, exemptions, subsidies, etc.)?
What legislation affecting energy use, such as efficiency standards and emission standards, is needed?
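As a minimal illustration of the energy-intensity question above, the sketch below computes energy intensity (energy consumed per unit of GDP) for a hypothetical base year and projects it a decade ahead under assumed growth rates; every figure is an invented placeholder rather than data for any real country.

```python
# Minimal sketch: computing and projecting energy intensity (energy / GDP).
# All numbers below are invented placeholders, not real national statistics.

def energy_intensity(energy_pj: float, gdp_billion: float) -> float:
    """Energy intensity in petajoules per billion units of GDP."""
    return energy_pj / gdp_billion

# Hypothetical base year: 6,000 PJ of primary energy, GDP of 1,500 billion.
base_energy = 6000.0
base_gdp = 1500.0
print(f"Base-year intensity: {energy_intensity(base_energy, base_gdp):.2f} PJ per billion GDP")

# Project 10 years ahead under assumed annual rates:
# GDP grows 2.5% per year, energy use grows 1.0% per year.
years = 10
gdp_growth = 0.025
energy_growth = 0.010

future_energy = base_energy * (1 + energy_growth) ** years
future_gdp = base_gdp * (1 + gdp_growth) ** years
print(f"Projected intensity after {years} years: "
      f"{energy_intensity(future_energy, future_gdp):.2f} PJ per billion GDP")
```

Real policy analysis would of course draw on national energy balances, economic accounts and far richer models, but the ratio itself is this simple.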
Relationship to other government policies
Energy policy sometimes dominates, and is sometimes dominated by, other government policies. For example, energy policy may dominate when free coal is supplied to poor families and schools, supporting social policy but causing air pollution and thus impeding health policy and environmental policy. On the other hand, energy policy may be dominated by defense policy, for example when countries build expensive nuclear power plants to supply material for bombs. Or energy policy may be dominated by other policies for a while, eventually resulting in stranded assets such as Nord Stream 2.
Energy policy is closely related to climate change policy because, totalled worldwide, the energy sector emits more greenhouse gas than any other sector.
Energy policy decisions are sometimes not taken democratically.
Corporate energy policy
In 2019, some companies “have committed to set climate targets across their operations and value chains aligned with limiting global temperature rise to 1.5°C above pre-industrial levels and reaching net-zero emissions by no later than 2050”. Corporate power purchase agreements can kickstart renewable energy projects, but the energy policies of some countries do not allow them or actively discourage them.
By type of energy
Nuclear energy
Renewable energy
Examples
China
India
Ecuador
European Union
Russia
United Kingdom
United States
By country
Energy policies vary by country.
See also
Energy balance
Energy industry
Energy security
Energy supply
Energy transition
Environmental policy
Petroleum politics
Sustainable energy
References
External links
"Energy Policies of (Country x)" series, International Energy Agency
UN-Energy - Global energy policy co-ordination
Renewable Energy Policy Network (REN21)
Information on energy institutions, policies and local energy companies by country, Enerdata Publications
Energy economics
Environmental social science
Power control
Climate change policy
Policy
Public policy
History of biotechnology
Biotechnology is the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services. From its inception, biotechnology has maintained a close relationship with society. Although now most often associated with the development of drugs, historically biotechnology has been principally associated with food, addressing such issues as malnutrition and famine. The history of biotechnology begins with zymotechnology, which commenced with a focus on brewing techniques for beer. By World War I, however, zymotechnology would expand to tackle larger industrial issues, and the potential of industrial fermentation gave rise to biotechnology. However, both the single-cell protein and gasohol projects failed to progress due to varying issues including public resistance, a changing economic scene, and shifts in political power.
Yet the formation of a new field, genetic engineering, would soon bring biotechnology to the forefront of science in society, and the intimate relationship between the scientific community, the public, and the government would ensue. These debates gained exposure in 1975 at the Asilomar Conference, where Joshua Lederberg was the most outspoken supporter for this emerging field in biotechnology. By as early as 1978, with the development of synthetic human insulin, Lederberg's claims would prove valid, and the biotechnology industry grew rapidly. Each new scientific advance became a media event designed to capture public support, and by the 1980s, biotechnology grew into a promising real industry. In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.
The field of genetic engineering remains a heated topic of discussion in today's society with the advent of gene therapy, stem cell research, cloning, and genetically modified food. While it seems only natural nowadays to link pharmaceutical drugs as solutions to health and societal problems, this relationship of biotechnology serving social needs began centuries ago.
Origins of biotechnology
Biotechnology arose from the field of zymotechnology or zymurgy, which began as a search for a better understanding of industrial fermentation, particularly beer. Beer was an important industrial, and not just social, commodity. In late 19th-century Germany, brewing contributed as much to the gross national product as steel, and taxes on alcohol proved to be significant sources of revenue to the government. In the 1860s, institutes and remunerative consultancies were dedicated to the technology of brewing. The most famous was the private Carlsberg Institute, founded in 1875, which employed Emil Christian Hansen, who pioneered the pure yeast process for the reliable production of consistent beer. Less well known were private consultancies that advised the brewing industry. One of these, the Zymotechnic Institute, was established in Chicago by the German-born chemist John Ewald Siebel.
The heyday and expansion of zymotechnology came in World War I in response to industrial needs to support the war. Max Delbrück grew yeast on an immense scale during the war to meet 60 percent of Germany's animal feed needs. Compounds of another fermentation product, lactic acid, made up for a lack of hydraulic fluid, glycerol. On the Allied side the Russian chemist Chaim Weizmann used starch to eliminate Britain's shortage of acetone, a key raw material for cordite, by fermenting maize to acetone. The industrial potential of fermentation was outgrowing its traditional home in brewing, and "zymotechnology" soon gave way to "biotechnology."
With food shortages spreading and resources fading, some dreamed of a new industrial solution. The Hungarian Károly Ereky coined the word "biotechnology" in Hungary during 1919 to describe a technology based on converting raw materials into a more useful product. He built a slaughterhouse for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat operations in the world. In a book entitled Biotechnologie, Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages. For Ereky, the term "biotechnologie" indicated the process by which raw materials could be biologically upgraded into socially useful products.
This catchword spread quickly after the First World War, as "biotechnology" entered German dictionaries and was taken up abroad by business-hungry private consultancies as far away as the United States. In Chicago, for example, the coming of prohibition at the end of World War I encouraged biological industries to create opportunities for new fermentation products, in particular a market for nonalcoholic drinks. Emil Siebel, the son of the founder of the Zymotechnic Institute, broke away from his father's company to establish his own called the "Bureau of Biotechnology," which specifically offered expertise in fermented nonalcoholic drinks.
The belief that the needs of an industrial society could be met by fermenting agricultural waste was an important ingredient of the "chemurgic movement." Fermentation-based processes generated products of ever-growing utility. In the 1940s, penicillin was the most dramatic. While it was discovered in England, it was produced industrially in the U.S. using a deep fermentation process originally developed in Peoria, Illinois. The enormous profits and the public expectations penicillin engendered caused a radical shift in the standing of the pharmaceutical industry. Doctors used the phrase "miracle drug", and the historian of its wartime use, David Adams, has suggested that to the public penicillin represented the perfect health that went together with the car and the dream house of wartime American advertising. Beginning in the 1950s, fermentation technology also became advanced enough to produce steroids on industrially significant scales. Of particular importance was the improved semisynthesis of cortisone which simplified the old 31 step synthesis to 11 steps. This advance was estimated to reduce the cost of the drug by 70%, making the medicine inexpensive and available. Today biotechnology still plays a central role in the production of these compounds and likely will for years to come.
Single-cell protein and gasohol projects
Even greater expectations of biotechnology were raised during the 1960s by a process that grew single-cell protein. When the so-called protein gap threatened world hunger, producing food locally by growing it from waste seemed to offer a solution. It was the possibility of growing microorganisms on oil that captured the imagination of scientists, policy makers, and commerce. Major companies such as British Petroleum (BP) staked their futures on it. In 1962, BP built a pilot plant at Cap de Lavera in Southern France to publicize its product, Toprina. Initial research work at Lavera was done by Alfred Champagnat. In 1963, construction started on BP's second pilot plant, at the Grangemouth Oil Refinery in Britain.
As there was no well-accepted term to describe the new foods, in 1966 the term "single-cell protein" (SCP) was coined at MIT to provide an acceptable and exciting new title, avoiding the unpleasant connotations of microbial or bacterial.
The "food from oil" idea became quite popular by the 1970s, when facilities for growing yeast fed by n-paraffins were built in a number of countries. The Soviets were particularly enthusiastic, opening large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) and Kirishi (1974).
By the late 1970s, however, the cultural climate had completely changed, as the growth in SCP interest had taken place against a shifting economic and cultural scene. First, the price of oil rose catastrophically in 1974, so that its cost per barrel was five times greater than it had been two years earlier. Second, despite continuing hunger around the world, anticipated demand also began to shift from humans to animals. The program had begun with the vision of growing food for Third World people, yet the product was instead launched as an animal food for the developed world. The rapidly rising demand for animal feed made that market appear economically more attractive. The ultimate downfall of the SCP project, however, came from public resistance.
This was particularly vocal in Japan, where production came closest to fruition. For all their enthusiasm for innovation and traditional interest in microbiologically produced foods, the Japanese were the first to ban the production of single-cell proteins. The Japanese ultimately were unable to separate the idea of their new "natural" foods from the far from natural connotation of oil. These arguments were made against a background of suspicion of heavy industry in which anxiety over minute traces of petroleum was expressed. Thus, public resistance to an unnatural product led to the end of the SCP project as an attempt to solve world hunger.
Also, in 1989 in the USSR, the public environmental concerns made the government decide to close down (or convert to different technologies) all 8 paraffin-fed-yeast plants that the Soviet Ministry of Microbiological Industry had by that time.
In the late 1970s, biotechnology offered another possible solution to a societal crisis. The escalation in the price of oil in 1974 had sharply increased the cost of the Western world's energy. In response, the U.S. government promoted the production of gasohol, gasoline with 10 percent alcohol added, as an answer to the energy crisis. In 1979, when the Soviet Union sent troops to Afghanistan, the Carter administration cut off supplies of agricultural produce to the USSR in retaliation, creating an agricultural surplus in the U.S. As a result, fermenting the agricultural surpluses to synthesize fuel seemed to be an economical solution to the shortage of oil threatened by the Iran–Iraq War. Before the new direction could be taken, however, the political wind changed again: the Reagan administration came to power in January 1981 and, with the declining oil prices of the 1980s, ended support for the gasohol industry before it was fully established.
Biotechnology seemed to be the solution for major social problems, including world hunger and the energy crises. In the 1960s, it was believed that radical measures would be needed to address world starvation, and biotechnology seemed to provide an answer. However, the solutions proved to be too expensive and socially unacceptable, and solving world hunger through SCP food was dismissed. In the 1970s, the food crisis was succeeded by the energy crisis, and here too biotechnology seemed to provide an answer. But once again, costs proved prohibitive as oil prices slumped in the 1980s. Thus, in practice, the implications of biotechnology were not fully realized in these situations. But this would soon change with the rise of genetic engineering.
Genetic engineering
The origins of biotechnology culminated with the birth of genetic engineering. There were two key events that have come to be seen as scientific breakthroughs beginning the era that would unite genetics with biotechnology. One was the 1953 discovery of the structure of DNA, by Watson and Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the DNA of another. This approach could, in principle, enable bacteria to adopt the genes and produce proteins of other organisms, including humans. Popularly referred to as "genetic engineering," it came to be defined as the basis of new biotechnology.
Genetic engineering proved to be a topic that thrust biotechnology into the public scene, and the interaction between scientists, politicians, and the public defined the work that was accomplished in this area. Technical developments during this time were revolutionary and at times frightening. In December 1967, the first heart transplant by Christiaan Barnard reminded the public that the physical identity of a person was becoming increasingly problematic. While poetic imagination had always seen the heart at the center of the soul, now there was the prospect of individuals being defined by other people's hearts. During the same month, Arthur Kornberg announced that he had managed to biochemically replicate a viral gene. "Life had been synthesized," said the head of the National Institutes of Health. Genetic engineering was now on the scientific agenda, as it was becoming possible to identify genetic characteristics with diseases such as beta thalassemia and sickle-cell anemia.
Responses to scientific achievements were colored by cultural skepticism. Scientists and their expertise were looked upon with suspicion. In 1968, an immensely popular work, The Biological Time Bomb, was written by the British journalist Gordon Rattray Taylor. The author's preface saw Kornberg's discovery of replicating a viral gene as a route to lethal doomsday bugs. The publisher's blurb for the book warned that within ten years, "You may marry a semi-artificial man or woman…choose your children's sex…tune out pain…change your memories…and live to be 150 if the scientific revolution doesn’t destroy us first." The book ended with a chapter called "The Future – If Any." While it is rare for current science to be represented in the movies, in this period of "Star Trek", science fiction and science fact seemed to be converging. "Cloning" became a popular word in the media. Woody Allen satirized the cloning of a person from a nose in his 1973 movie Sleeper, and cloning Adolf Hitler from surviving cells was the theme of the 1976 novel by Ira Levin, The Boys from Brazil.
In response to these public concerns, scientists, industry, and governments increasingly linked the power of recombinant DNA to the immensely practical functions that biotechnology promised. One of the key scientific figures who attempted to highlight the promising aspects of genetic engineering was Joshua Lederberg, a Stanford professor and Nobel laureate. While in the 1960s "genetic engineering" described eugenics and work involving the manipulation of the human genome, Lederberg stressed research that would involve microbes instead. Lederberg emphasized the importance of focusing on curing living people. His 1963 paper, "Biological Future of Man", suggested that, while molecular biology might one day make it possible to change the human genotype, "what we have overlooked is euphenics, the engineering of human development." Lederberg constructed the word "euphenics" to emphasize changing the phenotype after conception rather than the genotype, which would affect future generations.
With the discovery of recombinant DNA by Cohen and Boyer in 1973, the idea that genetic engineering would have major human and societal consequences was born. In July 1974, a group of eminent molecular biologists headed by Paul Berg wrote to Science suggesting that the consequences of this work were so potentially destructive that there should be a pause until its implications had been thought through. This suggestion was explored at a meeting in February 1975 at California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic outcome was an unprecedented call for a halt in research until it could be regulated in such a way that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of Health (NIH) guidelines were established.
Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper countering the pessimism and fears of misuses with the benefits conferred by successful use. He described "an early chance for a technology of untold importance for diagnostic and therapeutic medicine: the ready production of an unlimited variety of human proteins. Analogous applications may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the improvement of microbes for the production of antibiotics and of special industrial chemicals." In June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee (DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of experiments and the appropriate physical conditions for their pursuit, as well as a list of things too dangerous to perform at all. Moreover, modified organisms were not to be tested outside the confines of a laboratory or allowed into the environment.
Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead to the development of the biotechnology industry. Over the next two years, as public concern over the dangers of recombinant DNA research grew, so too did interest in its technical and practical applications. Curing genetic diseases remained in the realms of science fiction, but it appeared that producing simple human proteins could be good business. Insulin, one of the smaller, best characterized and understood proteins, had been used in treating type 1 diabetes for half a century. It had been extracted from animals in a chemically slightly different form from the human product. Yet, if one could produce synthetic human insulin, one could meet an existing demand with a product whose approval would be relatively easy to obtain from regulators. In the period 1975 to 1977, synthetic "human" insulin represented the aspirations for new products that could be made with the new biotechnology. Microbial production of synthetic human insulin was finally announced in September 1978 by a startup company, Genentech. That company did not commercialize the product itself; instead, it licensed the production method to Eli Lilly and Company. 1978 also saw the first application for a patent on a gene, the gene which produces human growth hormone, by the University of California, thus introducing the legal principle that genes could be patented. Since that filing, 20% of the more than 20,000 to 25,000 genes mapped in the human genome have been patented.
The radical shift in the connotation of "genetic engineering" from an emphasis on the inherited characteristics of people to the commercial production of proteins and therapeutic drugs was nurtured by Joshua Lederberg. His broad concerns since the 1960s had been stimulated by enthusiasm for science and its potential medical benefits. Countering calls for strict regulation, he expressed a vision of potential utility. Against a belief that new techniques would entail unmentionable and uncontrollable consequences for humanity and the environment, a growing consensus on the economic value of recombinant DNA emerged.
Biosensor technology
The MOSFET was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, L.C. Clark and C. Lyons invented the biosensor. Biosensor MOSFETs (BioFETs) were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.
The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld for electrochemical and biological applications in 1970. The adsorption FET (ADFET) was patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET was demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET in which the gate is set at a certain distance and the metal gate is replaced by an ion-sensitive membrane, an electrolyte solution and a reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
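As a rough quantitative aside (an illustration added here, not a figure from the source), the pH sensitivity of an ideal ISFET is bounded by the Nernst limit, which at room temperature works out to roughly 59 mV per pH unit:

```latex
% Nernstian upper bound on ISFET pH response (T = 298 K assumed)
\[
\left|\frac{\Delta V_{\mathrm{th}}}{\Delta \mathrm{pH}}\right|
\;\le\; 2.303\,\frac{k_{B}T}{q}
\;\approx\; 59\ \mathrm{mV/pH}
\]
```

Practical devices approach but generally do not exceed this bound; how closely they approach it depends on the buffer capacity of the ion-sensitive membrane surface.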
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
Biotechnology and industry
With ancestral roots in industrial microbiology that date back centuries, the new biotechnology industry grew rapidly beginning in the mid-1970s. Each new scientific advance became a media event designed to capture investment confidence and public support. Although market expectations and social benefits of new products were frequently overstated, many people were prepared to see genetic engineering as the next great advance in technological progress. By the 1980s, biotechnology had grown into a nascent industry in its own right, providing titles for emerging trade organizations such as the Biotechnology Industry Organization (BIO).
The main focus of attention after insulin was on the potential profit makers in the pharmaceutical industry: human growth hormone and what promised to be a miraculous cure for viral diseases, interferon. Cancer was a central target in the 1970s because the disease was increasingly linked to viruses. By 1980, a new company, Biogen, had produced interferon through recombinant DNA. The emergence of interferon and the possibility of curing cancer raised money in the community for research and increased the enthusiasm of an otherwise uncertain and tentative society. Moreover, the 1970s plight of cancer was joined in the 1980s by AIDS, offering an enormous potential market for a successful therapy, and more immediately, a market for diagnostic tests based on monoclonal antibodies. By 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA): synthetic insulin, human growth hormone, hepatitis B vaccine, alpha-interferon, and tissue plasminogen activator (tPA), for lysis of blood clots. By the end of the 1990s, however, 125 more genetically engineered drugs would be approved.
The Great Recession led to several changes in the way the biotechnology industry was financed and organized. First, it led to a decline in overall financial investment in the sector, globally; and second, in some countries like the UK it led to a shift from business strategies focused on going for an initial public offering (IPO) to seeking a trade sale instead. By 2011, financial investment in the biotechnology industry started to improve again and by 2014 the global market capitalization reached $1 trillion.
Genetic engineering also reached the agricultural front. There has been tremendous progress since the market introduction of the genetically engineered Flavr Savr tomato in 1994. Ernst & Young reported that in 1998, 30% of the U.S. soybean crop was expected to come from genetically engineered seeds, and about 30% of the U.S. cotton and corn crops were likewise expected to be products of genetic engineering.
Genetic engineering in biotechnology stimulated hopes both for therapeutic proteins and drugs and for biological organisms themselves, such as seeds, pesticides, engineered yeasts, and modified human cells for treating genetic diseases. From the perspective of its commercial promoters, scientific breakthroughs, industrial commitment, and official support were finally coming together, and biotechnology became a normal part of business. No longer were the proponents of the economic and technological significance of biotechnology iconoclasts. Their message had finally become accepted and incorporated into the policies of governments and industry.
Global trends
According to Burrill and Company, an industry investment bank, over $350 billion has been invested in biotech since the emergence of the industry, and global revenues rose from $23 billion in 2000 to more than $50 billion in 2005. The greatest growth has been in Latin America but all regions of the world have shown strong growth trends. By 2007 and into 2008, though, a downturn in the fortunes of biotech emerged, at least in the United Kingdom, as the result of declining investment in the face of failure of biotech pipelines to deliver and a consequent downturn in return on investment.
See also
Timeline of biotechnology
Genetically modified organism
Green Revolution
References
Further reading
Bud, Robert. "Biotechnology in the Twentieth Century." Social Studies of Science 21.3 (1991), 415–457.
Dronamraju, Krishna R. Biological and Social Issues in Biotechnology Sharing. Brookfield: Ashgate Publishing Company, 1998.
Rasmussen, Nicolas. Gene Jockeys: Life Science and the Rise of Biotech Enterprise. Baltimore: Johns Hopkins University Press, 2014.
External links
The Life Sciences Foundation
Biotechnology, history of
Life sciences industry
Biomimetics
Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from the Ancient Greek βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has gone through evolution over the 3.8 billion years since life is estimated to have appeared on the Earth. It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and with the environment, and this interplay gives rise to many of the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many biological materials, surfaces, and objects provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. The economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics had entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
A more recent example of biomimicry is "managemANT", described by Johannes-Paul Fladerer and Ernst Kurzmann. This term (a combination of the words "management" and "ant") describes the use of the behavioural strategies of ants in economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, whose findings demonstrated the potential economic and environmental benefits of biomimicry.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from prototypes to technologies that might become commercially usable. Murray's law, which in its conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter that gives a minimum-mass engineering system.
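For illustration (the symbols here are an added convention, not from the source), Murray's law in its conventional form relates the radius of a parent vessel to the radii of its daughter branches; equivalently, the flow carried by each vessel scales with the cube of its radius:

```latex
% Murray's law for a parent vessel of radius r_0 branching into n daughters
\[
r_{0}^{3} \;=\; \sum_{i=1}^{n} r_{i}^{3},
\qquad\text{equivalently}\qquad
Q \;\propto\; r^{3}.
\]
```

The biomimetic re-derivation mentioned above replaces the biological cost function with the mass of the piping, yielding analogous optimum-diameter formulas for engineered networks.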
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The aerodynamic, streamlined design of the improved Japanese high-speed train, the 500 Series Shinkansen, was modelled after the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces, and Pleobot, a shrimp-inspired robot to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs is much higher than that of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of a building's life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form and instead seeking to use nature to solve problems of the building's functioning and to save energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers studied termites' ability to maintain virtually constant temperature and humidity in their mounds in Africa despite large swings in outside temperature. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down on over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade: the green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants, and the damp plant substrate further supports the cooling effect.
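As a sketch of the underlying principle (a textbook idealization, not a model of this particular façade), the Venturi effect follows from combining mass continuity with Bernoulli's equation for incompressible flow: where the channel narrows, the air speeds up and its static pressure drops, which is what drives air through the ventilation gap.

```latex
% Continuity and Bernoulli for incompressible flow through a narrowing channel
\[
A_{1}v_{1} = A_{2}v_{2},
\qquad
p_{1} + \tfrac{1}{2}\rho v_{1}^{2} = p_{2} + \tfrac{1}{2}\rho v_{2}^{2}
\;\;\Rightarrow\;\;
p_{2} < p_{1}\ \text{when}\ A_{2} < A_{1}.
\]
```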
Scientists at Shanghai University were able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic its excellent humidity control. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride with a water vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold. Flectofold was inspired by the trapping system developed by the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are lightweight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion. In classic design problems, strength and toughness tend to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules up to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure. Nacre shows a brick-and-mortar-like structure, with thick mineral layers (0.2–0.9 μm) of closely packed aragonite and a thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures are already produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials. Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and are shown to enhance the fracture toughness of leaves, key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self-supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts have also been made to mimic the design of nacre in artificial composite materials using fused deposition modelling, and the helicoidal structures of stomatopod clubs have been imitated in the fabrication of high-performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state; all other pixels do not update until a signal is received.
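A minimal sketch of this event-driven update scheme is given below; the array layout, event format, and names are illustrative assumptions, not the interface of any actual event camera or neuromorphic chip.

```python
import numpy as np

# Minimal sketch of event-driven pixel updates: instead of refreshing every
# pixel on every frame, only pixels that receive an event change state.
HEIGHT, WIDTH = 4, 4
state = np.zeros((HEIGHT, WIDTH), dtype=np.int8)  # last reported polarity per pixel

# Each event is (row, col, polarity): +1 for a brightness increase, -1 for a decrease.
events = [(0, 1, +1), (2, 3, -1), (0, 1, -1)]

for row, col, polarity in events:
    state[row, col] = polarity  # only the addressed pixel updates; all others keep their state

print(state)
```

The point of the design is that computation and data transfer scale with the number of events rather than with the full pixel count, mirroring the sparse, spike-like signalling of biological neurons.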
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature, including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California, Santa Barbara borrowed and simplified the chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion, creating copolyampholytes and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesives inspired by mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity emerges when a solid surface possesses minute roughness, which forms interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), defined as the ratio of the actual solid-liquid contact area to its flat projection, which influences the contact angle. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, their contact angles determined by the distribution of wetted and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area of the liquid-air interface (fLA) and Rf, leading to surfaces that actively repel liquids.
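For reference, the standard wetting relations can be written with the same Rf and fLA used above (this formulation is a textbook addition, not taken from the source); θ0 denotes the contact angle on the corresponding smooth surface:

```latex
% Homogeneous (Wenzel) wetting and composite wetting with air pockets
\[
\cos\theta = R_{f}\cos\theta_{0},
\qquad
\cos\theta = R_{f}\cos\theta_{0} \;-\; f_{LA}\left(R_{f}\cos\theta_{0} + 1\right).
\]
```

In the composite case, increasing fLA drives cos θ toward −1 (θ → 180°), which is the formal counterpart of the liquid repellency described above.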
The inspiration for crafting such surfaces draws from nature's ingenuity, prominently illustrated by the renowned "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel even low surface tension liquids while maintaining high contact angles and near-zero contact angle hysteresis.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances. These geometries include overhangs that widen beneath the surface, enabling repellency even for liquids whose intrinsic contact angles on a flat surface are low. Researchers have successfully fabricated various re-entrant geometries, offering a pathway for practical applications in diverse fields. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, and more, presenting innovative solutions to challenges in biomedicine, desalination, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-adaptability, self-repair, and energy-autonomy. As plants do not have a centralized decision making unit (i.e. a brain), most plants have a decentralized autonomous system in various organs and tissues of the plant. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop artificial Venus flytrap (AVFT) robots. Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
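The two-stimuli timing rule described above can be sketched as a toy check; the function name, input format, and threshold constant below are illustrative assumptions, not part of any published AVFT controller.

```python
# Toy sketch of the flytrap-style trigger rule: the trap "fires" only if two
# hair deflections occur within a 20-second window (values from the text above).
TRIGGER_WINDOW_S = 20.0

def should_close(touch_times):
    """Return True if any two consecutive touches fall within the trigger window."""
    touch_times = sorted(touch_times)
    return any(t2 - t1 <= TRIGGER_WINDOW_S
               for t1, t2 in zip(touch_times, touch_times[1:]))

print(should_close([0.0, 12.5]))   # True: second touch arrives within 20 s
print(should_close([0.0, 45.0]))   # False: stimuli too far apart
```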
Another example of mimicking plants is Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than those obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colours in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light which is reflected from the skin of these fruits is not polarised, unlike the light arising from man-made replicas obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handed circularly polarised light.
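As a back-of-the-envelope illustration (standard thin-film interference, not a measurement from the source), a periodic multilayer reflects most strongly at wavelengths satisfying the Bragg condition, where n d is the optical thickness of one period and θ the internal angle of propagation:

```latex
% Bragg condition for constructive reflection from a periodic multilayer
\[
m\,\lambda \;=\; 2\,n\,d\cos\theta, \qquad m = 1, 2, \dots
\]
```

Sub-micrometre cellulose layer spacings in the epicarp therefore place the first-order reflection peak in the visible range, consistent with the metallic blue-green appearance described above.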
The fruit of Elaeocarpus angustifolius also shows structural colour, which arises from the presence of specialised cells called iridosomes that have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intracellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside its epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures smaller than the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachises adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuoles could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature: the organelle has no basic shape or size; its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what is necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and private firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery since particles release contents upon exposure to specific pH levels.
See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology
References
Further reading
Benyus, J. M. (2001). Along Came a Spider. Sierra, 86(4), 46–47.
Hargroves, K. D. & Smith, M. H. (2006). Innovation inspired by nature: Biomimicry. Ecos, (129), 27–28.
Marshall, A. (2009). Wild Design: The Ecomimicry Project, North Atlantic Books: Berkeley.
Passino, Kevin M. (2004). Biomimicry for Optimization, Control, and Automation. Springer.
Pyper, W. (2006). Emulating nature: The rise of industrial ecology. Ecos, (129), 22–26.
Smith, J. (2007). It's only natural. The Ecologist, 37(8), 52–55.
Thompson, D'Arcy W., On Growth and Form. Dover 1992 reprint of 1942 2nd ed. (1st ed., 1917).
Vogel, S. (2000). Cats' Paws and Catapults: Mechanical Worlds of Nature and People. Norton.
External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature - National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Evolutionary biology
Biotechnology
Bioinformatics
Biological engineering
Biophysics
Industrial ecology
Bionics
Water conservation
Renewable energy
Sustainable transport
Dystopia
A dystopia, also called a cacotopia or anti-utopia, is a community or society that is extremely bad or frightening. It is often treated as an antonym of utopia, a term coined by Sir Thomas More that figures as the title of his best-known work, published in 1516, which created a blueprint for an ideal society with minimal crime, violence, and poverty. The relationship between utopia and dystopia is, in actuality, not one of simple opposition, as many dystopias claim to be utopias and vice versa.
Dystopias are often characterized by fear or distress, tyrannical governments, environmental disaster, or other characteristics associated with a cataclysmic decline in society. Themes typical of a dystopian society include: complete control over the people in a society through the use of propaganda and police state tactics, heavy censoring of information or denial of free thought, worshiping an unattainable goal, the complete loss of individuality, and heavy enforcement of conformity. Despite certain overlaps, dystopian fiction is distinct from post-apocalyptic fiction, and an undesirable society is not necessarily dystopian. Dystopian societies appear in many fictional works and artistic representations, particularly in historical fiction, such as A Tale of Two Cities (1859) by Charles Dickens, Quo Vadis? by Henryk Sienkiewicz, and A Man for All Seasons (1960) by Robert Bolt, in stories set in alternate history timelines, like Robert Harris' Fatherland (1992), or in the future. Famous examples set in the future include Robert Hugh Benson's Lord of the World (1907), Yevgeny Zamyatin's We (1920), Aldous Huxley's Brave New World (1932), George Orwell's Nineteen Eighty-Four (1949), and Ray Bradbury's Fahrenheit 451 (1953). Dystopian societies appear in many sub-genres of fiction and are often used to draw attention to society, environment, politics, economics, religion, psychology, ethics, science, or technology. Some authors use the term to refer to existing societies, many of which are, or have been, totalitarian states or societies in an advanced state of collapse. Dystopias, through an exaggerated worst-case scenario, often make a criticism of a current trend, societal norm, or political system.
Etymology
"Dustopia", the original spelling of "dystopia", first appeared in Lewis Henry Younge's Utopia: or Apollo's Golden Days in 1747. Additionally, dystopia was used as an antonym for utopia by John Stuart Mill in one of his 1868 Parliamentary Speeches (Hansard Commons) by adding the prefix "dys" ( "bad") to "topia", reinterpreting the initial "u" as the prefix "eu" ( "good") instead of "ou" ( "not"). It was used to denounce the government's Irish land policy: "It is, perhaps, too complimentary to call them Utopians, they ought rather to be called dys-topians, or caco-topians. What is commonly called Utopian is something too good to be practicable; but what they appear to favour is too bad to be practicable".
Decades before the first documented use of the word "dystopia" came "cacotopia"/"kakotopia" (using κακός, "bad, wicked"), originally proposed in 1818 by Jeremy Bentham: "As a match for utopia (or the imagined seat of the best government) suppose a cacotopia (or the imagined seat of the worst government) discovered and described". Though dystopia became the more popular term, cacotopia finds occasional use; Anthony Burgess, author of A Clockwork Orange (1962), said it was a better fit for Orwell's Nineteen Eighty-Four because "it sounds worse than dystopia".
Theory
Some scholars, such as Gregory Claeys and Lyman Tower Sargent, make certain distinctions between typical synonyms of dystopias. For example, Claeys and Sargent define literary dystopias as societies imagined as substantially worse than the society in which the author writes. Some of these are anti-utopias, which criticise attempts to implement various concepts of utopia. In the most comprehensive treatment of the literary and real expressions of the concept, Dystopia: A Natural History, Claeys offers a historical approach to these definitions. Here the tradition is traced from early reactions to the French Revolution. Its commonly anti-collectivist character is stressed, and the addition of other themes—the dangers of science and technology, of social inequality, of corporate dictatorship, of nuclear war—is also traced. A psychological approach is also favored here, with the principle of fear being identified with despotic forms of rule, carried forward from the history of political thought, and group psychology introduced as a means of understanding the relationship between utopia and dystopia. Andrew Norton-Schwartzbard noted that "written many centuries before the concept 'dystopia' existed, Dante's Inferno in fact includes most of the typical characteristics associated with this genre – even if placed in a religious framework rather than in the future of the mundane world, as modern dystopias tend to be". In the same vein, Vicente Angeloti remarked that "George Orwell's emblematic phrase, 'a boot stamping on a human face – forever', would aptly describe the situation of the denizens in Dante's Hell. Conversely, Dante's famous inscription 'Abandon all hope, ye who enter here' would have been equally appropriate if placed at the entrance to Orwell's 'Ministry of Love' and its notorious 'Room 101'."
Society
Dystopias typically reflect contemporary sociopolitical realities and extrapolate worst-case scenarios as warnings for necessary social change or caution. Dystopian fictions invariably reflect the concerns and fears of their creators' contemporaneous culture. Due to this, they can be considered a subject of social studies. In dystopias, citizens may live in a dehumanized state, be under constant surveillance, or have a fear of the outside world. In the film What Happened to Monday, the protagonists (identical septuplet sisters) risk their lives by taking turns going out into the outside world because of the one-child policy in place in this futuristic dystopian society.
In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode. Christopher Schmidt notes that, while the world goes to waste for future generations, people distract themselves from disaster by passively watching it as entertainment.
In the 2010s, there was a surge of popular dystopian young adult literature and blockbuster films. Some have commented on this trend, saying that "it is easier to imagine the end of the world than it is to imagine the end of capitalism". Cultural theorist and critic Mark Fisher identified the phrase as encompassing the theory of capitalist realism—the perceived "widespread sense that not only is capitalism the only viable political and economic system, but also that it is now impossible even to imagine a coherent alternative to it"—and used the above quote as the title to the opening chapter of his book, Capitalist Realism: Is There No Alternative?. In the book, he also refers to dystopian film such as Children of Men (originally a novel by P. D. James) to illustrate what he describes as the "slow cancellation of the future". Theo James, an actor in Divergent (originally a novel by Veronica Roth), explains that "young people in particular have such a fascination with this kind of story [...] It's becoming part of the consciousness. You grow up in a world where it's part of the conversation all the time – the statistics of our planet warming up. The environment is changing. The weather is different. These are things that are very visceral and very obvious, and they make you question the future, and how we will survive. It's so much a part of everyday life that young people inevitably – consciously or not – are questioning their futures and how the Earth will be. I certainly do. I wonder what kind of world my children's kids will live in."
The entire substantial sub-genre of alternative history works depicting a world in which Nazi Germany won the Second World War can be considered as dystopias. So can other works of alternative history in which a historical turning point led to a manifestly repressive world. Examples include the 2004 mockumentary C.S.A.: The Confederate States of America and Ben Winters' Underground Airlines, in which slavery in the United States continues to the present, with "electronic slave auctions" carried out via the Internet and slaves controlled by electronic devices implanted in their spines, and Keith Roberts' Pavane, in which 20th-century Britain is ruled by a Catholic theocracy and the Inquisition is actively torturing and burning "heretics".
Common themes
Politics
In When the Sleeper Wakes, H. G. Wells depicted the governing class as hedonistic and shallow. George Orwell contrasted Wells's world to that depicted in Jack London's The Iron Heel, where the dystopian rulers are brutal and dedicated to the point of fanaticism, which Orwell considered more plausible.
The political principles at the root of fictional utopias (or "perfect worlds") are idealistic in principle and result in positive consequences for the inhabitants; the political principles on which fictional dystopias are based, while often based on utopian ideals, result in negative consequences for inhabitants because of at least one fatal flaw.
Dystopias are often filled with pessimistic views of the ruling class or a government that is brutal or uncaring, ruling with an "iron fist". Dystopian governments are sometimes ruled by a fascist or communist regime or dictator. These dystopian government establishments often have protagonists or groups that lead a "resistance" to enact change within their society, as is seen in Alan Moore's V for Vendetta.
Dystopian political situations are depicted in novels such as We, Parable of the Sower, Darkness at Noon, Nineteen Eighty-Four, Brave New World, The Handmaid's Tale, The Hunger Games, Divergent and Fahrenheit 451, and in such films as Metropolis, Brazil (1985), Battle Royale, FAQ: Frequently Asked Questions, Soylent Green, The Purge: Election Year, Logan's Run, and The Running Man (1987). An earlier example is Jules Verne's The Begum's Millions, with its depiction of Stahlstadt (Steel City), a vast industrial and mining complex which is totally devoted to the production of ever more powerful and destructive weapons, and which is ruled by the dictatorial and totally ruthless Prof. Schultze – a militarist and racist who dreams of world conquest and, as the first step, plots the complete destruction of the nearby Ville-France, a utopian model city constructed and maintained with public health as its government's primary concern.
Economics
The economic structures of dystopian societies in literature and other media have many variations, as the economy often relates directly to the elements that the writer is depicting as the source of the oppression. There are several archetypes that such societies tend to follow. One theme is the dichotomy of planned economies versus free market economies, a conflict which is found in such works as Ayn Rand's Anthem and Henry Kuttner's short story "The Iron Standard". Another example of this is reflected in Norman Jewison's film Rollerball (1975).
Some dystopias, such as that of Nineteen Eighty-Four, feature black markets with goods that are dangerous and difficult to obtain, or the characters may be at the mercy of the state-controlled economy. Kurt Vonnegut's Player Piano depicts a dystopia in which the centrally controlled economic system has indeed made material abundance plentiful but deprived the mass of humanity of meaningful labor; virtually all work is menial and unsatisfying, and only the small group that achieves education is admitted to the elite and its work. In Tanith Lee's Don't Bite the Sun, there is no want of any kind – only unabashed consumption and hedonism, leading the protagonist to begin looking for a deeper meaning to existence. Even in dystopias where the economic system is not the source of the society's flaws, as in Brave New World, the state often controls the economy; a character, reacting with horror to the suggestion of not being part of the social body, cites as a reason that everyone works for everyone else.
Other works feature extensive privatization and corporatism; both consequences of capitalism, where privately owned and unaccountable large corporations have replaced the government in setting policy and making decisions. They manipulate, infiltrate, control, bribe, are contracted by and function as government. This is seen in the novels Jennifer Government and Oryx and Crake and the movies Alien, Avatar, RoboCop, Visioneers, Idiocracy, Soylent Green, WALL-E and Rollerball. Corporate republics are common in the cyberpunk genre, as in Neal Stephenson's Snow Crash and Philip K. Dick's Do Androids Dream of Electric Sheep? (as well as the film Blade Runner, influenced by and based upon Dick's novel).
Class
Dystopian fiction frequently draws stark contrasts between the privileges of the ruling class and the dreary existence of the working class. In the 1932 novel Brave New World by Aldous Huxley, a class system is prenatally determined with Alphas, Betas, Gammas, Deltas and Epsilons, with the lower classes having reduced brain function and special conditioning to make them satisfied with their position in life. Outside of this society there also exist several human settlements that live in the conventional way but which the World Government describes as "savages".
In George Orwell's Nineteen Eighty-Four, the dystopian society described within has a tiered class structure with the ruling elite "Inner Party" at the top, the "Outer Party" below them functioning as a type of middle-class with minor privileges, and the working-class "Proles" (short for proletariat) at the bottom of the hierarchy with few rights, yet making up the vast majority of the population.
In Ypsilon Minus by Herbert W. Franke, people are divided into numerous alphabetically ranked groups.
In the film Elysium, the majority of Earth's population on the surface lives in poverty with little access to health care and are subject to worker exploitation and police brutality, while the wealthy live above the Earth in luxury with access to technologies that cure all diseases, reverse aging, and regenerate body parts.
Written a century earlier, the future society depicted in H. G. Wells' The Time Machine had started in a similar way to Elysium – the workers consigned to living and working in underground tunnels while the wealthy live on a surface made into an enormous beautiful garden. But over a long time period, the roles were eventually reversed – the rich degenerated and became a decadent "livestock" regularly caught and eaten by the underground cannibal Morlocks.
Family
Some fictional dystopias, such as Brave New World and Fahrenheit 451, have eradicated the family and kept it from re-establishing itself as a social institution. In Brave New World, where children are reproduced artificially, the concepts of "mother" and "father" are considered obscene. In some novels, such as We, the state is hostile to motherhood, as a pregnant woman from One State is in revolt.
Religion
In dystopias, religious groups may play the role of oppressed or oppressor. One of the earliest examples is Robert Hugh Benson's Lord of the World, about a futuristic world where Marxists and Freemasons led by the Antichrist have taken over the world and the only remaining source of dissent is a tiny and persecuted Catholic minority. In Brave New World the establishment of the state included lopping off the tops of all crosses (as symbols of Christianity) to make them "T"s (as symbols of Henry Ford's Model T). In C. S. Lewis's That Hideous Strength the leaders of the fictional National Institute of Coordinated Experiments, a joint venture of academia and government to promote an anti-traditionalist social agenda, are contemptuous of religion and require initiates to desecrate Christian symbols. Margaret Atwood's novel The Handmaid's Tale takes place in a future United States under a Christian-based theocratic regime.
Identity
In the Russian novel We by Yevgeny Zamyatin, first published in 1921, people are permitted to live out of public view twice a week for one hour and are only referred to by numbers instead of names. The latter feature also appears in the film THX 1138. In some dystopian works, such as Kurt Vonnegut's Harrison Bergeron, society forces individuals to conform to radical egalitarian social norms that discourage or suppress accomplishment or even competence as forms of inequality. Complete conformity and suppression of individuality (to the point of acting in unison) is also depicted in Madeleine L'Engle's A Wrinkle in Time.
Violence
Violence is prevalent in many dystopias, often in the form of war, but also in urban crime led by (predominantly teenage) gangs (e.g. A Clockwork Orange), or rampant crime met by blood sports (e.g. Battle Royale, The Running Man, The Hunger Games, Divergent, and The Purge). It is also explored in Suzanne Berne's essay "Ground Zero", where she describes her experience of the aftermath of 11 September 2001.
Nature
Fictional dystopias are commonly urban and frequently isolate their characters from all contact with the natural world. Sometimes they require their characters to avoid nature, as when walks are regarded as dangerously anti-social in Ray Bradbury's Fahrenheit 451, as well as within Bradbury's short story "The Pedestrian". In That Hideous Strength, science coordinated by government is directed toward the control of nature and the elimination of natural human instincts. In Brave New World, the lower class is conditioned to be afraid of nature but also to visit the countryside and consume transport and games to promote economic activity. Lois Lowry's The Giver shows a society where technology and the desire to create a utopia have led humanity to enforce climate control on the environment, as well as to eliminate many undomesticated species and to provide psychological and pharmaceutical repellent against human instincts. E. M. Forster's "The Machine Stops" depicts a highly changed global environment which forces people to live underground due to atmospheric contamination. As Angel Galdon-Rodriguez points out, this sort of isolation caused by external toxic hazard is later used by Hugh Howey in his Silo series of dystopias.
Excessive pollution that destroys nature is common in many dystopian films, such as The Matrix, RoboCop, WALL-E, April and the Extraordinary World and Soylent Green, as well as in videogames like Half-Life 2. A few "green" fictional dystopias do exist, such as in Michael Carson's short story "The Punishment of Luxury", and Russell Hoban's Riddley Walker. The latter is set in the aftermath of nuclear war, "a post-nuclear holocaust Kent, where technology has reduced to the level of the Iron Age".
Science and technology
In contrast to technologically utopian claims, which view technology as a beneficial addition to all aspects of humanity, technological dystopia focuses largely (but not always) on the negative effects caused by new technology.
Technologies reflect and encourage the worst aspects of human nature. Jaron Lanier, a digital pioneer, has become a technological dystopian: "I think it's a way of interpreting technology in which people forgot taking responsibility." "'Oh, it's the computer that did it, not me.' 'There's no more middle class? Oh, it's not me. The computer did it'" This quote suggests that people begin not only to blame technology for changes in lifestyle but also to treat technology as omnipotent. It also points to a technological determinist perspective in terms of reification.
Technologies harm our interpersonal communication, relationships, and communities. Increased time spent using technology can decrease communication among family members and friends. Virtual space misleadingly heightens the impact of real presence; people now often resort to technological media for communication.
Technologies reinforce hierarchies: they concentrate knowledge and skills, increase surveillance and erode privacy, widen inequalities of power and wealth, and cede control to machines. Douglas Rushkoff, a technological utopian, states in his article that professional designers "re-mystified" the computer so it wasn't so readable anymore; users had to depend on special programs built into the software that were incomprehensible to normal users.
New technologies are sometimes regressive (worse than previous technologies).
The unforeseen impacts of technology are negative. "The most common way is that there's some magic artificial intelligence in the sky or in the cloud or something that knows how to translate, and what a wonderful thing that this is available for free. But there's another way to look at it, which is the technically true way: You gather a ton of information from real live translators who have translated phrases… It's huge but very much like Facebook, it's selling people back to themselves… [With translation] you're producing this result that looks magical but in the meantime, the original translators aren't paid for their work… You're actually shrinking the economy."
More efficiency and choices can harm our quality of life (by causing stress, destroying jobs, and making us more materialistic). In his article "Prest-o! Change-o!", technological dystopian James Gleick cites the remote control as a classic example of technology that does not solve the problem "it is meant to solve". Gleick quotes Edward Tenner, a historian of technology, to the effect that the ease of switching channels with the remote control serves to increase distraction for the viewer, so that viewers only become more dissatisfied with the channel they are watching.
New technologies can solve problems of old technologies or just create new problems. The remote control example illustrates this claim as well, for the increase in laziness and dissatisfaction was clearly not a problem before the remote control existed. Gleick also takes social psychologist Robert Levine's example of Indonesians "'whose main entertainment consists of watching the same few plays and dances, month after month, year after year,' and with Nepalese Sherpas who eat the same meals of potatoes and tea through their entire lives. The Indonesians and Sherpas are perfectly satisfied". In this view, the invention of the remote control merely created more problems.
Technologies destroy nature (harming human health and the environment). The need for business replaced community, and the "story online" replaced people as the "soul of the Net". Because information could now be bought and sold, there was not as much communication taking place.
In pop culture
Dystopian themes are in many television shows and video games such as Cyberpunk 2077, The Hunger Games, Cyberpunk: Edgerunners, Blade Runner 2049, Elysium and Titanfall.
See also
Alternate history
Horror fiction
Apocalyptic and post-apocalyptic fiction
Biopunk
Digital dystopia
Dissident
Inner emigration
Kafkaesque
List of dystopian comics
List of dystopian films
List of dystopian literature
List of dystopian works
Lovecraftian horror
Plutocracy
Police state
Self-fulfilling prophecy
Social science fiction
Societal collapse
Soft science fiction
References
See also Gregory Claeys, "When Does Utopianism Produce Dystopia?", in: Zsolt Czigányik, ed., Utopian Horizons. Utopia and Ideology – The Interaction of Political and Utopian Thought (Budapest: CEU Press, 2016), pp. 41–61.
External links
Dystopia Tracker, predictions about the future and their realisations in real life.
Dystopic, dystopian fiction and its place in reality.
Dystopias, in The Encyclopedia of Science Fiction.
Climate Change Dystopia, discusses current popularity of the dystopian genre.
Alexandru Bumbas, Penser l'anachronisme comme moteur esthétique de la dystopie théâtrale: quelques considérations sur Bond, Barker, Gabily, et Delbo (In French)
Science fiction themes
Speculative fiction
Suffering
Nutrient
A nutrient is a substance used by an organism to survive, grow and reproduce. The requirement for dietary nutrient intake applies to animals, plants, fungi and protists. Nutrients can be incorporated into cells for metabolic purposes or excreted by cells to create non-cellular structures such as hair, scales, feathers, or exoskeletons. Some nutrients can be metabolically converted into smaller molecules in the process of releasing energy, as with carbohydrates, lipids, proteins and fermentation products (ethanol or vinegar), leading to end-products of water and carbon dioxide. All organisms require water. Essential nutrients for animals are the energy sources, some of the amino acids that are combined to create proteins, a subset of fatty acids, vitamins and certain minerals. Plants require more diverse minerals absorbed through roots, plus carbon dioxide and oxygen absorbed through leaves. Fungi live on dead or living organic matter and meet nutrient needs from their host.
Different types of organisms have different essential nutrients. Ascorbic acid (vitamin C) is essential to humans and some animal species, but most other animals and many plants are able to synthesize it. Nutrients may be organic or inorganic: organic compounds include most compounds containing carbon, while all other chemicals are inorganic. Inorganic nutrients include iron, selenium, and zinc, while organic nutrients include protein, fats, sugars and vitamins.
A classification used primarily to describe nutrient needs of animals divides nutrients into macronutrients and micronutrients. Consumed in relatively large amounts (grams or ounces), macronutrients (carbohydrates, fats, proteins, water) are primarily used to generate energy or to incorporate into tissues for growth and repair. Micronutrients are needed in smaller amounts (milligrams or micrograms); they have subtle biochemical and physiological roles in cellular processes, like vascular functions or nerve conduction. Inadequate amounts of essential nutrients, or diseases that interfere with absorption, result in a deficiency state that compromises growth, survival and reproduction. Consumer advisories for dietary nutrient intakes, such as the United States Dietary Reference Intake, are based on the amount required to prevent deficiency and provide macronutrient and micronutrient guides for both lower and upper limits of intake. In many countries, regulations require that food product labels display information about the amount of any macronutrients and micronutrients present in the food in significant quantities. Nutrients in larger quantities than the body needs may have harmful effects. Edible plants also contain thousands of compounds generally called phytochemicals, which have unknown effects on disease or health, including a diverse class with non-nutrient status called polyphenols, which remain poorly understood as of 2024.
Types
Macronutrients
Macronutrients are defined in several ways.
The chemical elements humans consume in the largest quantities are carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, summarized as CHNOPS.
The chemical compounds that humans consume in the largest quantities and that provide bulk energy are classified as carbohydrates, proteins, and fats. Water must also be consumed in large quantities but does not provide caloric value.
Calcium, sodium, potassium, magnesium, and chloride ions, along with phosphorus and sulfur, are listed with macronutrients because they are required in large quantities compared to micronutrients, i.e., vitamins and other minerals, the latter often described as trace or ultratrace minerals.
Macronutrients provide energy:
Carbohydrates are compounds made up of types of sugar. Carbohydrates are classified according to their number of sugar units: monosaccharides (such as glucose and fructose), disaccharides (such as sucrose and lactose), oligosaccharides, and polysaccharides (such as starch, glycogen, and cellulose).
Proteins are organic compounds that consist of amino acids joined by peptide bonds. Since the body cannot manufacture some of the amino acids (termed essential amino acids), the diet must supply them. Through digestion, proteins are broken down by proteases back into free amino acids.
Fats consist of a glycerin molecule with three fatty acids attached. Fatty acid molecules contain a -COOH group attached to unbranched hydrocarbon chains connected by single bonds alone (saturated fatty acids) or by both double and single bonds (unsaturated fatty acids). Fats are needed for construction and maintenance of cell membranes, to maintain a stable body temperature, and to sustain the health of skin and hair. Because the body does not manufacture certain fatty acids (termed essential fatty acids), they must be obtained through one's diet.
Ethanol is not an essential nutrient, but it does provide calories. The United States Department of Agriculture uses a fixed energy value per gram of alcohol for calculating food energy. For distilled spirits, a standard serving in the U.S. at 40% ethanol (80 proof) works out to about 14 grams of alcohol and 98 calories (a worked example of this arithmetic is sketched below).
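The arithmetic behind these figures can be made explicit with a minimal sketch in Python. It is illustrative only: the per-gram energy value is inferred from the 98 calories per 14 grams quoted above (about 7 kcal per gram), while the 44 ml serving volume and the ethanol density of roughly 0.789 g/ml are outside assumptions, not values stated in this article.

# Minimal sketch (illustrative, not from this article): estimate the grams of
# alcohol and the food energy contributed by the alcohol in a drink.
# Assumptions: ethanol density ~0.789 g/ml and ~7 kcal per gram of alcohol
# (the latter inferred from the 98 kcal / 14 g figures quoted above).

KCAL_PER_GRAM_ALCOHOL = 98 / 14        # ~7 kcal per gram, implied by the text
ETHANOL_DENSITY_G_PER_ML = 0.789       # assumed density of ethanol

def alcohol_energy(serving_ml: float, abv_fraction: float) -> tuple[float, float]:
    """Return (grams of alcohol, kcal from alcohol) for a single drink."""
    grams = serving_ml * abv_fraction * ETHANOL_DENSITY_G_PER_ML
    return grams, grams * KCAL_PER_GRAM_ALCOHOL

# Example: a ~44 ml spirits serving at 40% ABV (80 proof)
grams, kcal = alcohol_energy(44.0, 0.40)
print(f"{grams:.1f} g alcohol, {kcal:.0f} kcal")  # roughly 14 g and just under 100 kcal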
Micronutrients
Micronutrients are essential dietary elements required in varying quantities throughout life to serve metabolic and physiological functions.
Dietary minerals, such as potassium, sodium, and iron, are elements native to Earth, and cannot be synthesized. They are required in the diet in microgram or milligram amounts. As plants obtain minerals from the soil, dietary minerals derive directly from plants consumed or indirectly from edible animal sources.
Vitamins are organic compounds required in microgram or milligram amounts. The importance of each dietary vitamin was first established when it was determined that a disease would develop if that vitamin was absent from the diet.
Essentiality
Essential nutrients
An essential nutrient is a nutrient required for normal physiological function that cannot be synthesized in the body – either at all or in sufficient quantities – and thus must be obtained from a dietary source. Apart from water, which is universally required for the maintenance of homeostasis in mammals, essential nutrients are indispensable for various cellular metabolic processes and for the maintenance and function of tissues and organs. The nutrients considered essential for humans comprise nine amino acids, two fatty acids, thirteen vitamins, fifteen minerals and choline. In addition, there are several molecules that are considered conditionally essential nutrients since they are indispensable in certain developmental and pathological states.
Amino acids
An essential amino acid is an amino acid that is required by an organism but cannot be synthesized de novo by it, and therefore must be supplied in its diet. Out of the twenty standard protein-producing amino acids, nine cannot be endogenously synthesized by humans: phenylalanine, valine, threonine, tryptophan, methionine, leucine, isoleucine, lysine, and histidine.
Fatty acids
Essential fatty acids (EFAs) are fatty acids that humans and other animals must ingest because the body requires them for good health but cannot synthesize them. Only two fatty acids are known to be essential for humans: alpha-linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid).
Vitamins and vitamers
Vitamins occur in a variety of related forms known as vitamers. The vitamers of a given vitamin perform the functions of that vitamin and prevent symptoms of deficiency of that vitamin. Vitamins are those essential organic molecules that are not classified as amino acids or fatty acids. They commonly function as enzymatic cofactors, metabolic regulators or antioxidants. Humans require thirteen vitamins in their diet, most of which are actually groups of related molecules (e.g. vitamin E includes tocopherols and tocotrienols): vitamins A, C, D, E, K, thiamine (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12). The requirement for vitamin D is conditional, as people who get sufficient exposure to ultraviolet light, either from the sun or an artificial source, synthesize vitamin D in the skin.
Minerals
Minerals are the exogenous chemical elements indispensable for life. Although the four elements carbon, hydrogen, oxygen, and nitrogen (CHON) are essential for life, they are so plentiful in food and drink that these are not considered nutrients and there are no recommended intakes for them as minerals. The need for nitrogen is addressed by requirements set for protein, which is composed of nitrogen-containing amino acids. Sulfur is essential, but again does not have a recommended intake. Instead, recommended intakes are identified for the sulfur-containing amino acids methionine and cysteine.
The essential mineral nutrients for humans, listed in order of Recommended Dietary Allowance (expressed as a mass), are potassium, chloride, sodium, calcium, phosphorus, magnesium, iron, zinc, manganese, copper, iodine, chromium, molybdenum, and selenium. Additionally, cobalt is a component of vitamin B12, which is essential. There are other minerals which are essential for some plants and animals, but may or may not be essential for humans, such as boron and silicon.
Choline
Choline is an essential nutrient. The cholines are a family of water-soluble quaternary ammonium compounds. Choline is the parent compound of the cholines class, consisting of ethanolamine having three methyl substituents attached to the amino function. Healthy humans fed artificially composed diets that are deficient in choline develop fatty liver, liver damage, and muscle damage. Choline was not initially classified as essential because the human body can produce choline in small amounts through phosphatidylcholine metabolism.
Conditionally essential
Conditionally essential nutrients are certain organic molecules that can normally be synthesized by an organism, but under certain conditions in insufficient quantities. In humans, such conditions include premature birth, limited nutrient intake, rapid growth, and certain disease states. Inositol, taurine, arginine, glutamine and nucleotides are classified as conditionally essential and are particularly important in neonatal diet and metabolism.
Non-essential
Non-essential nutrients are substances within foods that can have a significant impact on health. Dietary fiber is not absorbed in the human digestive tract. Soluble fiber is metabolized to butyrate and other short-chain fatty acids by bacteria residing in the large intestine. Soluble fiber is marketed as serving a prebiotic function with claims for promoting "healthy" intestinal bacteria.
Non-nutrients
Ethanol (C2H5OH) is not an essential nutrient, but it does supply food energy. For spirits (vodka, gin, rum, etc.) a standard serving in the United States at 40% ethanol (80 proof) contains about 14 grams of alcohol; at 50% alcohol, about 17.5 grams. Wine and beer contain a similar amount of ethanol per serving, but these beverages also contribute to food energy intake from components other than ethanol. According to the U.S. Department of Agriculture, based on NHANES 2013–2014 surveys, women ages 20 and up consume on average 6.8 grams of alcohol per day and men consume on average 15.5 grams per day. Ignoring the non-alcohol contribution of those beverages, these averages translate into modest ethanol contributions to daily food energy intake (a rough conversion is sketched below). Alcoholic beverages are considered empty calorie foods because, while providing energy, they contribute no essential nutrients.
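As a rough cross-check of these figures, the sketch below converts the NHANES average gram intakes into approximate daily food energy. It is illustrative only: the ~7 kcal per gram value is the one implied earlier in this article (98 kcal from 14 g), and the 2000 kcal reference diet used for the percentage is an assumption, not a figure from this article.

# Illustrative sketch (assumptions noted above): convert average daily alcohol
# intake in grams to approximate food energy and its share of a reference diet.

KCAL_PER_GRAM_ALCOHOL = 7.0   # approximate, implied by 98 kcal / 14 g above
REFERENCE_DIET_KCAL = 2000    # assumed reference daily intake for the percentage

average_intake_g = {"women (20+)": 6.8, "men (20+)": 15.5}  # NHANES 2013-2014 averages
for group, grams in average_intake_g.items():
    kcal = grams * KCAL_PER_GRAM_ALCOHOL
    share = 100 * kcal / REFERENCE_DIET_KCAL
    print(f"{group}: ~{kcal:.0f} kcal/day from ethanol ({share:.1f}% of {REFERENCE_DIET_KCAL} kcal)")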
By definition, phytochemicals include all nutritional and non-nutritional components of edible plants. Included as nutritional constituents are provitamin A carotenoids, whereas those without nutrient status are diverse polyphenols, flavonoids, resveratrol, and lignans that are present in numerous plant foods. Some phytochemical compounds are under preliminary research for their potential effects on human diseases and health. However, the qualification for nutrient status of compounds with poorly defined properties in vivo is that they must first be defined with a Dietary Reference Intake level to enable accurate food labeling, a condition not established for most phytochemicals that are claimed to provide antioxidant benefits.
Deficiencies and toxicity
See Vitamin, Mineral (nutrient), Protein (nutrient)
An inadequate amount of a nutrient is a deficiency. Deficiencies can be due to several causes, including an inadequacy in nutrient intake, called a dietary deficiency, or any of several conditions that interfere with the utilization of a nutrient within an organism. Some of the conditions that can interfere with nutrient utilization include problems with nutrient absorption, substances that cause a greater-than-normal need for a nutrient, conditions that cause nutrient destruction, and conditions that cause greater nutrient excretion. Nutrient toxicity occurs when excess consumption of a nutrient does harm to an organism.
In the United States and Canada, recommended dietary intake levels of essential nutrients are based on the minimum level that "will maintain a defined level of nutriture in an individual", a definition somewhat different from that used by the World Health Organization and Food and Agriculture Organization of a "basal requirement to indicate the level of intake needed to prevent pathologically relevant and clinically detectable signs of a dietary inadequacy".
In setting human nutrient guidelines, government organizations do not necessarily agree on amounts needed to avoid deficiency or maximum amounts to avoid the risk of toxicity. For example, for vitamin C, recommended intakes range from 40 mg/day in India to 155 mg/day for the European Union. The table below shows U.S. Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals, PRIs for the European Union (same concept as RDAs), followed by what three government organizations deem to be the safe upper intake. RDAs are set higher than EARs to cover people with higher-than-average needs. Adequate Intakes (AIs) are set when there is insufficient information to establish EARs and RDAs. Countries establish tolerable upper intake levels, also referred to as upper limits (ULs), based on amounts that cause adverse effects. Governments are slow to revise information of this nature. For the U.S. values, except calcium and vitamin D, all data date from 1997 to 2004.
* The daily recommended amounts of niacin and magnesium are higher than the tolerable upper limit because, for both nutrients, the ULs identify the amounts which will not increase risk of adverse effects when the nutrients are consumed as a serving of a dietary supplement. Magnesium supplementation above the UL may cause diarrhea. Supplementation with niacin above the UL may cause flushing of the face and a sensation of body warmth. Each country or regional regulatory agency decides on a safety margin below the level at which symptoms may occur, so the ULs may differ based on source.
EAR: U.S. Estimated Average Requirements.
RDA: U.S. Recommended Dietary Allowances; higher for adults than for children, and may be even higher for women who are pregnant or lactating.
AI: U.S. Adequate Intake; AIs established when there is not sufficient information to set EARs and RDAs.
PRI: Population Reference Intake is the European Union equivalent of RDA; higher for adults than for children, and may be even higher for women who are pregnant or lactating. For thiamin and niacin, the PRIs are expressed as amounts per megajoule (239 kilocalories) of food energy consumed.
Upper Limit: Tolerable upper intake levels.
ND: ULs have not been determined.
NE: EARs, PRIs or AIs have not yet been established or will not be (the EU does not consider chromium an essential nutrient).
Plant
Plant nutrients consist of more than a dozen minerals absorbed through roots, plus carbon dioxide and oxygen absorbed or released through leaves. All organisms obtain all their nutrients from the surrounding environment.
Plants absorb carbon, hydrogen, and oxygen from air and soil as carbon dioxide and water. Other nutrients are absorbed from soil (exceptions include some parasitic or carnivorous plants). Counting these, there are 17 important nutrients for plants. These are the macronutrients nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg), carbon (C), oxygen (O) and hydrogen (H), and the micronutrients iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo) and nickel (Ni). In addition to carbon, hydrogen, and oxygen, nitrogen, phosphorus, and sulfur are also needed in relatively large quantities. Together, the "Big Six" are the elemental macronutrients for all organisms.
They are sourced from inorganic matter (for example, carbon dioxide, water, nitrates, phosphates, sulfates, and diatomic molecules of nitrogen and, especially, oxygen) and organic matter (carbohydrates, lipids, proteins).
See also
References
External links
USDA. Dietary Reference Intakes
Chemical oceanography
Ecology
Edaphology
Biology and pharmacology of chemical elements
Nutrition
Essential nutrients
Theory U
Theory U is a change management method and the title of a book by Otto Scharmer. Scharmer and colleagues at MIT conducted 150 interviews with entrepreneurs and innovators in science, business, and society and then extended the basic principles into a theory of learning and management, which he calls Theory U. The principles of Theory U are suggested to help political leaders, civil servants, and managers break through past unproductive patterns of behavior that prevent them from empathizing with their clients' perspectives and often lock them into ineffective patterns of decision-making.
Some notes about theory U
Fields of attention
Thinking (individual)
Conversing (group)
Structuring (institutions)
Ecosystem coordination (global systems)
Presencing
The author of the Theory U concept expresses it as a process or journey, also described as Presencing, usually depicted as a U-shaped diagram (of which there are numerous variants).
At the core of the "U" theory is presencing: sensing + presence. According to The Learning Exchange, Presencing is a journey with five movements.
On that journey, at the bottom of the U, lies an inner gate that requires us to drop everything that isn't essential. This process of letting-go (of our old ego and self) and letting-come (our highest future possibility: our Self) establishes a subtle connection to a deeper source of knowing. The essence of presencing is that these two selves – our current self and our best future self – meet at the bottom of the U and begin to listen and resonate with each other. Once a group crosses this threshold, nothing remains the same. Individual members and the group as a whole begin to operate with a heightened level of energy and sense of future possibility. Often they then begin to function as an intentional vehicle for an emerging future.
The core elements are shown below.
"Moving down the left side of the U is about opening up and dealing with the resistance of thought, emotion, and will; moving up the right side is about intentionally reintegrating the intelligence of the head, the heart, and the hand in the context of practical applications".
Leadership capacities
According to Scharmer, a value created by journeying through the "U" is to develop seven essential leadership capacities:
Holding the space: listen to what life calls you to do (listen to oneself, to others and make sure that there is space where people can talk)
Observing: Attend with your mind wide open (observe without your voice of judgment, effectively suspending past cognitive schema)
Sensing: Connect with your heart and facilitate the opening process (i.e. see things as interconnected wholes)
Presencing: Connect to the deepest source of your self and will and act from the emerging whole
Crystallizing: Access the power of intention (ensure a small group of key people commits itself to the purpose and outcomes of the project)
Prototyping: Integrating head, heart, and hand (one should act and learn by doing, avoiding the paralysis of inaction, reactive action, over-analysis, etc.)
Performing: Playing the "macro violin" (i.e. find the right leaders, find appropriate social technology to get a multi-stakeholder project going).
The sources of Theory U include interviews with 150 innovators and thought leaders on management and change. The work of Brian Arthur, Francisco Varela, Peter Senge, Ed Schein, Joseph Jaworski, Arawana Hayashi, Eleanor Rosch, Friedrich Glasl, Martin Buber, Rudolf Steiner and Johann Wolfgang von Goethe has been particularly influential. Artists were represented in the project from 2001 to 2010 by Andrew Campbell, whose work was given a separate index page linked to the original project site: https://web.archive.org/web/20050404033150/http://www.dialogonleadership.org/indexPaintings.html
Today, Theory U constitutes a body of leadership and management praxis drawing from a variety of sources and more than 20 years of elaboration by Scharmer and colleagues. Theory U has been translated into 20 languages and is used in change processes worldwide.
Meditation teacher Arawana Hayashi has explained how she considers Theory U relevant to "the feminine principle".
Earlier work: U-procedure
The earlier work by Glasl involved a sociotechnical, Goethean and anthroposophical process involving a few or many co-workers, managers and/or policymakers. It proceeded from a phenomenological diagnosis of the present state of the organisation to plans for the future. The authors described a process in a U formation consisting of three levels (technical and instrumental subsystem, social subsystem and cultural subsystem) and seven stages, beginning with the observation of organisational phenomena, workflows, resources etc., and concluding with specific decisions about desired future processes and phenomena. The method draws on the Goethean techniques described by Rudolf Steiner, transforming observations into intuitions and judgements about the present state of the organisation and decisions about the future. The three stages represent explicitly recursive reappraisals at progressively advanced levels of reflective, creative and intuitive insight (epistemologies), thereby enabling more radically systemic intervention and redesign. The stages are: phenomena – picture (a qualitative metaphoric visual representation) – idea (the organising idea or formative principle) – and judgement (does this fit?). The first three are then reflexively replaced by better alternatives (new idea --> new image --> new phenomena) to form the new design. Glasl published the method in Dutch (1975), German (1975, 1994) and English (1997).
In contrast to that earlier work on the U procedure, which assumes a set of three subsystems in the organization that need to be analyzed in a specific sequence, Theory U starts from a different epistemological view that is grounded in Varela's approach to neurophenomenology. It focuses on the process of becoming aware and applies to all levels of systems change. Theory U contributed to advancing organizational learning and systems thinking tools towards an awareness-based view of systems change that blends systems thinking with systems sensing. On the left-hand side of the U the process is going through the three main "gestures" of becoming aware that Francisco Varela spelled out in his work (suspension, redirection, letting-go). On the right-hand side of the U this process extends towards actualizing the future that is wanting to emerge (letting come, enacting, embodying).
Criticism
Sociologist Stefan Kühl criticizes Theory U as a management fashion on three main points. First, while Theory U claims to create change on all levels, including the level of the individual "self" and the institutional level, case studies mainly focus on clarifying the positions of individuals in groups or teams. Apart from the idea of participating in online courses on Theory U, the theory remains silent on how broad organisational or societal changes may take place. Secondly, Theory U, like many management fashions, neglects structural conflicts of interest, for instance between groups, organisations and classes. While it makes sense for top management to emphasize common values, visions and the community of all staff externally, Kühl believes this to be problematic if organisations internally believe too strongly in this community, as this may prevent the articulation of conflicting interests and therefore organisational learning processes. Finally, the five-phase model of Theory U, like other cyclical (but less esoteric) management models such as PDCA, is a gross simplification of decision-making processes in organisations, which are often wilder, less structured and more complex. Kühl argues that Theory U may be useful as it allows management to make decisions despite unsure knowledge and encourages change, but expects that Theory U will lose its glamour.
See also
Appreciative inquiry
Art of Hosting
Decision cycle
Learning cycle
OODA loop
V-Model
References
External links
C. Otto Scharmer Home Page
Presencing Home Page
The U-Process for Discovery
Change management
Phenotypic trait
A phenotypic trait, simply trait, or character state is a distinct variant of a phenotypic characteristic of an organism; it may be either inherited or determined environmentally, but typically occurs as a combination of the two. For example, having eye color is a character of an organism, while blue, brown and hazel versions of eye color are traits. The term trait is generally used in genetics, often to describe phenotypic expression of different combinations of alleles in different individual organisms within a single population, such as the famous purple vs. white flower coloration in Gregor Mendel's pea plants. By contrast, in systematics, the term character state is employed to describe features that represent fixed diagnostic differences among taxa, such as the absence of tails in great apes, relative to other primate groups.
Definition
A phenotypic trait is an obvious, observable, and measurable characteristic of an organism; it is the expression of genes in an observable way. An example of a phenotypic trait is a specific hair color or eye color. Underlying genes, which make up the genotype, determine the hair color, but the hair color observed is the phenotype. The phenotype depends on the genetic make-up of the organism and is also influenced by the environmental conditions to which the organism is subjected across its ontogenetic development, including various epigenetic processes. Regardless of the degree of influence of genotype versus environment, the phenotype encompasses all of the characteristics of an organism, including traits at multiple levels of biological organization, ranging from behavior and life-history traits (e.g., litter size), through morphology (e.g., body height and composition), physiology (e.g., blood pressure), cellular characteristics (e.g., membrane lipid composition, mitochondrial densities), components of biochemical pathways, and even messenger RNA.
Genetic origin of traits in diploid organisms
Different phenotypic traits are caused by different forms of genes, or alleles, which arise by mutation in a single individual and are passed on to successive generations.
Biochemistry of dominance and extensions to expression of traits
The biochemistry of the intermediate proteins determines how they interact in the cell. Therefore, biochemistry predicts how different combinations of alleles will produce varying traits.
Extended expression patterns seen in diploid organisms include facets of incomplete dominance, codominance, and multiple alleles. Incomplete dominance is the condition in which neither allele dominates the other in the heterozygote; instead, the phenotype of the heterozygote is intermediate, so the presence of each allele can be inferred. Codominance refers to the allelic relationship that occurs when two alleles are both expressed in the heterozygote, and both phenotypes are seen simultaneously. Multiple alleles refers to the situation when there are more than two common alleles of a particular gene. Blood groups in humans are a classic example: the ABO blood group proteins are important in determining blood type in humans, and this is determined by different alleles of the one locus (a simple illustration follows below).
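A small code sketch can make the ABO genotype-to-phenotype logic concrete, showing both multiple alleles and codominance. It is illustrative only; the allele symbols IA, IB and i follow common textbook notation rather than anything defined in this article.

# Illustrative sketch (not from this article): a toy genotype-to-phenotype map
# for the ABO blood group, showing multiple alleles (IA, IB, i) and codominance
# (IA and IB are both expressed in the IA/IB heterozygote, while both are
# dominant over the recessive i allele).

from itertools import combinations_with_replacement

def abo_phenotype(allele1: str, allele2: str) -> str:
    """Map a pair of ABO alleles to a blood type."""
    alleles = {allele1, allele2}
    if alleles == {"IA", "IB"}:
        return "AB"   # codominance: both alleles are expressed together
    if "IA" in alleles:
        return "A"    # IA is dominant over i
    if "IB" in alleles:
        return "B"    # IB is dominant over i
    return "O"        # i/i homozygote

# All six possible genotypes from the three common alleles
for genotype in combinations_with_replacement(["IA", "IB", "i"], 2):
    print("/".join(genotype), "->", abo_phenotype(*genotype))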
Continuum versus categorical traits
Schizotypy is an example of a psychological phenotypic trait found in schizophrenia-spectrum disorders. Studies have shown that gender and age influence the expression of schizotypal traits. For instance, certain schizotypal traits may develop further during adolescence, whereas others stay the same during this period.
See also
Allometric engineering of traits
Character displacement
Eye color
Phene
Phenotype
Race (biology)
Skill
Citations
References
Classical genetics
Water security
The aim of water security is to make the most of water's benefits for humans and ecosystems. The second aim is to limit the risks of destructive impacts of water to an acceptable level. These risks include, for example, too much water (flood), too little water (drought and water scarcity) or poor quality (polluted) water. People who live with a high level of water security always have access to "an acceptable quantity and quality of water for health, livelihoods and production". For example, access to water, sanitation and hygiene services is one part of water security. Some organizations use the term water security more narrowly for water supply aspects only.
Decision makers and water managers aim to reach water security goals that address multiple concerns. These outcomes can include increasing economic and social well-being while reducing risks tied to water. There are linkages and trade-offs between the different outcomes. Planners often consider water security effects for varied groups when they design climate change reduction strategies.
Three main factors determine how difficult or easy it is for a society to sustain its water security. These include the hydrologic environment, the socio-economic environment, and future changes due to the effects of climate change. Decision makers may assess water security risks at varied levels. These range from the household to community, city, basin, country and region.
The opposite of water security is water insecurity. Water insecurity is a growing threat to societies. The main factors contributing to water insecurity are water scarcity, water pollution and low water quality due to climate change impacts. Others include poverty, destructive forces of water, and disasters that stem from natural hazards. Climate change affects water security in many ways. Changing rainfall patterns, including droughts, can have a big impact on water availability. Flooding can worsen water quality. Stronger storms can damage infrastructure, especially in the Global South.
There are different ways to deal with water insecurity. Science and engineering approaches can increase the water supply or make water use more efficient. Financial and economic tools can include a safety net to ensure access for poorer people. Management tools such as demand caps can improve water security. Other approaches work on strengthening institutions and information flows, improving water quality management, and increasing investment in water infrastructure. Improving the climate resilience of water and hygiene services is important. These efforts help to reduce poverty and achieve sustainable development.
There is no single method to measure water security. Metrics of water security roughly fall into two groups: those based on experiences and those based on resources. The former mainly focus on measuring the water experiences of households and human well-being. The latter tend to focus on freshwater stores or water resources security.
The IPCC Sixth Assessment Report found that increasing weather and climate extreme events have exposed millions of people to acute food insecurity and reduced water security. Scientists have observed the largest impacts in Africa, Asia, Central and South America, Small Islands and the Arctic. The report predicted that global warming of 2 °C would expose roughly 1-4 billion people to water stress. It finds 1.5-2.5 billion people live in areas exposed to water scarcity.
Definitions
Broad definition
There are various definitions for the term water security. It emerged as a concept in the 21st century. It is broader than the absence of water scarcity. It differs from the concepts of food security and energy security. Whereas those concepts cover reliable access to food or energy, water security covers not only too little water but also the risks posed by too much of it.
One definition of water security is "the reliable availability of an acceptable quantity and quality of water for health, livelihoods and production, coupled with an acceptable level of water-related risks".
A similar definition of water security by UN-Water is: "the capacity of a population to safeguard sustainable access to adequate quantities of acceptable quality water for sustaining livelihoods, human well-being, and socio-economic development, for ensuring protection against water-borne pollution and water-related disasters, and for preserving ecosystems in a climate of peace and political stability."
World Resources Institute also gave a similar definition in 2020. "For purposes of this report, we define water security as the capacity of a population to
safeguard sustainable access to adequate quantities of acceptable quality water for sustaining livelihoods, human well-being, and socioeconomic development;
protect against water pollution and water-related disasters; and
preserve ecosystems, upon which clean water availability and other ecosystem services depend."
Narrower definition with a focus on water supply
Some organizations use water security in a more specific sense to refer to water supply only. They do not consider the water-related risks of too much water. For example, the definition of WaterAid in 2012 focuses on water supply issues. They defined water security as "reliable access to water of sufficient quantity and quality for basic human needs, small-scale livelihoods and local ecosystem services, coupled with a well managed risk of water-related disasters". The World Water Council also uses this more specific approach with a focus on water supply. "Water security refers to the availability of water, in adequate quantity and quality, to sustain all these needs together (social and economic sectors, as well as the larger needs of the planet's ecosystems) – without exceeding its ability to renew."
Relationship with WASH and IWRM
WASH (water, sanitation and hygiene) is an important concept in discussions of water security. Access to WASH services is one part of achieving water security. The relationship works both ways. To be sustainable, WASH services need to address water security issues. For example, WASH relies on water resources that are part of the water cycle. But climate change has many impacts on the water cycle which can threaten water security. There is also growing competition for water. This reduces the availability of water resources in many areas of the world.
Water security incorporates ideas and concepts to do with the sustainability, integration and adaptiveness of water resource management. In the past, experts used terms such as integrated water resources management (IWRM) or sustainable water management for this.
Related concepts
Water risk
Water risk refers to the possibility of problems to do with water. Examples are water scarcity, water stress, flooding, infrastructure decay and drought. There is an inverse relationship between water risk and water security: as water risk increases, water security decreases. Water risk is complex and multilayered. It includes risks of flooding and drought, which can lead to infrastructure failure and worsen hunger. When these disasters take place, they result in water scarcity or other problems. The potential economic effects of water risk are significant: water risks threaten entire industries, such as the food and beverage sector, agriculture, oil and gas, and utilities. Agriculture uses 69% of total freshwater in the world, so this industry is especially vulnerable to water stress.
Risk is a combination of hazard, exposure and vulnerability. Examples of hazards are droughts, floods and declines in water quality. Poor infrastructure and weak governance lead to high exposure to risk.
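The relationship between hazard, exposure and vulnerability can be made concrete with a small worked example. The sketch below is illustrative only: the multiplicative formula, the 0-1 scaling of each component and the example values are assumptions made here for demonstration, not a methodology from the sources discussed in this article.

```python
# Illustrative sketch: combining hazard, exposure and vulnerability into a
# single water-risk score. The multiplicative form and the 0-1 scaling are
# assumptions for demonstration, not an official risk methodology.

def water_risk(hazard: float, exposure: float, vulnerability: float) -> float:
    """Return a composite risk score in [0, 1] from three components in [0, 1]."""
    for name, value in (("hazard", hazard), ("exposure", exposure),
                        ("vulnerability", vulnerability)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1, got {value}")
    return hazard * exposure * vulnerability

# Hypothetical example: a severe drought hazard in a region with ageing
# infrastructure (high exposure) and limited coping capacity (high vulnerability).
print(water_risk(hazard=0.8, exposure=0.7, vulnerability=0.6))  # 0.336
```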
The financial sector is becoming more aware of the potential impacts of water risk and the need for its proper management. By 2025, water risk will threaten $145 trillion in assets under management.
To control water risk, companies can develop water risk management plans. Stakeholders within financial markets can use these plans to measure company environmental, social and governance (ESG) performance. They can then identify leaders in water risk management. The World Resources Institute has developed an online water data platform named Aqueduct for risk assessment and water management. China Water Risk is a nonprofit dedicated to understanding and managing water risk in China. The World Wildlife Fund has a Water Risk Filter that helps companies assess and respond to water risk with scenarios for 2030 and 2050.
Understanding risk is part of water security policy. But it is also important to take social equity considerations more into account.
There is no wholly accepted theory or mathematical model for determining or managing water risk. Instead, managers use a range of theories, models and technologies to understand the trade-offs that exist in responding to risk.
Water conflict
Desired outcomes
There are three groups of water security outcomes: economic, environmental and equity (or social) outcomes. Outcomes are results that happen, or that people want to see happen, as a consequence of policy and management:
Economic outcomes: Sustainable growth which takes changing water needs and threats into account. Sustainable growth includes job creation, increased productivity and standards of living.
Environmental outcomes: Quality and availability of water for the ecosystems services that depend on this water resource. Loss of freshwater biodiversity and depletion of groundwater are examples of negative environmental outcomes.
Equity or social outcomes: Inclusive services so that consumers, industry and agriculture can access safe, reliable, sufficient and affordable water. These also mean they can dispose of wastewater safely. This area includes gender issues, empowerment, participation and accountability.
The major focus areas for water security and its outcomes are using water to increase economic and social welfare, moving towards long-term sustainability, and reducing risks tied to water. Decision makers and water managers must consider the linkages and trade-offs between these types of outcomes.
Improving water security is a key factor in achieving growth, sustainable development and poverty reduction. Water security is also about social justice and the fair distribution of environmental benefits and harms. Sustainable development can help reduce poverty and increase living standards. This is most likely to benefit those affected by the impacts of insecure water resources in a region, especially women and children.
Water security is important for attaining most of the 17 United Nations Sustainable Development Goals (SDGs). This is because access to adequate and safe water is a precondition for meeting many of the individual goals. It is also important for attaining development that is resilient to climate change. Planners take note of water security outcomes for various groups in society when they design strategies for climate change adaptation.
Determining factors
Three main factors determine the ability of a society to sustain water security:
Hydrologic environment
Socio-economic environment
Changes in the future environment (due to the effects of climate change)
Hydrologic environment
The hydrologic environment is important for water security. The term hydrologic environment refers to the "absolute level of water resource availability" as well as how much it varies in time and location. Inter-annual variability is variation from one year to the next; intra-annual variability is variation from one season to the next. Variation by location is referred to as spatial distribution. Scholars distinguish between hydrologic environments that are easy to manage and ones that are difficult.
An easy-to-manage hydrologic environment is one with low rainfall variability, where rain is distributed throughout the year and perennial river flows are sustained by groundwater base flows. For example, many of the world's industrialized nations have a hydrologic environment that they can manage quite easily. This has helped them achieve water security early in their development.
A difficult-to-manage hydrologic environment is one with absolute water scarcity, such as deserts, or low-lying lands prone to severe flood risk. Regions where rainfall varies a lot from one season to the next, or from one year to the next, are also likely to face water security challenges. The term for the latter is high inter-annual climate variability. An example is East Africa, where there have been prolonged droughts every two to three years since 1999. It is no coincidence that most of the world's developing countries have difficult hydrologies to manage and have not achieved water security.
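One way to make the notions of inter-annual and intra-annual variability concrete is to compute simple statistics from monthly rainfall records. The sketch below uses the coefficient of variation as the metric and entirely synthetic data; both choices are assumptions for illustration rather than a standard assessment method.

```python
# Illustrative sketch: quantifying how "difficult" a hydrologic environment is
# by computing inter-annual variability (year-to-year totals) and intra-annual
# variability (month-to-month within a year) from monthly rainfall records.
# The coefficient of variation (standard deviation / mean) is one simple
# metric; the rainfall figures below are synthetic.
from statistics import mean, pstdev

def coefficient_of_variation(values):
    m = mean(values)
    return pstdev(values) / m if m else float("nan")

# monthly_rainfall[year] = twelve monthly totals in mm (synthetic example data)
monthly_rainfall = {
    2019: [10, 15, 80, 120, 90, 20, 5, 5, 30, 70, 40, 15],
    2020: [5, 10, 40, 60, 50, 10, 0, 0, 20, 90, 60, 20],
    2021: [20, 25, 100, 150, 110, 30, 10, 5, 40, 60, 30, 10],
}

annual_totals = [sum(months) for months in monthly_rainfall.values()]
inter_annual_cv = coefficient_of_variation(annual_totals)
intra_annual_cv = mean(coefficient_of_variation(m) for m in monthly_rainfall.values())

print(f"inter-annual CV: {inter_annual_cv:.2f}")  # year-to-year variability
print(f"intra-annual CV: {intra_annual_cv:.2f}")  # average seasonal variability
```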
The poverty and hydrology hypothesis states that regions with a difficult hydrology remain poor because the respective governments have not been able to make the large investments necessary to achieve water security. Examples of such regions would be those with rainfall variability within one year and across several years. This leads to water insecurity which constrains economic growth. There is a statistical link between increased changes in rainfall patterns and lower per capita incomes.
Socio-economic environment
Relative levels of economic development and equality or inequality are strong determinants of community and household scale water security. Whilst the poverty and hydrology hypothesis suggests that there is a link between poverty and difficult hydrologies, there are many examples of "difficult hydrologies" that have not (yet) resulted in poverty and water insecurity.
Social and economic inequalities are strong drivers of water insecurity, especially at the community and household scales. Gender, race and caste inequalities have all been linked to differential access to water services such as drinking water and sanitation. In particular, women and girls frequently have less access to economic and social opportunities as a direct consequence of being primarily responsible for meeting household water needs. The entire journey from water source to point of use is fraught with hazards largely faced by women and girls. There is strong evidence that improving access to water and sanitation is a good way of addressing such inequalities.
Climate change
Impacts of climate change that are tied to water affect people's water security on a daily basis. They include more frequent and intense heavy precipitation, which affects the frequency, size and timing of floods. Droughts can alter the total amount of freshwater and cause a decline in groundwater storage and a reduction in groundwater recharge. Extreme events can also reduce water quality, and glaciers can melt faster.
Global climate change will probably make it more complex and expensive to ensure water security. It creates new threats and adaptation challenges. This is because climate change leads to increased hydrological variability and extremes. Climate change has many impacts on the water cycle. These result in higher climatic and hydrological variability, which can threaten water security. Changes in the water cycle threaten existing and future water infrastructure. It will be harder to plan investments for future water infrastructure as there are so many uncertainties about future variability for the water cycle. This makes societies more exposed to risks of extreme events linked to water and therefore reduces water security.
It is difficult to predict the effects of climate change on national and local levels. Water security will be affected by sea level rise in low lying coastal areas while populations dependent on snowmelt as their water source will be affected by the recession of glaciers and mountain snow.
Future climate change must be viewed in the context of other existing challenges for water security. These include existing climate variability in areas closer to the equator, population growth and increased demand for water resources. Others include political challenges, increased disaster exposure due to settlement in hazard-prone areas, and environmental degradation. Water demand for irrigation in agriculture will increase due to climate change, because evaporation rates and the rate of water loss from crops will be higher at higher temperatures.
Climate factors have a major effect on water security at various levels. Geographic variability in water availability, reliability of rainfall and vulnerability to droughts, floods and cyclones are inherent hazards that affect development opportunities. These play out at international to intra-basin scales. At local scales, social vulnerability is a factor that increases the risks to water security, no matter the cause. For example, people affected by poverty may have less ability to cope with climate shocks.
Challenges and threats
There are many factors that contribute to low water security. Some examples are:
Water scarcity: Water demand exceeds supply in many regions of the world. This can be due to population growth, higher living standards, general economic expansion and/or greater quantities of water used in agriculture for irrigation.
Increasing water pollution and low levels of wastewater treatment, which is making local water unusable.
Poor planning of water use, poor water management and misuse. These can cause groundwater levels to drop, rivers and lakes to dry out, and local ecosystems to collapse.
Trans-boundary waters and international rivers which belong to several countries. Country borders often do not align with natural watersheds, partly because many international borders reflect boundaries drawn during colonialism.
Climate change. This makes water-related disasters such as droughts and floods more frequent and intense; rising temperatures and sea levels can contaminate freshwater sources.
Water scarcity
A major threat to water security is water scarcity. About 27% of the world's population lived in areas affected by water scarcity in the mid-2010s. This number will likely increase to 42% by 2050.
Water pollution
Water pollution is a threat to water security. It can affect the supply of drinking water and indirectly contribute to water scarcity.
Reduced water quality due to climate change
Weather and its related shocks can affect water quality in several ways. These depend on the local climate and context. Shocks that are linked to weather include water shortages, heavy rain and temperature extremes. They can damage water infrastructure through erosion under heavy rainfall and floods, cause loss of water sources in droughts, and make water quality deteriorate.
Climate change can lower water quality in several ways:
Heavy rainfall can rapidly reduce the water quality in rivers and shallow groundwater. It can also affect water quality in reservoirs, although these effects tend to be slower. Heavy rainfall also impacts groundwater in deeper, unfractured aquifers, but these impacts are less pronounced. Rainfall can increase fecal contamination of water sources.
Floods after heavy rainfall can mix floodwater with wastewater. Pollutants can also reach water bodies through increased surface runoff.
Groundwater quality may deteriorate due to droughts. The pollution in rivers that feed groundwater becomes less diluted. As groundwater levels drop, rivers may lose direct contact with groundwater.
In coastal regions, more saltwater may mix into freshwater aquifers due to sea level rise and more intense storms. This process is called saltwater intrusion.
Warmer water in lakes, oceans, reservoirs and rivers can cause more eutrophication. This results in more frequent harmful algal blooms. Higher temperatures cause problems for water bodies and aquatic ecosystems because warmer water contains less oxygen.
Permafrost thawing leads to an increased flux of contaminants.
Increased meltwater from glaciers may release contaminants. As glaciers shrink or disappear, the positive effect of seasonal meltwater on downstream water quality through dilution is disappearing.
Poverty
People in low-income countries are at greater risk of water insecurity and may also have less resources to mitigate it. This can result in human suffering, sustained poverty, constrained growth and social unrest.
Food and water insecurity pose significant challenges for numerous individuals across the United States. Households respond with labor-intensive coping strategies aimed at conserving water, such as melting ice, earning extra wages, and occasionally incurring debt. Families may also forage for water-based plants and animals as alternative sources of sustenance. Adjusting consumption patterns becomes necessary: rationing servings and prioritizing nutritional value, particularly for vulnerable members such as small children. Households may also substitute more expensive, nutritious food with cheaper alternatives.
Individuals may also consume water from sources considered "stigmatized" by society, such as urine or unfiltered water. Migration is another option: families may foster children to relatives outside famine zones or resettle seasonally or permanently. In some instances, preserving resources involves the difficult decision of abandoning specific family members, whether by withholding resources from non-family members, prioritizing the health of some family members over others, or, in extreme cases, leaving individuals behind. As the climate changes, the impact of food and water insecurity is felt disproportionately, which calls for a re-evaluation of societal misconceptions about those making survival sacrifices. Larger entities, including governments and various organizations, extend assistance based on available resources, which highlights the importance of addressing gaps in specific data.
Destructive forces of water
Water can cause large-scale destruction due to its huge power. This destruction can result from sudden events. Examples are tsunamis, floods or landslides. Events that happen slowly over time such as erosion, desertification or water pollution can also cause destruction.
Other threats
Other threats to water security include:
Disasters caused by natural hazards such as hurricanes, earthquakes, and wildfires. These can damage man-made structures such as dams and fill waterways with debris;
Some climate change mitigation measures which need a lot of water. Bioenergy with carbon capture and storage, afforestation and reforestation may use relatively large amounts of water if done at inappropriate locations. The term for this is a high water footprint.
Terrorism such as water supply terrorism;
Radiation due to a nuclear accident;
New water uses such as hydraulic fracturing for energy resources;
Armed conflict and migration. Migration can be due to water scarcity at the origin or it can lead to more water scarcity at the target destinations.
Management approaches
There are different ways to tackle water insecurity. Science and engineering approaches can increase the water supply or make water use more efficient. Financial and economic tools can be used as a safety net for poorer people. Higher prices may encourage more investments in water systems. Finally, management tools such as demand caps can improve water security. Decision makers invest in institutions, information flows and infrastructure to achieve a high level of water security.
Investment decisions
Institutions
The right institutions are important for improving water security. Institutions govern how decisions can promote or constrain water security outcomes for the poor. Strengthening institutions might involve reallocating risks and duties between the state, market and communities in new ways. This can include performance-based models, development impact bonds, or blended finance from government, donors and users. These finance mechanisms are set up to work jointly with state, private sector and community investors.
Sustainable Development Goal 16 is about peace, justice and strong institutions. It recognizes that strong institutions are a necessary condition for sustainable development, including water security.
Drinking water quality and water pollution are linked, but policymakers often do not address them in a comprehensive way. For example, pollution from industries is often not linked to drinking water quality in developing countries. Monitoring rivers, groundwater and wastewater is important, as it can identify sources of contamination and guide targeted regulatory responses. The WHO has described water safety plans as the most effective means of maintaining a safe supply of drinking water to the public.
Information flows
It is important for institutions to have access to information about water. This helps them with their planning and decision-making. It also helps with tracking how accountable and effective policies are. Investments into climate information tools that are appropriate for the local context are useful. They cover a wide range of temporal and spatial scales. They also respond to regional climate risks tied to water.
Seasonal climate and hydrological forecasts can be useful to prepare for and reduce water security risks. They are especially useful if people can apply them at the local scale. Applying knowledge of how climate anomalies relate to each other over long distances can improve seasonal forecasts for specific regions. These teleconnections are correlations between patterns of rainfall, temperature, and wind speed between distant areas. They are caused by large-scale ocean and atmospheric circulation.
In regions where rainfall varies with the seasons and from year to year, water managers would like to have more accurate seasonal weather forecasts. In some locations the onset of seasonal rainfall is particularly hard to predict, because aspects of the climate system are difficult to describe with mathematical models. For example, the long rains in East Africa, which fall from March to May, have been difficult to simulate with climate models, partly because of the complex topography of the area. When climate models work well they can produce useful seasonal forecasts. Improved understanding of atmospheric processes may allow climate scientists to provide more relevant and localized information to water managers on a seasonal timescale. They could also provide more detailed predictions of the effects of climate change on a longer timeframe.

One example would be seasonal forecasts of rainfall in Ethiopia's Awash river basin. These may become more accurate through a better understanding of how sea surface temperatures in different ocean regions relate to rainfall patterns in this river basin. At a larger regional scale, a better understanding of the relationship between pressure systems in the Indian Ocean and the South Atlantic on the one hand, and wind speeds and rainfall patterns in the Greater Horn of Africa on the other, would be helpful. This kind of scientific analysis may contribute to improved representation of this region in climate models to assist development planning. It could also guide water allocation planning in the river basin and emergency response plans for future events of water scarcity and flooding.
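Teleconnections of the kind described above are often quantified, as a first approximation, as correlations between a climate index and local seasonal rainfall. The sketch below computes a Pearson correlation between a hypothetical sea surface temperature anomaly series and seasonal rainfall totals; both series are synthetic stand-ins, and real forecasting work uses long observational records and far more careful statistics.

```python
# Illustrative sketch: quantifying a teleconnection as the Pearson correlation
# between a sea surface temperature (SST) anomaly index and seasonal rainfall
# in a target river basin. Both series below are synthetic stand-ins.
from statistics import correlation  # available in Python 3.10+

# One value per year: SST anomaly in an ocean region (deg C) and the seasonal
# rainfall total in the basin (mm). These numbers are invented for the example.
sst_anomaly = [0.3, -0.5, 0.8, 0.1, -0.9, 0.6, -0.2, 0.4]
seasonal_rain = [420, 310, 500, 390, 260, 460, 350, 430]

r = correlation(sst_anomaly, seasonal_rain)
print(f"Pearson correlation: {r:.2f}")

# A strong positive r would suggest that warm anomalies in this ocean region
# tend to coincide with wetter seasons in the basin, something forecasters
# could use, with many caveats, to inform seasonal outlooks.
```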
Infrastructure
Water infrastructure serves to access, store, regulate, move and conserve water. Several types of assets carry out these functions. Natural assets include lakes, rivers, wetlands, aquifers and springs. Engineered assets include bulk water management infrastructure such as dams. Examples include:
Improved water storage: using natural water storage systems such as aquifers and wetlands or built infrastructure such as storage tanks and dams.
Using new water sources to add to the existing water supplies. This can be done through water reuse, desalination, rainwater harvesting and groundwater pumping.
Embankments (or levee or dike) for flood protection.
Public and private spending on water infrastructure and supporting institutions must be well balanced. They are likely to evolve over time. This is important to avoid unplanned social and environmental costs from building new facilities.
For example, in the case of Africa, investment in groundwater use is an option for increasing water security and for climate change adaptation. Water security in African countries could benefit from the distribution of groundwater storage and recharge on the continent. Recharge is the process by which water moves into groundwater. Many countries that have low recharge have substantial groundwater storage, while countries with low storage typically have high, regular recharge.
Consideration of scales
People manage water security risks at different spatial scales. These range from the household to community, town, city, basin and region. At the local scale, actors include county governments, schools, water user groups, local water providers and the private sector. At the next larger scale there are basin and national level actors. These actors help to identify any constraints with regards to policy, institutions and investments. Lastly, there are global actors such as the World Bank, UNICEF, FCDO, WHO and USAID. They help to develop suitable service delivery models.
The physical geography of a country indicates the appropriate scale for managing water security risks. Even within a country, the hydrologic environment may vary a lot, as shown for example by the variations in seasonal rainfall across Ethiopia.
Reducing inequalities in water security
Inequalities with regards to water security within a society have structural and historical roots. They can affect people at different scales. These range from the household, to the community, town, river basin or the region. High risk social groups and regions can be identified during political debates but are often ignored. Water inequality is often tied to gender in low-income countries. At the household level, women are often the "water managers". But they have limited choices over water and related issues.
Improving climate resilience of water and sanitation services
Many institutions are working to develop WASH services that are resilient to climate change.
Measurement tools
There is no single way to measure water security, and there are no standard indicators for it. That is because it is a concept that focuses on outcomes, and the outcomes regarded as important can change depending on the context and stakeholders.
Instead, it is common to compare relative levels of water security by using metrics for certain aspects of it. For example, the Global Water Security Index includes metrics on the following (a simple aggregation sketch follows the list):
availability (water scarcity index, drought index, groundwater depletion);
accessibility to water services (access to sanitation and drinking water);
safety and quality (water quality index, global flood frequency);
management (World Governance Index, transboundary legal framework, transboundary political tension).
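As a rough illustration of how such an index can be aggregated, the sketch below combines normalized sub-indicators into a single score. The indicator names mirror the dimensions listed above, but the equal weighting and simple averaging are assumptions made here for demonstration; they are not the published Global Water Security Index methodology.

```python
# Illustrative sketch: aggregating normalized sub-indicators (0 = worst,
# 1 = best) into a composite water security score. Equal weights and simple
# weighted averaging are demonstration assumptions, not the published method.

indicators = {
    "availability": 0.55,    # e.g. scarcity, drought, groundwater depletion
    "accessibility": 0.80,   # access to drinking water and sanitation
    "safety_quality": 0.65,  # water quality, flood frequency
    "management": 0.40,      # governance, transboundary arrangements
}

weights = {name: 1 / len(indicators) for name in indicators}  # equal weights

composite = sum(indicators[name] * weights[name] for name in indicators)
print(f"composite water security score: {composite:.2f}")
```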
Scientists have been working on ways to measure water security at a variety of levels. The metrics roughly fall into two groups: those based on experiences and those based on resources. The former mainly focus on measuring the experiences of households and human well-being, while the latter focus on the amount of available freshwater.
The Household Water Insecurity Experiences (HWISE) Scale measures several components of water insecurity at the household level. These include adequacy, reliability, accessibility and safety. This scale can help to identify vulnerable subpopulations and ensure resources are allocated to those in need. It can also measure how effective water policies and projects are.
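Experience-based scales such as HWISE are typically scored by summing household responses to a set of items about recent water-related experiences. The sketch below shows the general idea only: the item wording, the 0-3 scoring and the cutoff are placeholders invented for illustration, and the published HWISE materials should be consulted for the actual items and thresholds.

```python
# Illustrative sketch: scoring an experience-based household water insecurity
# scale. Each item asks how often a problem occurred in a recall period and is
# scored 0 (never) to 3 (often/always). The items and the cutoff below are
# placeholders, not the published HWISE specification.

ITEMS = [
    "worried about not having enough water",
    "changed plans because of water problems",
    "could not wash hands after dirty activities",
    "went to sleep thirsty",
]

def household_score(responses: dict) -> int:
    """Sum item scores (0-3 each); higher totals indicate more water insecurity."""
    return sum(responses[item] for item in ITEMS)

example = {
    "worried about not having enough water": 3,
    "changed plans because of water problems": 2,
    "could not wash hands after dirty activities": 1,
    "went to sleep thirsty": 0,
}

CUTOFF = 4  # hypothetical threshold for flagging a household as water-insecure
score = household_score(example)
print(score, "water-insecure" if score >= CUTOFF else "not flagged")
```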
Global estimates
The IPCC Sixth Assessment Report summarises the current and future water security trends. It says that increasingly frequent weather and climate extremes have led to acute food insecurity and reduced water security for millions of people. The largest impacts are seen in Africa, Asia, Central and South America, Small Islands and the Arctic.
The same report predicted that global warming of 2 °C would expose roughly 1-4 billion people to water stress, depending on regional patterns of climate change and the socio-economic scenarios. On water scarcity, which is one factor in water insecurity, the report finds that 1.5-2.5 billion people live in water-scarce areas.
Water scarcity does not always translate into water insecurity. There are regions with high water security even though they also experience water scarcity. Examples are parts of the United States, Australia and Southern Europe. This is due to efficient water services that have a high level of safety, quality and accessibility. However, even in those regions, groups such as Indigenous peoples tend to have less access to water and face water insecurity at times.
Country examples
Bangladesh
Risks to water security in Bangladesh include:
natural hazards related to the climate (climate hazards)
some impacts of urbanization
impacts from climate change such as changes to precipitation patterns and sea level rise.
The country experiences water security risks in the capital Dhaka as well as in the coastal region. In Dhaka, monsoonal pulses can lead to urban flooding. This can pollute the water supply. A number of processes and events cause water risks for about 20 million people in the coastal regions. These include aquifers that are getting saltier, seasonal water scarcity, fecal contamination, and flooding from the monsoon and from storm surges due to cyclones.
Different types of floods occur in coastal Bangladesh. They are: river floods, tidal floods and storm surge floods due to tropical cyclones. These floods can damage drinking water infrastructure. They can also lead to reduced water quality as well as losses in agricultural and fishery yields. There is a link between water insecurity and poverty in the low-lying areas in the Ganges-Brahmaputra tidal delta plain. Those low-lying areas are embanked areas in coastal Bangladesh.
The government has various programs to reduce risks for people who live in coastal communities. These programs also lead to increases in economic wellbeing. Examples include the "Coastal Embankment Improvement Project" by World Bank in 2013, the BlueGold project in 2012, UNICEF's "Managed Aquifer Recharge" program in 2014 and the Bangladesh Delta Plan in 2014. Such investments in water security aim to increase the continued use and upkeep of water facilities. They can help coastal communities to escape the poverty trap caused by water insecurity.
A program called the "SafePani framework" focuses on how the state shares risks and responsibilities with service providers and communities. This program aims to help decision makers to address climate risks through a process called climate resilient water safety planning. The program is a cooperation between UNICEF and the Government of Bangladesh.
Ethiopia
Ethiopia has two main wet seasons per year. It rains in the spring and summer. These seasonal patterns of rainfall vary a lot across the country. Western Ethiopia has a seasonal rainfall pattern similar to that of the Sahel, with rainfall from February to November (decreasing towards the north) and peak rainfall from June to September. Southern Ethiopia has a rainfall pattern similar to that of East Africa, with two distinct wet seasons each year: February to May, and October to November. Central and eastern Ethiopia receive some rainfall between February and November, with a smaller peak from March to May and a second, higher peak from June to September.
In 2022 Ethiopia had one of the most severe La Niña-induced droughts in the last forty years. It came about due to four consecutive rainy seasons which did not produce enough rain. This drought increased water insecurity for more than 8 million pastoralists and agro-pastoralists in the Somali, Oromia, SNNP and South-West regions. About 7.2 million people needed food aid, and 4.4 million people needed help to access water. Food prices have increased a lot due to the drought conditions. Many people in the affected area have experienced food shortages due to the water insecurity situation.
In the Awash basin in central Ethiopia floods and droughts are common. Agriculture in the basin is mainly rainfed (without irrigation systems). This applies to around 98% of total cropland as of 2012. So changes in rainfall patterns due to climate change will reduce economic activities in the basin. Rainfall shocks have a direct impact on agriculture. A rainfall decrease in the Awash basin could lead to a 5% decline in the basin's overall GDP. The agricultural GDP could even drop by as much as 10%.
Partnerships with the Awash Basin Development Office (AwBDO) and the Ministry of Water, Irrigation and Electricity (MoWIE) have led to the development of new models of water allocation in the Awash basin. This can improve water security for the 18.3 million residents in the basin. With this they will have enough water for their domestic, irrigation and industry needs.
Kenya
Kenya ranked 46th out of 54 African countries in an assessment of water security in 2022. Major water security issues in Kenya include drinking water safety, water scarcity, lack of water storage, poor wastewater treatment, and drought and flood. Large-scale climate patterns influence the rainfall patterns in East Africa. Such climate patterns include the El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). Cooling in the Pacific Ocean during the La Niña phase of ENSO is linked with drier conditions in Kenya. This can lead to drought, as it did in 2016-17. On the other hand, a warmer western Indian Ocean due to a strong positive Indian Ocean Dipole caused extreme flooding in Kenya in 2020.
Around 38% of Kenya's population and 70% of its livestock live in arid and semi-arid lands. These areas have low rainfall which varies a lot from one season to the next. This means that surface water and groundwater resources vary a lot by location and time of year. Residents in Northern Kenya are seeing increased changes in rainfall patterns and more frequent droughts. These changes affect livelihoods in this region where people have been living as migratory herders. They are used to herding livestock with a seasonal migration pattern. More people are now settling in small urban centers, and there is increasing conflict over water and other resources. Water insecurity is a feature of life for both settled and nomadic pastoralists. Women and children bear the burden for fetching water.
Groundwater sources have great potential to improve water supply in Kenya. However, the use of groundwater is limited by poor water quality, limited knowledge, over-pumping of groundwater (known as overdrafting), and saltwater intrusion along coastal areas. Another challenge is the upkeep of groundwater infrastructure, mainly in rural areas.
Ukraine
Between February 2022 and 2024, Russian forces destroyed one-third of Ukraine's freshwater storage. Potable, industrial and irrigation water supplies have been cut across the south and east of the country. Occupation of the southern and eastern regions of Ukraine and the destruction of the Kakhovka Reservoir have all but terminated irrigation. Irrigated cereals and technical crops are now unprofitable, even where irrigation is practicable, not least because of the difficulty of selling and exporting the produce. The strategic development of irrigation should be based on technology that minimizes water costs and on redesigned cultivation systems, for example drip irrigation, diverse crop rotations and a focus on vegetable farming, orchards and viticulture.
External links
International Water Security Network
Water Security (an open source journal that started in 2017)
Bioconservatism
Bioconservatism is a philosophical and ethical stance that emphasizes caution and restraint in the use of biotechnologies, particularly those involving genetic manipulation and human enhancement. The term "bioconservatism" is a portmanteau of the words biology and conservatism.
Critics of bioconservatism, such as Steve Clarke and Rebecca Roache, argue that bioconservatives ground their views primarily in intuition, which can be subject to various cognitive biases. Bioconservatives' reluctance to acknowledge the fragility of their position is seen as a reason for stalled debate.
Bioconservatism is characterized by a belief that technological trends risk compromising human dignity, and by opposition to movements and technologies including transhumanism, human genetic modification, "strong" artificial intelligence, and the technological singularity. Many bioconservatives also oppose the use of technologies such as life extension and preimplantation genetic screening.
Bioconservatives range in political perspective from right-leaning religious and cultural conservatives to left-leaning environmentalists and technology critics. What unifies bioconservatives is skepticism about medical and other biotechnological transformations of the living world. In contrast to bioluddism, the bioconservative perspective typically presents a more focused critique of technological society. It is distinguished by its defense of the natural, framed as a moral category.
Bioconservatism advocates
Bioconservatives seek to counter the arguments made by transhumanists who support the use of human enhancement technologies despite acknowledging the risks they involve. Transhumanists believe that these technologies have the power to radically change what we currently perceive of as a human being, and that they are necessary for future human development. Transhumanist philosophers such as Nick Bostrom believe that genetic modification will be essential to improving human health in the future.
The three major elements of the bioconservative argument, as described by Bostrom, are firstly, that human augmentation is innately degrading and therefore harmful; secondly, that the existence of augmented humans poses a threat to "ordinary humans;" and thirdly, that human augmentation shows a lack of acknowledgement that "not everything in the world is open to any use we may desire or devise." The first two of these elements are secular whilst the last derives "from religious or crypto-religious sentiments."
Michael Sandel
Michael J. Sandel is an American political philosopher and a prominent bioconservative. His article and subsequent book, both titled The Case Against Perfection, concern the moral permissibility of genetic engineering or genome editing. Sandel compares genetic and non-genetic forms of enhancement, pointing out that much non-genetic alteration has largely the same effect as genetic engineering. SAT tutors or study drugs such as Ritalin can have effects similar to minor tampering with natural-born intelligence. Sandel uses such examples to argue that the most important moral issue with genetic engineering is not that the consequences of manipulating human nature will undermine human agency but the perfectionist aspiration behind such a drive to mastery. For Sandel, "the deepest moral objection to enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes.” For example, the parental desire for a child to be of a certain genetic quality is incompatible with the special kind of unconditional love parents should have for their children. He writes “[t]o appreciate children as gifts is to accept them as they come, not as objects of our design or products of our will or instruments of our ambition.”
Sandel insists that consequentialist arguments overlook the principal issue of whether bioenhancement should be aspired to at all. He is attributed with the view that human augmentation should be avoided as it expresses an excessive desire to change oneself and 'become masters of our nature.' For example, in the field of cognitive enhancement, he argues that the moral question we should be concerned with is not the consequences of inequality of access to such technology in possibly creating two classes of humans, but whether we should aspire to such enhancement at all. Similarly, he has argued that the ethical problem with genetic engineering is not that it undermines the child's autonomy, as this claim "wrongly implies that absent a designing parent, children are free to choose their characteristics for themselves." Rather, he sees enhancement as hubristic, taking nature into our own hands: pursuing the fixity of enhancement is an instance of vanity. Sandel also criticizes the argument that a genetically engineered athlete would have an unfair advantage over his unenhanced competitors, suggesting that it has always been the case that some athletes are better endowed genetically than others. In short, Sandel argues that the real ethical problems with genetic engineering concern its effects on humility, responsibility and solidarity.
Humility
Sandel argues that humility is a moral virtue that will be undermined by genetic engineering. He argues that humility encourages one to 'abide the unexpected, to live with dissonance, to rein in the impulse to control,' and therefore is worth fostering in all aspects of one's life. This includes the humility of parents regarding their own genetic endowment and that of their children. Sandel's concern is that, through genetic engineering, the relationship between parent and child is "disfigured": The problem lies in the hubris of the designing parents, in their drive to master the mystery of genetics. Even if this disposition did not make parents tyrants to their children, it would disfigure the relation between parent and child, thus depriving the parent of the humility and enlarged human sympathies that an openness to the unbidden can cultivate.
Essentially, Sandel believes that in order to be a good parent with the virtue of humility, one needs to accept that their child may not progress exactly according to their expectations. Designing an athletic child, for example, is incompatible with the idea of parents having such open expectations. He argues that genetic enhancement deprives the parent of the humility that an 'openness to the unbidden' fosters. Sandel believes that parents must be prepared to love their child unconditionally and to see their children as gifts from nature, rather than entities to be defined according to parental and genetic expectations. Moreover, in the paper The Case Against Perfection, Sandel argues:
I do not think the main problem with enhancement and genetic engineering is that they undermine effort and erode human agency. The deeper danger is that they represent a kind of hyperagency—a Promethean aspiration to remake nature, including human nature, to serve our purposes and satisfy our desires".
In doing so, Sandel worries that an essential aspect of human nature - and the meaning of life derived from such, would be eroded in the process of expanding radically beyond our naturally endowed capacities. He calls this yearning the "Promethean project," which is necessarily constrained by appreciating our humility and place in nature. Sandel adds:
It is in part a religious sensibility. But its resonance reaches beyond religion.
Responsibility
Sandel argues that due to the increasing role of genetic enhancement, there will be an 'explosion' of responsibility on humanity.
He argues that genetic engineering will increase parental responsibility as "parents become responsible for choosing, or failing to choose, the right traits for their children." He believes that such responsibility will lead to genes becoming a matter of choice rather than a matter of chance. Sandel illustrates this argument through the lens of sports: in athletics, undesirable outcomes are often attributed to extrinsic factors such as lack of preparation or a lapse in discipline. With the introduction of genetically engineered athletes, Sandel believes that athletes will bear additional responsibility for their talents and performance; for example, for failing to acquire the intrinsic traits necessary for success. Sandel believes this can be extrapolated to society as a whole: individuals will be forced to shoulder more responsibility for their deficiencies in the face of increased genetic choice.
Solidarity
Sandel points out that without genetic engineering, a child is "at the mercy of the genetic lottery." Insurance markets allow a pooling of risk for the benefit of all: those who turn out to be healthy subsidise those who are not. This could be phrased more generally as: individual success is not fully determined by that individual or their parents, as genetic traits are to some extent randomly assigned from a collective pool. Sandel argues that, because we all face the same risks, social insurance schemes that rely on a sense of solidarity are possible. However, genetic enhancement gives individuals perfect genetic knowledge and increased resistance to some diseases. Enhanced individuals would not opt into such a system or such a human community, because it would involve guaranteed losses for them. They would feel no debt to their community, and social solidarity would disappear.
Sandel argues that solidarity 'arises when men and women reflect on the contingency of their talents and fortunes.'
He argues that if our genetic endowments begin to be seen as 'achievements for which we can claim credit,' society would have no obligation to share with those less fortunate. Consequently, Sandel mounts a case against the perfection of genetic knowledge because it would end the solidarity arising when people reflect on the non-necessary nature of their fortunes.
Leon Kass
In his paper “Ageless Bodies, Happy Souls," Leon Kass argues for bioconservatism. His argument was first delivered as a lecture at the Washington D.C. Ethics and Public Policy Center and later published as an article in The Atlantic. Although it was written during the time when Kass chaired the President's Council on Bioethics, the views expressed are his own, and not those of the council.
In brief, he argues that for three main reasons there is something wrong with biotechnological enhancement. Kass calls them the arguments of "the attitude of mastery," "'unnatural' means" and "dubious ends."
Before he turns to these arguments, he focuses on the distinction between "therapy" and "enhancement." While therapy has the aim of (re-)establishing the state of what could be considered as "normal" (e.g. replacement of organs), enhancement gives people an advantage over the "normal workings" of the human body (e.g. immortality). On the basis of this distinction, Kass argues, most people would support therapy, but remain sceptical towards enhancement. However, he believes this distinction is not clear, since it is hard to tell where therapy stops and enhancement begins. One reason he gives is that the "normal workings" of the human body cannot be unambiguously defined due to the variance within humans: someone may be born with perfect pitch, another deaf.
Bostrom and Roache reply to this by giving an instance where one may clearly speak of permissible enhancement. They claim that extending a life (i.e. making it longer than it would normally have been) means that one saves this particular life. Since one would believe it is morally permissible to save lives (as long as no harm is caused), they claim that there is no good reason to believe extending a life is impermissible.
The relevance of the above counterargument presented by Bostrom and Roache becomes clearer when we consider the essence of Kass's skepticism about 'enhancement.' Firstly, he labels natural human experiences like aging, death and unhappiness as preconditions of human flourishing. By extension, given that technological enhancement diminishes these preconditions and therefore hinders human flourishing, he is able to assert that enhancement is not morally permissible. That being said, Bostrom and Roache challenge Kass's inherent assumption that extending life is different from saving it. In other words, they argue that by alleviating ageing and death, someone's life is being extended, which is no different from saving their life. By this argument, the concept of human flourishing becomes irrelevant, since it is morally permissible to save someone's life regardless of whether they are leading a flourishing life or not.
The problematic attitude of biotechnological enhancement
One of Leon Kass's main arguments on this matter concerns the attitude of 'mastery'. Kass implies that although the means are present to modify human nature (both body and mind), the ends remain unknown and filled with unintended consequences.
Because we are unaware of the goodness of potential ends, Kass claims this is not mastery at all. Instead, we are acting on the momentary whims that nature exposes us to, effectively making it impossible for humanity to escape from the "grip of our own nature."
Kass builds on Sandel's argument that transhumanists fail to properly recognise the 'giftedness' of the world. He agrees that this idea is useful in that it should teach us an attitude of modesty, restraint and humility. However, he believes it will not by itself sufficiently indicate which things can be manipulated and which should be left untouched. Therefore, Kass additionally proposes that we must also respect the 'givenness' of species-specified natures – 'given' in the sense of something fixed and specified.
'Unnatural' means of biotechnological enhancement
Kass refers to biotechnological enhancement as cheating or ‘cheap,’ because it undermines the feeling of having worked hard to achieve a certain aim. He writes, “The naturalness of means matters. It lies not in the fact that the assisting drugs and devices are artifacts, but in the danger of violating or deforming the deep structure of the natural human activity.” By nature, there is "an experiential and intelligible connection between means and ends."
Kass suggests that the struggles one has to go through to achieve excellence "is not only the source of our deeds, but also their product." Therefore, they build character. He maintains that biotechnology as a shortcut does not build character but instead erodes self-control. This can be seen in how confronting fearful things might eventually enable us to cope with our fears, unlike a pill which merely prevents people from experiencing fear and thereby doesn't help us overcome it. As Kass notes, "people who take pills to block out from memory the painful or hateful aspects of new experience will not learn how to deal with suffering or sorrow. A drug to induce fearlessness does not produce courage." He contends that there is a necessity in having limited biotechnological enhancement for humans as it recognises giftedness and forges humility.
Kass notes that there are biological interventions that may assist in the pursuit of excellence without cheapening its attainment, partly because "many of life's excellences have nothing to do with competition or adversity": for example, "drugs to decrease drowsiness or increase alertness... may actually help people in their natural pursuits of learning or painting or performing their civic duty." In such cases, "the point is less the exertions of good character against hardship, but the manifestation of an alert and self-experiencing agent making his deeds flow intentionally from his willing, knowing, and embodied soul." Kass argues that we need an "intelligible connection" between means and ends in order to call our bodies, minds, and transformations genuinely our own.
'Dubious' ends of biotechnological enhancement
The case for ageless bodies is that the prevention of decay, decline, and disability, the avoidance of blindness, deafness, and debility, and the elimination of feebleness, frailty, and fatigue are conducive to living fully as a human being at the top of one's powers, with a "good quality of life" from beginning to end.
However, Kass argues that human limitation is what gives the opportunity for happiness. Firstly, he argues that "a concern with one's own improving agelessness is finally incompatible with accepting the need for procreation and human renewal." This creates a world "hostile to children," and arguably "increasingly dominated by anxiety over health and the fear of death." This is because the existence of decline and decay is precisely what allows us to accept mortality. The hostility towards children results from new generations becoming redundant to the progression of the human species if lifespans were infinite: progression and evolution of the human race would no longer arise from procreation and succession, but from the engineered enhancement of existing generations. Secondly, he explains that one needs to grieve in order to love, and that one must feel a lack to be capable of aspiration:
[...] human fulfillment depends on our being creatures of need and finitude and hence of longings and attachment.
Finally, Kass warns, "the engaged and energetic being-at-work of what nature uniquely gave to us is what we need to treasure and defend. All other perfection is at best a passing illusion, at worst a Faustian bargain that will cost us our full and flourishing humanity."
Jürgen Habermas
Jürgen Habermas has also written against genetic human enhancement. In his book “The Future of Human Nature,” Habermas rejects the use of prenatal genetic technologies to enhance offspring. Habermas rejects genetic human enhancement on two main grounds: the violation of ethical freedom, and the production of asymmetrical relationships. He broadens this discussion by examining the tensions between the evolution of science on the one hand and religion and moral principles on the other.
Violation of ethical freedom
Habermas points out that a genetic modification produces an external imposition on a person's life that is qualitatively different from any social influence. This prenatal genetic modification will most likely be chosen by one's parents, therefore threatening the ethical freedom and equality that one is entitled to as a birthright. For Habermas, the difference lies in the fact that while socialisation processes can always be contested, genetic designs cannot be, and therefore lack the openness and unpredictability of social influence. This argument builds on Habermas's magnum opus, discourse ethics. For Habermas:
Eugenic interventions aiming at enhancement reduce ethical freedom insofar as they tie down the person concerned to rejected, but irreversible intentions of third parties, barring him from the spontaneous self-perception of being the undivided author of his own life.
Asymmetrical relationships
Habermas suggested that genetic human enhancements would create asymmetric relationships that endanger democracy, which is premised on the idea of moral equality. He claims that regardless of the scope of the modifications, the very knowledge of enhancement obstructs symmetrical relationships between parents and their children. The child's genome was interfered with nonconsensually, making predecessors responsible for the traits in question. Unlike for thinkers like Fukuyama, Habermas's point is not that these traits might produce different ‘types of humans’. Rather, he places the emphasis on how others are responsible for choosing these traits. This is the fundamental difference between natural traits and human enhancement, and it is what bears decisive weight for Habermas: the child's autonomy as self-determination is violated. However, Habermas does acknowledge that, for example, making one's son very tall in the hope that he will become a basketball player does not automatically determine that he will choose this path.
However, although the opportunity can be turned down, this does not make the intervention any less of a violation, since the child has been forced into an irreversible situation. Genetic modification has two large-scale consequences. Firstly, no action the child undertakes can be ascribed to her own negotiation with the natural lottery, since a ‘third party’ has negotiated on the child's behalf. This imperils the sense of responsibility for one's own life that comes along with freedom. As such, individuals' self-understanding as ethical beings is endangered, opening the door to ethical nihilism. This is so because the genetic modification creates a type of dependence in which one of the parties does not even have the hypothetical possibility of changing social places with the other. Secondly, it becomes impossible to collectively and democratically establish moral rules through communication, since a condition for their establishment is the possibility to question assertions. Genetically modified individuals, however, can never know whether their very questioning might have been informed by enhancement, nor can they question it. That being said, Habermas acknowledges that our societies are full of asymmetric relationships, such as oppression of minorities or exploitation. However, these conditions could be different. On the contrary, genetic modification cannot be reverted once it is performed.
Criticism
The transhumanist Institute for Ethics and Emerging Technologies criticizes bioconservatism as a form of "human racism" (more commonly known as speciesism), and as being motivated by a "yuck factor" that ignores individual freedoms.
Nick Bostrom on posthuman dignity
Nick Bostrom argues that bioconservative concerns regarding the threat of transhumanism to posthuman dignity are unsubstantiated. Bostrom himself identifies with forms of posthuman dignity, and in his article In Defence of Posthuman Dignity argues that it does not contradict the ideals of transhumanism.
Bostrom argues in the article that Fukuyama's concern about the threat transhumanism poses to dignity as moral status - that transhumanism might strip away humanity's inalienable right to respect - lacks empirical evidence. He states that the proportion of people given full moral respect in Western societies has actually increased through history. This increase includes populations such as non-whites, women and non-property owners. Following this logic, it will similarly be feasible to incorporate future posthumans without compromising the dignity of the rest of the population.
Bostrom then goes on to discuss dignity in the sense of moral worthiness, which varies among individuals. He suggests that posthumans can similarly possess dignity in this sense. Further, he suggests, it is possible that posthumans, being genetically enhanced, may come to possess even higher levels of moral excellence than contemporary human beings. While he considers that certain posthumans may live more degraded lives as a result of self-enhancement, he also notes that even at this time many people are not living worthy lives either. He finds this regrettable and suggests that countermeasures such as education and cultural reforms can be helpful in curtailing such practices. Bostrom supports the morphological and reproductive freedom of human beings, suggesting that ultimately, leading whatever life one aspires to should be an inalienable right.
Reproductive freedom means that parents should be free to choose the technological enhancements they want when having a child. According to Bostrom, there is no reason to prefer the random processes of nature over human design. He dismisses claims that describe this kind of intervention as "tyranny" of the parents over their future children; in his opinion, the tyranny of nature is no different. In fact, he claims that "Had Mother Nature been a real parent, she would have been in jail for child abuse and murder."
Earlier in the paper, Bostrom also replies to Leon Kass with the claim that, in his words, "nature's gifts are sometimes poisoned and should not always be accepted." He makes the point that nature cannot be relied upon for normative standards. Instead, he suggests that transhumanism can, over time, allow for the technical improvement of "human nature," consistent with our widely held societal morals.
According to Bostrom, the way bioconservatives justify banning certain human enhancements while permitting others reveals a double standard in this line of thought. For him, a misleading conception of human dignity is to blame: we mistakenly take for granted that human nature is an intrinsic, unmodifiable set of properties. This problem, he argues, is overcome when human nature is conceived as 'dynamic, partially human-made, and improvable.' If we acknowledge that social and technological factors influence our nature, then dignity 'consists in what we are and what we have the potential to become, not in our pedigree or social origin'. It can be seen, then, that improved capabilities do not affect moral status, and that we should sustain an inclusive view that recognizes our enhanced descendants as possessors of dignity. Transhumanists reject the notion that there is a significant moral difference between enhancing human lives through technological means and doing so through other methods.
Distinguishing between types of enhancement
Bostrom discusses a criticism levelled against transhumanists by bioconservatives, that children who are biologically enhanced by certain kinds of technologies will face psychological anguish because of the enhancement. The argument runs roughly as follows:
1. Prenatal enhancements may create expectations for the individual's future traits or behaviour.
2. If the individual learns of these enhancements, this is likely to cause them psychological anguish stemming from pressure to fulfil such expectations.
3. Actions which are likely to cause individuals psychological anguish are undesirable to the point of being morally reprehensible.
4. Therefore, prenatal enhancements are morally reprehensible.
Bostrom finds that bioconservatives rely on a false dichotomy between technological enhancements that are harmful and those that are not, thus challenging premise two. Bostrom argues that children whose mothers played Mozart to them in the womb would not face psychological anguish upon discovering that their musical talents had been "prenatally programmed by her parents." He notes, however, that bioconservative writers employ analogous arguments while maintaining that technological enhancements, unlike playing Mozart in the womb, could disturb children.
Hans Jonas on reproductive freedom
Hans Jonas counters the criticisms of bio-enhanced children by questioning how free such children would be in the absence of enhancement. He argues that enhancement would increase their freedom, because enhanced physical and mental capabilities would allow for greater opportunities; the children would no longer be constrained by physical or mental deficiencies. Jonas further weakens the arguments about reproductive freedom by referencing Habermas, who argues that the freedom of offspring is restricted by the knowledge of their enhancement. To challenge this, Jonas elaborates on his notion of reproductive freedom.
Notable bioconservatives
George Annas
Dale Carrico
Francis Fukuyama (as attributed by observers)
Leon Kass
Bill McKibben
Oliver O'Donovan
Jeremy Rifkin
Wesley Smith
Michael Sandel
Edmund Pellegrino
See also
Bioluddism
Posthumanization
Techno-progressivism
Appeal to nature
References
Further reading
Gregg, Benjamin (2021). "Regulating genetic engineering guided by human dignity, not genetic essentialism". Politics and the Life Sciences, 41(1), 60–75. doi:10.1017/pls.2021.29
Savulescu, Julian (2019). "Rational Freedom and Six Mistakes of a Bioconservative", The American Journal of Bioethics, 19(7), 1–5. https://doi.org/10.1080/15265161.2019.1626642
External links
Nick Bostrom, "In defense of posthuman dignity", full text
Climate change
How Climate Change Makes Bioconservatism the Most Relevant Ideology, Chet Bowers, Truthout, 2016.
Bioengineer humans to tackle climate change, say philosophers, The Guardian, 2012, featuring Rebecca Roache
Mores
Mores (from Latin mōrēs, the plural of mōs, meaning "manner, custom, usage, or habit") are social norms that are widely observed within a particular society or culture. Mores determine what is considered morally acceptable or unacceptable within any given culture. A folkway is what is created through interaction, and that process is what organizes interactions through routine, repetition, habit and consistency.
William Graham Sumner (1840–1910), an early U.S. sociologist, introduced both the terms "mores" (1898) and "folkways" (1906) into modern sociology.
Mores are strict in the sense that they determine the difference between right and wrong in a given society, and people may be punished for violating them, a practice commonplace in societies around the world, at times through disapproval or ostracism. Examples of traditional customs and conventions that are mores include lying, cheating, causing harm, alcohol use, drug use, marriage beliefs, gossip, slander, jealousy, disgracing or disrespecting parents, refusal to attend a funeral, politically incorrect humor, sports cheating, vandalism, leaving trash, plagiarism, bribery, corruption, saving face, respecting one's elders, religious prescriptions and fiduciary responsibility.
Folkways are ways of thinking, acting and behaving in social groups which are agreed upon by the masses and are useful for the ordering of society. Folkways are spread through imitation, oral transmission or observation, and are meant to encompass the material, spiritual and verbal aspects of culture. Folkways address the problems of social life; their acceptance and application give a sense of security and order. Examples of folkways include: acceptable dress, manners, social etiquette, body language, posture, level of privacy, working hours and the five-day work week, the acceptability of social drinking (abstaining or not from drinking during certain working hours), actions and behaviours in public places, schools, universities, businesses and religious institutions, ceremonial situations, ritual, customary services and keeping personal space.
Terminology
The English word morality comes from the same Latin root "mōrēs", as does the English noun moral. However, mores do not, as is commonly supposed, necessarily carry connotations of morality. Rather, morality can be seen as a subset of mores, held to be of central importance in view of their content, and often formalized into some kind of moral code or even into customary law. Etymological derivations include More danico, More judaico, More veneto, Coitus more ferarum, and O tempora, o mores!.
The Greek terms equivalent to Latin mores are ethos (ἔθος, ἦθος, 'character') or nomos (νόμος, 'law'). As with the relation of mores to morality, ethos is the basis of the term ethics, while nomos gives the suffix -onomy, as in astronomy.
Anthropology
The meaning of all these terms extends to all customs of proper behavior in a given society, both religious and profane. They range from the more trivial conventional aspects of custom, etiquette or politeness ("folkways" enforced by gentle social pressure, yet going beyond mere convention in including moral codes and notions of justice) down to strict taboos: behavior that is unthinkable within the society in question, very commonly including incest and murder, but also outrages specific to the individual society such as blasphemy. Such religious or sacral customs may vary. Examples include funerary and matrimonial services; circumcision and the covering of the hair in Judaism; the Christian Ten Commandments, the New Commandment, the sacraments (for example baptism) and the Protestant work ethic; the Shahada, prayer, alms, the fast and the pilgrimage, as well as modesty, in Islam; and religious diets.
While cultural universals are by definition part of the mores of every society (hence also called "empty universals"), the customary norms specific to a given society are a defining aspect of the cultural identity of an ethnicity or a nation. Coping with the differences between two sets of cultural conventions is a question of intercultural competence.
Differences in the mores of various nations are at the root of ethnic stereotypes or, in the case of reflection upon one's own mores, autostereotypes.
The customary norms of a given society may include indigenous land rights, honour, filial piety, customary law, and the customary international law that affects countries that have not codified their customary norms. Land rights of indigenous peoples fall under customary land tenure, a system of arrangements in line with customs and norms, as is often the case in colonies. One example of such a norm is the culture of honor that exists in some societies, where the family is viewed as the main source of honor and the conduct of family members reflects upon that honor. For instance, some writers argue that in Rome an honorable standing, being an equal, existed among those most similar to one another (family and friends), possibly because of competition for public recognition, and therefore for personal and public honor, in rhetoric, sport, war, wealth and virtue. To stand out and be recognized, "A Roman could win such a 'competition' by pointing to past evidences of their honor", or "a critic might be refuted by one's performance in a fresh showdown in which one's bona fides could be plainly demonstrated." An honor culture can exist only where the males of a society share a code, a standard to uphold, and guidelines and rules for successful interaction that they are unwilling to break; it exists within a "closed" community of equals.
Filial piety is an ethic towards one's family; Fung Yu-lan calls it "the ideological basis for traditional [Chinese] society". According to Confucius it involves repaying a debt to one's parents or caregivers, and in a traditional sense it also fulfills an obligation to one's ancestors. For modern scholars it further extends an attitude of respect to superiors who deserve that respect.
See also
Culture-bound syndrome
Enculturation
Euthyphro dilemma, discussing the conflict of sacral and secular mores
Habitus (sociology)
Nihonjinron "Japanese mores"
Piety
Political and Moral Sociology: see Luc Boltanski and French Pragmatism
Repugnancy costs
Value (personal and cultural)
References
Structural equation modeling
Structural equation modeling (SEM) is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences but it is also used in epidemiology, business, and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself.
SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.
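For concreteness, a minimal hypothetical system (illustrative only, not the specific models of Figures 1 and 2) could contain one latent cause ξ measured by two indicators and one latent outcome η measured by two indicators, written as the following equations, where the λ terms are loadings, γ is the structural effect, and δ, ε and ζ are error terms:

```latex
% Minimal illustrative SEM: one exogenous latent (xi) measured by x1 and x2,
% one endogenous latent (eta) measured by y1 and y2; all symbols are illustrative.
\[
\begin{aligned}
x_1 &= \lambda_{x1}\,\xi + \delta_1, \qquad x_2 = \lambda_{x2}\,\xi + \delta_2,\\
y_1 &= \lambda_{y1}\,\eta + \varepsilon_1, \qquad y_2 = \lambda_{y2}\,\eta + \varepsilon_2,\\
\eta &= \gamma\,\xi + \zeta .
\end{aligned}
\]
```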
The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.
SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.
A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.
History
Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book, and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation and closed-form algebraic calculations, as iterative solution-search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error-adjustment, though not necessarily error-free estimation, of the effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and over whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with the path analytic appreciation for testing postulated causal connections, where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. The economic version of SEM can be seen in SEMNET discussions of endogeneity, and in the heat produced as Judea Pearl's approach to causality via directed acyclic graphs (DAGs) rubs against economic approaches to modeling. Discussions comparing and contrasting various SEM approaches are available, but disciplinary differences in data structures and the concerns motivating economic models make reunion unlikely. Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.
SEM analyses are popular in the social sciences because computer programs make it possible to estimate complicated causal structures, but the complexity of the models introduces substantial variability in the quality of the results. Some, but not all, results are obtained without the "inconvenience" of understanding experimental design, statistical control, the consequences of sample size, and other features contributing to good research design.
General steps and considerations
The following considerations apply to the construction and assessment of many structural equation models.
Model specification
Building or specifying a model requires attending to:
the set of variables to be employed,
what is known about the variables,
what is presumed or hypothesized about the variables' causal connections and disconnections,
what the researcher seeks to learn from the modeling,
and the cases for which values of the variables will be available (kids? workers? companies? countries? cells? accidents? cults?).
Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
which effects and/or correlations/covariances are to be included and estimated,
which effects and other coefficients are forbidden or presumed unnecessary,
and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
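In the standard matrix notation behind such specifications (details vary slightly across texts and programs), the latent-level and measurement-level equations take roughly the following form, with B and Γ holding the structural effects and Λy and Λx holding the loadings:

```latex
% General LISREL-style matrix form (standard notation; program defaults differ).
\[
\begin{aligned}
\eta &= B\,\eta + \Gamma\,\xi + \zeta      &&\text{(latent structural model)}\\
y    &= \Lambda_y\,\eta + \varepsilon      &&\text{(measurement of endogenous latents)}\\
x    &= \Lambda_x\,\xi + \delta            &&\text{(measurement of exogenous latents)}
\end{aligned}
\]
```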
Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEM's latent structural connections.
Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects, and other causal loops, may also interfere with estimation. A commonly cited counting condition is shown below.
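One necessary (though not sufficient) counting condition, often called the t-rule, makes this limit explicit: with p observed variables the data supply p(p+1)/2 distinct variances and covariances, so the number of free coefficients t cannot exceed that count:

```latex
% The "t-rule": a necessary (not sufficient) counting condition for identification,
% with p observed variables and t free coefficients.
\[
t \;\le\; \frac{p\,(p+1)}{2}
\]
```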
Estimation of free model coefficients
Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model was correctly specified, namely if all the model's estimated features correspond to real worldly features.
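As a rough sketch of this logic (a toy illustration, not the implementation of any particular SEM program), the following Python code estimates the free coefficients of a saturated path model x → y by numerically minimizing the maximum-likelihood discrepancy between the sample covariance matrix and the model-implied covariance matrix; the simulated data, variable names, and starting values are assumptions made for the example:

```python
# Toy sketch: minimize the ML discrepancy
#   F_ML = ln|Sigma(theta)| + tr(S Sigma(theta)^-1) - ln|S| - p
# between the sample covariance matrix S and the model-implied matrix Sigma(theta)
# for a saturated path model x -> y with free coefficients (b, var(x), residual var).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)      # "true" effect of x on y is 0.6
S = np.cov(np.column_stack([x, y]), rowvar=False)
p = S.shape[0]

def implied_cov(theta):
    b, phi, psi = theta                          # effect x->y, var(x), residual var of y
    return np.array([[phi,     b * phi],
                     [b * phi, b**2 * phi + psi]])

def f_ml(theta):
    sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:                                # keep the search inside admissible matrices
        return np.inf
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

result = minimize(f_ml, x0=[0.0, 1.0, 1.0], method="Nelder-Mead")
b_hat, phi_hat, psi_hat = result.x
print(b_hat)   # for this saturated model, close to the OLS slope s_xy / s_xx
```

Because the toy model is saturated (three free coefficients, three distinct data moments), it fits perfectly; the point of the sketch is only to show how estimates arise from minimizing a discrepancy function.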
The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.
One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the other effect being stronger than the one, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
Model assessment
Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
whether the data contain reasonable measurements of appropriate variables,
whether the modeled cases are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
whether the estimates are statistically justifiable, (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.)
the substantive reasonableness of the estimates, (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)
Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports the match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ² (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces, with the remaining differences attributed to random sampling variations.
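One common formulation of this test statistic, assuming the maximum-likelihood discrepancy function F_ML, a sample of N cases, p observed variables and t free coefficients, is:

```latex
% Likelihood-ratio model test statistic; the chi-square distribution holds only if the
% model is correctly specified and the distributional assumptions are met.
\[
T = (N-1)\,F_{\mathrm{ML}}\bigl(S,\hat{\Sigma}\bigr) \;\sim\; \chi^{2}_{df},
\qquad df = \frac{p\,(p+1)}{2} - t
\]
```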
If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model, because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to . The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.
Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ² increases (and hence its probability decreases) with increasing sample size (N). There are two mistakes in discounting χ² on this basis. First, for proper models, χ² does not increase with increasing N, so if χ² increases with N that itself is a sign that something is detectably problematic. Second, for models that are detectably misspecified, the increase of χ² with N provides the good news of increasing statistical power to detect model misspecification (namely, a reduced risk of Type II error). Some kinds of important misspecifications cannot be detected by χ², so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ² model test, possibly adjusted, is the strongest available structural equation model test.
Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well, have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment.
This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, why he had added GFI to his LISREL program, Jöreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models that intentionally bury evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.
Whether or not researchers are committed to seeking the world's structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline's substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with the use of fewer, more precise indicators of similar yet importantly-different latent variables.
The considerations relevant to using fit indices include checking:
whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criterion based on factor structured models are only appropriate if the researcher's model actually is factor structured);
whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criterion are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together.);
whether a χ² model test is, or is not, available. (A χ² value, degrees of freedom, and probability will be available for models reporting indices based on χ².)
and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ across medical, business, social and psychological contexts).
Some of the more commonly used fit statistics include
Chi-square
A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
Akaike information criterion (AIC)
An index of relative model fit: The preferred model is the one with the lowest AIC value.
AIC = 2k - 2ln(L), where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
Root Mean Square Error of Approximation (RMSEA)
Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
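One common sample-based formulation (population definitions and small-sample variants differ slightly) computes the RMSEA from the model chi-square, its degrees of freedom, and the sample size N:

```latex
% One common sample-based formulation of the RMSEA.
\[
\mathrm{RMSEA} = \sqrt{\frac{\max\bigl(\chi^{2} - df,\; 0\bigr)}{df\,(N-1)}}
\]
```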
Standardized Root Mean Squared Residual (SRMR)
The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
Comparative Fit Index (CFI)
In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.
The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.
Sample size, power, and estimation
Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.
The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.
Interpretation
Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.
SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.
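As a small illustration of how indirect and total effects follow from the direct-effect coefficients (a toy recursive model, not any package's API; the coefficient names a, b, c and their values are assumptions for the example), consider x → m → y with an additional direct path x → y:

```python
# Toy sketch: indirect and total effects from direct-effect coefficients
# in a recursive path model x -> m -> y plus a direct path x -> y.
import numpy as np

a, b, c = 0.5, 0.4, 0.2          # direct effects: x->m, m->y, x->y (illustrative values)

# Among the endogenous variables (m, y), B[i, j] holds the direct effect of j on i.
B = np.array([[0.0, 0.0],        # nothing points into m from y
              [b,   0.0]])       # m -> y
# Gamma[i, k] holds the direct effect of exogenous variable k (here only x) on i.
Gamma = np.array([[a],           # x -> m
                  [c]])          # x -> y

total = np.linalg.inv(np.eye(2) - B) @ Gamma   # reduced-form total effects of x
indirect = total - Gamma                       # total minus direct

print(total[1, 0])               # c + a*b = 0.4  (total effect of x on y)
print(indirect[1, 0])            # a*b     = 0.2  (indirect effect via m)
```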
SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable's values, but the causal details of precisely what makes this happen remain unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two effected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.
The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.
The caution appearing in the Model Assessment section warrants repeat. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.
Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.
The multiple ways of conceptualizing PLS models complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.
Caution is warranted when making causal claims even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions; it may or may not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.
Controversies and movements
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor scores for later use in path-structured models. This constitutes a stepwise process: an initial measurement step provides scales or factor scores that are then used in a path-structured model. The stepwise approach seems obvious but confronts severe underlying deficiencies. Segmentation into steps interferes with thorough checking of whether the scales or factor scores validly represent the indicators and validly report on latent-level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether each latent factor appropriately coordinates its indicators, it also checks whether that same latent simultaneously coordinates its indicators with the indicators of the theorized causes and/or consequences of that latent. If a latent is unable to do both styles of coordination, the validity of that latent is questioned, and so is any scale or set of factor scores purporting to measure it. Disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser, followed by several comments and a rejoinder, all made freely available thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussion. Scholars with path-modeling histories tended to defend careful model testing, while those with factor-analytic histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett, who said: "In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model "acceptability" or "degree of misfit"." (page 821). Barrett's article was accompanied by commentary from both perspectives.
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence merely because they dislike what the evidence reports. The requirement of attending to evidence pointing toward model misspecification underpins more recent concern for addressing "endogeneity", a style of model misspecification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models, and the comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in SEM.
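A minimal sketch of the kind of model-data consistency test at issue here is the likelihood-ratio chi-square test comparing a sample covariance matrix with a model-implied covariance matrix. In the sketch below the implied matrix is built from assumed one-factor parameter values standing in for maximum-likelihood estimates, and the covariance entries and sample size are illustrative; an SEM program would estimate the parameters before computing the statistic.

```python
import numpy as np
from scipy import stats

# Sample covariance matrix S for four indicators (illustrative numbers)
S = np.array([
    [1.00, 0.45, 0.40, 0.10],
    [0.45, 1.00, 0.42, 0.12],
    [0.40, 0.42, 1.00, 0.08],
    [0.10, 0.12, 0.08, 1.00],
])
N = 300          # sample size
p = S.shape[0]   # number of observed variables

# Implied covariance of a one-factor model with assumed (not estimated)
# loadings and error variances: Sigma = L L' + Theta
loadings = np.array([0.70, 0.65, 0.62, 0.15])
theta = np.diag(1 - loadings**2)
Sigma = np.outer(loadings, loadings) + theta

# Maximum-likelihood fit function and chi-square test statistic
_, logdet_Sigma = np.linalg.slogdet(Sigma)
_, logdet_S = np.linalg.slogdet(S)
F_ml = logdet_Sigma - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p
T = (N - 1) * F_ml

# Degrees of freedom: unique covariance moments minus free parameters
# (4 loadings + 4 error variances treated as free here)
df = p * (p + 1) // 2 - 8
p_value = stats.chi2.sf(T, df)
print(f"chi-square = {T:.2f}, df = {df}, p = {p_value:.4f}")
```

A small p-value would constitute the significant model-data inconsistency that, on the testing side of the debate, must be reported rather than glossed over with a favorable fit index.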
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy, touching the fringes of the previous ones, awaits ignition. Factor models and theory-embedded factor structures with multiple indicators tend to fail, and dropping weak indicators tends to reduce model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to the factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying common factor cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor-analytic perspective.
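A minimal sketch of the arithmetic often used when a single indicator carries a latent variable: with the loading fixed at 1.0, the indicator's error variance is fixed at (1 - reliability) * observed variance, where the reliability value is an assumption the researcher must defend rather than something estimable from the single indicator itself. The data and reliability below are illustrative.

```python
import numpy as np

# Observed scores for a single indicator of a latent variable (illustrative)
rng = np.random.default_rng(1)
x = rng.normal(loc=50.0, scale=10.0, size=400)

assumed_reliability = 0.80   # researcher-defended assumption, not estimated here
observed_variance = x.var(ddof=1)

# With the loading fixed at 1.0, the fixed error variance is
# (1 - reliability) * observed variance; the remainder is latent variance.
fixed_error_variance = (1 - assumed_reliability) * observed_variance
implied_latent_variance = assumed_reliability * observed_variance

print(f"observed variance       = {observed_variance:.2f}")
print(f"fixed error variance    = {fixed_error_variance:.2f}")
print(f"implied latent variance = {implied_latent_variance:.2f}")
```

The fixed error variance would then be entered into the SEM as a constrained parameter for that indicator, so the latent-level effects are estimated net of the assumed measurement error.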
Though declining, traces of these controversies are scattered throughout the SEM literature, and disagreement is easily incited by asking: What should be done with models that are significantly inconsistent with the data? Does model simplicity override respect for evidence of data inconsistency? What weight should be given to indices showing close, or not-so-close, data fit for some models? Should we be especially lenient toward, and "reward", parsimonious models that are inconsistent with the data? And given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, does it follow that people testing models with null hypotheses of non-zero RMSEA are doing deficient model testing? Considerable statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
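The RMSEA questioned above is computed from the model chi-square, its degrees of freedom, and the sample size. The sketch below uses one common formula and illustrative values; it shows that the same amount of misfit per degree of freedom yields the same RMSEA no matter how many degrees of freedom the model has, which is the leniency the paragraph refers to.

```python
import numpy as np

def rmsea(chi_square: float, df: int, n: int) -> float:
    """One common RMSEA formula: sqrt(max((chi2 - df) / (df * (n - 1)), 0))."""
    return float(np.sqrt(max((chi_square - df) / (df * (n - 1)), 0.0)))

# Illustrative values: the same misfit per degree of freedom is condoned
# regardless of how many degrees of freedom the model has.
for chi2_val, df_val in [(20.0, 10), (200.0, 100)]:
    print(f"chi2={chi2_val:6.1f}, df={df_val:3d}, "
          f"RMSEA={rmsea(chi2_val, df_val, n=300):.3f}")
```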
Extensions, modeling alternatives, and statistical kin
Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling
Software
Structural equation modeling programs differ widely in their capabilities and user requirements.
See also
References
Bibliography
Further reading
Bartholomew, D. J., and Knott, M. (1999). Latent Variable Models and Factor Analysis. Kendall's Library of Statistics, vol. 7. Edward Arnold Publishers.
Bentler, P. M., & Bonett, D. G. (1980). "Significance tests and goodness of fit in the analysis of covariance structures". Psychological Bulletin, 88, 588–606.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley.
Byrne, B. M. (2001). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. LEA.
Goldberger, A. S. (1972). "Structural equation models in the social sciences". Econometrica, 40, 979–1001.
Hoyle, R. H. (ed.) (1995). Structural Equation Modeling: Concepts, Issues, and Applications. SAGE.
External links
Structural equation modeling page under David Garson's StatNotes, NCSU
Issues and Opinion on Structural Equation Modeling, SEM in IS Research
The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM
Graphical models
Latent variable models
Regression models
Structural equation models
Bioecological model

The bioecological model of development is the mature and final revision of Urie Bronfenbrenner's ecological systems theory. The primary focus of ecological systems theory is the systemic examination of contextual variability in developmental processes; it focuses on the world outside the developing person and how the person is affected by it. After publication of The Ecology of Human Development, Bronfenbrenner's first comprehensive statement of ecological systems theory, additional refinements were added to the theory. Whereas earlier statements of ecological systems theory focused on characteristics of the environment, the goal of the bioecological model was to explicate how characteristics of the developing person influence the environments to which the person is exposed, and how the person is affected by those environments. The bioecological model is strongly influenced by Bronfenbrenner's collaborations with Stephen Ceci. Whereas much of Bronfenbrenner's work focused on social development and the influence of social environments on development, Ceci's work focuses on memory and intelligence. The bioecological model reflects Ceci's work on contextual variability in intelligence and cognition and Bronfenbrenner's interest in developmentally instigative characteristics: how people help to create their own environments.
Evolution of Bronfenbrenner's theory
Bronfenbrenner's initial investigations into contextual variability in developmental processes can be seen in the 1950s in his analysis of differences in methods of parental discipline as a function of historical time and social class. This work was further developed in his studies of the differential effects of parental discipline on boys and girls in the 1960s, and of the convergence of socialization processes in the US and USSR in the 1970s. These ideas were expressed in the experimental variations built into the development and implementation of the Head Start program. Bronfenbrenner informally discussed new ideas concerning ecological systems theory throughout the late 1970s and early 1980s during lectures and presentations to the psychological community. He published a major statement of ecological systems theory in American Psychologist, articulated it in a series of propositions and hypotheses in his most cited book, The Ecology of Human Development, and further developed it in The Bioecological Model of Human Development and later writings.
Bronfenbrenner's early thinking was strongly influenced by other developmentalists and social psychologists who studied developmental processes as contextually bound and dependent on the meaning of experience as defined by the developing person. One strong influence was Lev Vygotsky, a Russian psychologist who emphasized that learning always occurs within, and cannot be separated from, a social context. A second influence was Kurt Lewin, a German forerunner of ecological systems models who focused on a person's psychological activities as occurring within a kind of psychological field, including all the events in the past, present, and future that shape and affect the individual. The centrality of the person's interpretation of their environment, and the theory's phenomenological character, built on the work of Thomas and Thomas: "(i)f men define situations as real they are real in their consequences".
Bronfenbrenner was also influenced by his colleague, Stephen J. Ceci, with whom he co-authored the article “Nature-nurture reconceptualized in developmental perspective: A bioecological theory” in 1994. Ceci is a developmental psychologist who redefined modern developmental psychology's approach to intellectual development. He focused on predicting a pattern of associations among ecological, genetic, and cognitive variables as a function of proximal processes. Together, Bronfenbrenner and Ceci published the beginnings of the bioecological model and made it an accessible framework to use in understanding developmental processes.
History
The history of bioecological systems theory is divided into two periods. The first period culminated in the publication of Bronfenbrenner's statement of ecological systems theory, The Ecology of Human Development, in 1979. Bronfenbrenner described the second period as a time of criticism and evaluation of his original work.
The development of ecological systems theory arose because Bronfenbrenner noted a lack of focus on the role of context in development. He argued that the environment in which children operate is important because development may be shaped by their interactions with that specific environment. He urged his colleagues to study development in terms of ecological contexts, that is, the normal environments of children (schools, homes, daycares). Researchers heeded his advice, and a great deal of context-focused research flourished in the early 1980s.
However, where prior research had ignored context, Bronfenbrenner felt the new research focused too much on context and ignored development. In his justification for a new theory, Bronfenbrenner wrote that he was not pleased with the direction of research in the mid-1980s and that he felt other realms of development were being overlooked.
In comparison to the original theory, bioecological systems theory adds more emphasis to the person in the context of development. Additionally, Bronfenbrenner chose to leave out key features of the ecological systems theory (e.g., ecological validity and ecological experiments) during his development of bioecological systems theory. As a whole, Bronfenbrenner's new theory continued to go through a series of transformations as he continuously analyzed different factors in human development. Critical components of bioecological systems theory did not emerge all at once. Instead, his ideas evolved and adapted to the research and ideas of the times. For example, the role of proximal processes, which is now recognized as a key feature of bioecological systems theory, did not emerge until the 1990s. This theory went through a series of transformations and elaborations until 2005 when Bronfenbrenner died.
Process–Person–Context–Time
Bronfenbrenner further developed the model by adding the chronosystem, which refers to how the person and environments change over time. He also placed a greater emphasis on processes and the role of the biological person. The Process–Person–Context–Time Model (PPCT) has since become the bedrock of the bioecological model. PPCT includes four concepts. The interactions between the concepts form the basis for the theory.
1. Process – Bronfenbrenner viewed proximal processes as the primary mechanism for development, featuring them in two central propositions of the bioecological model.
Proposition 1: [H]uman development takes place through processes of progressively more complex reciprocal interaction between an active, evolving biopsychological human organism and the persons, objects, and symbols in its immediate external environment. To be effective, the interaction must occur on a fairly regular basis over extended periods of time. Such enduring forms of interaction in the immediate environment are referred to as proximal processes.
Proximal processes are the development processes of systematic interaction between person and environment. Bronfenbrenner identifies group and solitary activities such as playing with other children or reading as mechanisms through which children come to understand their world and formulate ideas about their place within it. However, processes function differently depending on the person and the context.
Proposition 2: The form, power, content, and direction of the proximal processes effecting development vary systematically as a joint function of the characteristics of the developing person; of the environment—both immediate and more remote—in which the processes are taking place; the nature of the developmental outcomes under consideration; and the social continuities and changes occurring over time through the life course and the historical period during which the person has lived.
2. Person – Bronfenbrenner acknowledged the role that personal characteristics of individuals play in social interactions. He identified three personal characteristics that can significantly influence proximal processes across the lifespan. Demand characteristics such as age, gender or physical appearance set processes in motion, acting as “personal stimulus” characteristics. Resource characteristics are not as immediately recognizable and include mental and emotional resources such as past experiences, intelligence, and skills as well as material resources such as access to housing, education, and responsive caregivers. Force characteristics are related to variations in motivation, persistence and temperament. Bronfenbrenner notes that even when children have equivalent access to resources, their developmental courses may differ as a function of characteristics such as drive to succeed and persistence in the face of hardship. In doing this, Bronfenbrenner provides a rationale for how environments (i.e., the systems mentioned above under “The Original Model: Ecological Systems Theory”) influence personal characteristics, yet also suggests personal characteristics can change environments.
3. Context – Context involves five interconnected systems, which are based on Bronfenbrenner’s original model, ecological systems theory. The microsystem describes environments such as home or school in which children spend significant time interacting. Mesosystems are interrelations between microsystems. The exosystem describes events that have important indirect influence on development (e.g., a parent consistently working late). The macrosystem is a feature of any group (culture, subculture) that share values and belief systems. The chronosystem describes historical circumstances that affect contexts at all other levels.
4. Time – Time has a prominent place in this developmental model. It is constituted at three levels: micro, meso, and macro. Micro-time refers to what is happening during specific episodes of proximal processes. Meso-time refers to the extent to which the processes occur in the person’s environment, such as over the course of days, weeks or years. Macro-time (or the chronosystem) focuses on the shifting expectancies in wider culture. This functions both within and across generations and affects proximal processes across the lifespan.
Thus, the bioecological model highlights the importance of understanding a person's development within environmental systems. It further explains that both the person and the environment affect one another bidirectionally. Although even Bronfenbrenner himself critiqued the falsifiability of the model, the bioecological model has real world applications for developmental research, practice, and policies (as demonstrated below).
Research implications
In addition to adding to the theoretical understanding of human development, the bioecological model lends itself to changes in the conceptualization of the research endeavor. In some of his earliest comments on the state of developmental research, Bronfenbrenner lamented that developmental research concerned itself with studying “strange behavior of children in strange situations for the briefest possible period of time”. He proposed, rather, that developmental science should take as its goal a study of children in context in order to best determine which processes are naturally “developmentally generative” (promote development) and which are naturally “developmentally disruptive” (prevent development).
Bronfenbrenner set up a contrast to the traditional “confirmatory” approach to hypothesis testing (in which research is done to “confirm” that a hypothesis is correct or incorrect) when specifying the types of research needed to support the bioecological model of development. In Bronfenbrenner's view, the dynamic nature of the model calls for “primarily generative” research designs that explore interactions between proximal processes (see Proposition 1) and the developing person, environment, time, and developmental outcome (Proposition 2). Bronfenbrenner called this type of research the “discovery mode” of developmental science.
To best capture such dynamic processes, developmental research designs would ideally be longitudinal (over time), rather than cross-sectional (a single point in time), and conducted in children's natural environments, rather than a laboratory. Such designs would thus occur in schools, homes, day-care centers, and other environments in which proximal processes are most likely to occur. The bioecological model also proposes that the most scientifically rich studies would include more than one distinct but theoretically related proximal process in the same design. Indeed, studies that claim to be based upon bioecological theory should include elements of process, person, context, and time, and should include explicit explanation and acknowledgement if one of the elements is lacking. Based on the interactions of proposed elements of the PPCT model, appropriate statistical analyses of PPCT data would likely include explorations of mediation and moderation effects, as well as multilevel modeling of data to account for the nesting of different components of the model. Moreover, research that includes both genetic and environmental components would capture even more of the bioecological model's elements.
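As a rough illustration of the analyses suggested above, the following sketch fits a random-intercept multilevel model with a process-by-resource interaction (a moderation effect) using the statsmodels mixed-effects API on simulated data. The variable names (proximal_process, resource, classroom) and effect sizes are illustrative assumptions, not measures prescribed by the bioecological model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated PPCT-style data: children nested within classrooms
n_classrooms, n_children = 30, 20
classroom = np.repeat(np.arange(n_classrooms), n_children)
classroom_effect = rng.normal(scale=0.5, size=n_classrooms)[classroom]

proximal_process = rng.normal(size=classroom.size)   # e.g. shared reading frequency
resource = rng.normal(size=classroom.size)           # e.g. access to materials

# Outcome with a process effect that is moderated by resources
outcome = (0.4 * proximal_process
           + 0.2 * resource
           + 0.3 * proximal_process * resource
           + classroom_effect
           + rng.normal(size=classroom.size))

df = pd.DataFrame({
    "outcome": outcome,
    "proximal_process": proximal_process,
    "resource": resource,
    "classroom": classroom,
})

# Random-intercept multilevel model with a process-by-resource interaction
model = smf.mixedlm("outcome ~ proximal_process * resource", df,
                    groups=df["classroom"])
result = model.fit()
print(result.summary())
```

The random intercept accounts for the nesting of children within classrooms, while the interaction term captures the idea that the strength of a proximal process varies with the person's resource characteristics.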
Ecological techno-subsystem
Ecological systems theory emerged before the Internet revolution, and the developmental influence of then-available technology (e.g., television) was conceptually situated in the child's microsystem. Johnson and Puplampu, for instance, proposed in 2008 the ecological techno-subsystem, a dimension of the microsystem. This subsystem comprises the child's interaction with both living (e.g., peers, parents, teachers) and non-living (e.g., hardware, gadgets) elements of communication, information, and recreation technologies in immediate or direct environments. Johnson published a validation study in 2010.
Neo-ecological Theory
Whereas the theory of the techno-subsystem merely highlights the influence that digital technologies have on an individual's development within the microsystem, Navarro and Tudge argue that the virtual world should be given its own consideration throughout the bioecological model. They suggest two key modifications as a way to adapt Bronfenbrenner's theory to our technologized world:
The microsystem should be delineated to include distinct forms in which an individual lives: physical microsystem and virtual microsystem.
The role of the macrosystem, specifically the cultural influence of digital technology, should be emphasized in understanding human development.
See also
Ecological systems theory
Diathesis-stress model
References
Genetics
Developmental psychology
Systems psychology
Population bottleneck

A population bottleneck or genetic bottleneck is a sharp reduction in the size of a population due to environmental events such as famines, earthquakes, floods, fires, disease, and droughts, or human activities such as genocide, speciocide, widespread violence or intentional culling. Such events can reduce the variation in the gene pool of a population; thereafter, a smaller population, with correspondingly smaller genetic diversity, remains to pass on genes to future generations of offspring. Genetic diversity remains lower, increasing only when gene flow from another population occurs, or increasing only very slowly as random mutations accumulate over time. This results in a reduction in the robustness of the population and in its ability to adapt to and survive selecting environmental changes, such as climate change or a shift in available resources. Alternatively, if the survivors of the bottleneck are the individuals with the greatest genetic fitness, the frequency of the fitter genes within the gene pool increases, while the pool itself shrinks.
The genetic drift caused by a population bottleneck can change the relative frequencies of alleles and even lead to the loss of alleles altogether. The chances of inbreeding and genetic homogeneity increase, possibly leading to inbreeding depression. Smaller population size can also cause deleterious mutations to accumulate.
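A minimal simulation sketch of how drift through a bottleneck can eliminate alleles, using simple Wright-Fisher-style binomial sampling of allele copies each generation. The population sizes, bottleneck length, and starting allele frequency are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_allele_frequency(p0, sizes):
    """Wright-Fisher-style drift: each generation, 2N allele copies are
    binomially sampled using the previous generation's frequency."""
    p = p0
    for n in sizes:
        p = rng.binomial(2 * n, p) / (2 * n)
    return p

# Ten generations at N=1000, a five-generation bottleneck at N=20, then recovery
sizes = [1000] * 10 + [20] * 5 + [1000] * 10
p0 = 0.10  # starting frequency of the focal allele

runs = np.array([simulate_allele_frequency(p0, sizes) for _ in range(2000)])
print("proportion of runs in which the allele was lost:", np.mean(runs == 0.0))
print("mean final frequency among surviving runs:", runs[runs > 0].mean())
```

Repeating the simulation without the bottleneck generations shows far fewer losses, which is the drift effect the paragraph describes.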
Population bottlenecks play an important role in conservation biology (see minimum viable population size) and in the context of agriculture (biological and pest control).
Minimum viable population size
In conservation biology, minimum viable population (MVP) size helps to determine the effective population size when a population is at risk for extinction. The effects of a population bottleneck often depend on the number of individuals remaining after the bottleneck and how that compares to the minimum viable population size.
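One reason a brief bottleneck matters so much is that long-term effective population size is approximately the harmonic mean of the per-generation sizes, which is dominated by the smallest generations. The sketch below applies that standard approximation to illustrative census numbers; it is a simplification that ignores unequal sex ratios and variance in reproductive success.

```python
import numpy as np

def harmonic_mean_ne(census_sizes):
    """Long-term effective size approximated by the harmonic mean of
    per-generation sizes: Ne = t / sum(1 / N_i)."""
    sizes = np.asarray(census_sizes, dtype=float)
    return len(sizes) / np.sum(1.0 / sizes)

stable = [1000] * 20
with_bottleneck = [1000] * 19 + [25]   # a single-generation crash to 25

print("Ne, stable population:        ", round(harmonic_mean_ne(stable), 1))
print("Ne, one-generation bottleneck:", round(harmonic_mean_ne(with_bottleneck), 1))
```

Even a single generation at 25 individuals pulls the long-term effective size far below the census size of 1000, which is why bottlenecked populations may remain below a minimum viable population size long after their numbers recover.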
Founder effects
A slightly different form of bottleneck can occur if a small group becomes reproductively (e.g., geographically) separated from the main population, such as through a founder event, e.g., if a few members of a species successfully colonize a new isolated island, or from small captive breeding programs such as animals at a zoo. Alternatively, invasive species can undergo population bottlenecks through founder events when introduced into their invaded range.
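A minimal sketch of why founder events strip rare alleles: under random sampling of gametes, the probability that an allele at frequency p in the source population is absent from a group of k diploid founders is (1 - p)^(2k). The frequencies and founder counts below are illustrative.

```python
def prob_allele_missing(p: float, founders: int) -> float:
    """Probability that an allele at frequency p is absent from a founder
    group of the given (diploid) size, assuming random sampling of gametes."""
    return (1.0 - p) ** (2 * founders)

for p in (0.01, 0.05, 0.20):
    for founders in (5, 20, 100):
        print(f"p={p:.2f}, founders={founders:3d}: "
              f"P(missing)={prob_allele_missing(p, founders):.3f}")
```

Small founder groups therefore start with a systematically trimmed allele pool even before any subsequent drift occurs.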
Examples
Humans
According to a 1999 model, a severe population bottleneck, or more specifically a full-fledged speciation, occurred among a group of Australopithecina as they transitioned into the species known as Homo erectus two million years ago. It is believed that additional bottlenecks must have occurred since Homo erectus began walking the Earth, but current archaeological, paleontological, and genetic data are inadequate to give much reliable information about such conjectured bottlenecks. Nonetheless, a 2023 genetic analysis discerned such a human-ancestor population bottleneck, a decline from a possible 100,000 to about 1,000 individuals, "around 930,000 and 813,000 years ago [which] lasted for about 117,000 years and brought human ancestors close to extinction."
A 2005 study from Rutgers University theorized that the pre-1492 native populations of the Americas are the descendants of only 70 individuals who crossed the land bridge between Asia and North America.
The Neolithic Y-chromosome bottleneck refers to a period around 5000 BC when diversity in the male Y chromosome dropped precipitously, to a level equivalent to reproduction occurring with a ratio between men and women of 1:17. Discovered in 2015, the pattern suggests that the reason for the bottleneck was not a reduction in the number of males, but a drastic decrease in the percentage of males with reproductive success.
Toba catastrophe theory
The controversial Toba catastrophe theory, presented in the late 1990s to early 2000s, suggested that a bottleneck of the human population occurred approximately 75,000 years ago, proposing that the human population was reduced to perhaps 10,000–30,000 individuals when the Toba supervolcano in Indonesia erupted and triggered a major environmental change. Parallel bottlenecks were proposed to exist among chimpanzees, gorillas, rhesus macaques, orangutans and tigers. The hypothesis was based on geological evidence of sudden climate change and on coalescence evidence of some genes (including mitochondrial DNA, Y-chromosome DNA and some nuclear genes) and the relatively low level of genetic variation in humans.
However, subsequent research, especially in the 2010s, appeared to refute both the climate argument and the genetic argument. Recent research shows the extent of climate change was much smaller than believed by proponents of the theory.
In 2000, a Molecular Biology and Evolution paper suggested a transplanting model or a 'long bottleneck' to account for the limited genetic variation, rather than a catastrophic environmental change. This would be consistent with suggestions that in sub-Saharan Africa numbers could have dropped at times as low as 2,000, for perhaps as long as 100,000 years, before numbers began to expand again in the Late Stone Age.
Other animals
European bison, also called wisent (Bison bonasus), faced extinction in the early 20th century. The animals living today are all descended from 12 individuals and they have extremely low genetic variation, which may be beginning to affect the reproductive ability of bulls.
The population of American bison (Bison bison) fell due to overhunting, nearly leading to extinction around the year 1890, though it has since begun to recover.
A classic example of a population bottleneck is that of the northern elephant seal, whose population fell to about 30 in the 1890s. Although it now numbers in the hundreds of thousands, the potential for bottlenecks within colonies remains. Dominant bulls are able to mate with the largest number of females—sometimes as many as 100. With so much of a colony's offspring descended from just one dominant male, genetic diversity is limited, making the species more vulnerable to diseases and genetic mutations.
The golden hamster is a similarly bottlenecked species, with the vast majority of domesticated hamsters descended from a single litter found in the Syrian desert around 1930, and very few wild golden hamsters remain.
An extreme example of a population bottleneck is the New Zealand black robin, of which every specimen today is a descendant of a single female, called Old Blue. The black robin population is still recovering from its low point of only five individuals in 1980.
The genome of the giant panda shows evidence of a severe bottleneck about 43,000 years ago. There is also evidence of at least one primate species, the golden snub-nosed monkey, that also suffered from a bottleneck around this time. An unknown environmental event is suspected to have caused the bottlenecks observed in both of these species. The bottlenecks likely caused the low genetic diversity observed in both species.
Other facts can sometimes be inferred from an observed population bottleneck. Among the Galápagos Islands giant tortoises—themselves a prime example of a bottleneck—the comparatively large population on the slopes of the Alcedo volcano is significantly less diverse than four other tortoise populations on the same island. DNA analyses date the bottleneck to around 88,000 years before present (YBP). About 100,000 YBP the volcano erupted violently, deeply burying much of the tortoise habitat in pumice and ash.
Another example can be seen in the greater prairie chicken, which was prevalent in North America until the 20th century. In Illinois alone, the number of greater prairie chickens plummeted from over 100 million in 1900 to about 46 in 1998. These declines were the result of hunting and habitat destruction, but their random consequences have also caused a great loss in species diversity. DNA analysis comparing birds from 1990 and from mid-century shows a steep genetic decline in recent decades. Management of the greater prairie chicken now includes genetic rescue efforts, including the translocation of prairie chickens between leks to increase each population's genetic diversity.
Population bottlenecking also poses a major threat to the stability of species populations. Papilio homerus is the largest butterfly in the Americas and is endangered according to the IUCN. The disappearance of a central population poses a major threat of a population bottleneck. The two remaining populations are now geographically isolated and face an unstable future with limited remaining opportunity for gene flow.
Genetic bottlenecks exist in cheetahs.
Selective breeding
Bottlenecks also exist among pure-bred animals (e.g., dogs and cats: pugs, Persians) because breeders limit their gene pools to a few (show-winning) individuals selected for their looks and behaviors. The extensive use of desirable individual animals to the exclusion of others can result in a popular sire effect.
Selective breeding for dog breeds caused constricting breed-specific bottlenecks. These bottlenecks have led to dogs having an average of 2–3% more genetic loading than gray wolves. The strict breeding programs and population bottlenecks have led to the prevalence of diseases such as heart disease, blindness, cancers, hip dysplasia, and cataracts.
Selective breeding to produce high-yielding crops has caused genetic bottlenecks in these crops and has led to genetic homogeneity. This reduced genetic diversity in many crops could lead to broader susceptibility to new diseases or pests, which threatens global food security.
Plants
Research has shown that there is an incredibly low, nearly undetectable amount of genetic diversity in the genome of the Wollemi pine (Wollemia nobilis). The IUCN found a population count of 80 mature individuals and about 300 seedlings and juveniles in 2011; previously, the Wollemi pine had fewer than 50 individuals in the wild. The low population size and low genetic diversity indicate that the Wollemi pine went through a severe population bottleneck.
A population bottleneck was created in the 1970s through the conservation efforts for the endangered Mauna Kea silversword (Argyroxiphium sandwicense ssp. sandwicense). The small natural population of silversword was augmented through the 1970s with outplanted individuals. All of the outplanted silversword plants were found to be first- or subsequent-generation offspring of just two maternal founders. The small number of polymorphic loci in the outplanted individuals led to the population bottleneck, causing the loss of marker alleles at eight of the loci.
See also
Baby boom
Population boom
References
External links
Northern Elephant Seal History
Population dynamics
Population genetics
Human evolution